On 12/11/2012 11:09 PM, Cristian Falcas wrote:
<iheim@redhat.com> wrote:
On 12/09/2012 11:22 PM, Cristian Falcas wrote:
On Sun, Dec 9, 2012 at 5:37 PM, Roy Golan
<rgolan@redhat.com> wrote:
On 12/09/2012 05:17 PM, Cristian Falcas wrote:
Hi,
I get this error with the nightly builds when I
start a VM:
Please paste the output of the following command to
see if
Opteron_G3 is really supported:
vdsClient 0 -s getVdsCaps
libvirtError: internal error Cannot find suitable
CPU model for
given data
Log data:
Thread-654::DEBUG::2012-12-09
17:14:18,120::libvirtvm::1485::vm.Vm::(_run)
vmId=`a4a8f349-7fdf-42f4-873e-e70f692c6ca2`::<?xml
version="1.0"
encoding="utf-8"?>
<domain type="kvm">
<name>q</name>
<uuid>a4a8f349-7fdf-42f4-873e-e70f692c6ca2</uuid>
<memory>524288</memory>
<currentMemory>524288</currentMemory>
<vcpu>2</vcpu>
<devices>
<channel type="unix">
<target
name="com.redhat.rhevm.vdsm"
type="virtio"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/q.com.redhat.rhevm.vdsm"/>
</channel>
<channel type="unix">
<target
name="org.qemu.guest_agent.0"
type="virtio"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/q.org.qemu.guest_agent.0"/>
</channel>
<input bus="ps2" type="mouse"/>
<channel type="spicevmc">
<target
name="com.redhat.spice.0"
type="virtio"/>
</channel>
<graphics autoport="yes"
keymap="en-us" listen="0"
passwd="*****" passwdValidTo="1970-01-01T00:00:01"
port="-1"
tlsPort="-1" type="spice">
<channel mode="secure"
name="main"/>
<channel mode="secure"
name="inputs"/>
<channel mode="secure"
name="cursor"/>
<channel mode="secure"
name="playback"/>
<channel mode="secure"
name="record"/>
<channel mode="secure"
name="display"/>
<channel mode="secure"
name="usbredir"/>
<channel mode="secure"
name="smartcard"/>
</graphics>
<console type="pty">
<target port="0"
type="virtio"/>
</console>
<sound model="ac97"/>
<video>
<model heads="1"
type="qxl" vram="65536"/>
</video>
<interface type="bridge">
<mac
address="00:1a:4a:6f:6f:f4"/>
<model type="virtio"/>
<source bridge="ovirtmgmt"/>
<filterref
filter="vdsm-no-mac-spoofing"/>
</interface>
<memballoon model="virtio"/>
<disk device="cdrom" snapshot="no"
type="file">
<source
file="/rhev/data-center/21ddcd50-aba8-461a-9ecf-c5762af89355/4f6a2b90-9c70-45e5-8b17-5274ee97ce73/images/11111111-1111-1111-1111-111111111111/CentOS-6.3-x86_64-bin-DVD1.iso"
startupPolicy="optional"/>
<target bus="ide" dev="hdc"/>
<readonly/>
<serial></serial>
<boot order="1"/>
</disk>
<disk device="disk" snapshot="no"
type="file">
<source
file="/rhev/data-center/21ddcd50-aba8-461a-9ecf-c5762af89355/81361e6d-2b58-4781-80c2-d908a0fe91cd/images/ffa8728f-6f0c-4b59-99ac-5bef0bd7634e/80a8701a-bf07-4d8a-8d02-8f98e6bb46a1"/>
<target bus="virtio"
dev="vda"/>
<serial>ffa8728f-6f0c-4b59-99ac-5bef0bd7634e</serial>
<driver cache="none"
error_policy="stop"
io="threads" name="qemu" type="raw"/>
</disk>
</devices>
<os>
<type arch="x86_64"
machine="pc-0.14">hvm</type>
<smbios mode="sysinfo"/>
</os>
<sysinfo type="smbios">
<system>
<entry
name="manufacturer">oVirt</entry>
<entry
name="product">oVirt Node</entry>
<entry
name="version">17-1</entry>
<entry
name="serial">30303146-4430-3946-3139-3938FFFFFFFF</entry>
<entry
name="uuid">a4a8f349-7fdf-42f4-873e-e70f692c6ca2</entry>
</system>
</sysinfo>
<clock adjustment="-43200" offset="variable">
<timer name="rtc"
tickpolicy="catchup"/>
</clock>
<features>
<acpi/>
</features>
<cpu match="exact">
<model>Opteron_G3</model>
<topology cores="1" sockets="2"
threads="1"/>
</cpu>
</domain>
Thread-654::DEBUG::2012-12-09
17:14:18,152::vm::672::vm.Vm::(_startUnderlyingVm)
vmId=`a4a8f349-7fdf-42f4-873e-e70f692c6ca2`::_ongoingCreations
released
Thread-654::ERROR::2012-12-09
17:14:18,152::vm::696::vm.Vm::(_startUnderlyingVm)
vmId=`a4a8f349-7fdf-42f4-873e-e70f692c6ca2`::The
vm start process
failed
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 658, in
_startUnderlyingVm
self._run()
File "/usr/share/vdsm/libvirtvm.py", line 1511,
in _run
self._connection.createXML(domxml, flags),
File
"/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py",
line 111, in wrapper
ret = f(*args, **kwargs)
File
"/usr/lib64/python2.7/site-packages/libvirt.py", line
2633,
in createXML
if ret is None:raise
libvirtError('virDomainCreateXML()
failed', conn=self)
libvirtError: internal error Cannot find suitable
CPU model for
given data
Thread-654::DEBUG::2012-12-09
17:14:18,156::vm::1045::vm.Vm::(setDownStatus)
vmId=`a4a8f349-7fdf-42f4-873e-e70f692c6ca2`::Changed
state to
Down: internal error Cannot find suitable CPU
model for given data
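For what it's worth, this failure can be narrowed down before libvirt is even involved by checking the <cpu><model> in the generated domain XML against the model_* entries vdsm reports in cpuFlags. A minimal sketch (not vdsm code; the flags string is abbreviated from the getVdsCaps output later in this thread):

```python
import xml.etree.ElementTree as ET

def requested_cpu_model(domain_xml):
    """Extract the <cpu><model> text from a libvirt domain XML string."""
    model = ET.fromstring(domain_xml).find("./cpu/model")
    return model.text if model is not None else None

def host_supported_models(cpu_flags):
    """getVdsCaps lists emulatable models as model_* entries in cpuFlags."""
    return {f[len("model_"):] for f in cpu_flags.split(",")
            if f.startswith("model_")}

# Abbreviated from the cpuFlags line in the getVdsCaps output below.
flags = ("svm,nx,model_athlon,model_Opteron_G3,"
         "model_Opteron_G1,model_phenom,model_Opteron_G2")
domxml = ('<domain type="kvm"><cpu match="exact">'
          '<model>Opteron_G3</model></cpu></domain>')

print(requested_cpu_model(domxml))                                  # Opteron_G3
print(requested_cpu_model(domxml) in host_supported_models(flags))  # True
```

If the requested model is in the model_* set (as it is here), the mismatch has to be between what vdsm advertises and what libvirt/qemu actually accept at createXML time.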
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
The command
vdsClient 0 -s getVdsCaps
didn't work
Is this the same thing?
[root@localhost cristi]# vdsClient -s localhost getVdsCaps
HBAInventory = {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:74bc7d9e76f3'}], 'FC': []}
ISCSIInitiatorName =
iqn.1994-05.com.redhat:74bc7d9e76f3
bondings = {'bond4': {'addr': '', 'cfg': {},
'mtu': '1500',
'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}, 'bond0':
{'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '',
'slaves': [],
'hwaddr': '00:00:00:00:00:00'}}
bridges = {'ovirtmgmt': {'addr': '10.20.20.20',
'cfg': {'VLAN':
'yes', 'IPV6INIT': 'no', 'IPADDR0': '10.20.20.20', 'DNS1':
'8.8.8.8',
'PREFIX0': '24', 'DEFROUTE': 'yes', 'IPV4_FAILURE_FATAL':
'no', 'DELAY':
'0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'GATEWAY0':
'10.20.20.1', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge',
'ONBOOT': 'yes',
'UUID': 'd0d0c1c1-6625-4be3-ab1e-d879c870d577'}, 'mtu':
'1500',
'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p10p1']}}
clusterLevels = ['3.0', '3.1', '3.2']
cpuCores = 4
cpuFlags =
fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2
cpuModel = AMD Phenom(tm) II X4 965 Processor
cpuSockets = 1
cpuSpeed = 3400.000
emulatedMachines = ['pc-1.3', 'none', 'pc',
'pc-1.2', 'pc-1.1',
'pc-1.0', 'pc-0.15', 'pc-0.14', 'pc-0.13', 'pc-0.12',
'pc-0.11',
'pc-0.10', 'isapc', 'q35', 'q35-next', 'pc-1.3', 'none',
'pc', 'pc-1.2',
'pc-1.1', 'pc-1.0', 'pc-0.15', 'pc-0.14', 'pc-0.13',
'pc-0.12',
'pc-0.11', 'pc-0.10', 'isapc', 'q35', 'q35-next']
guestOverhead = 65
hooks = {'before_vm_start': {'50_floppy': {'md5':
'882a5b1a0d303102765d07c7ff6ae0fc'}, '50_hostusb': {'md5':
'06110c7365ff3b71d40bfc9bad2adc05'},
'50_isolatedprivatevlan': {'md5':
'acc557aadb52442bd43bfa2f7a2b42a1'}},
'before_vm_migrate_source':
{'50_floppy': {'md5': '388f180317bd8a121a512a8b26af6bf5'},
'50_hostusb':
{'md5': 'e50a0975acd8bafd0052b65a787c2fdd'}},
'after_vm_destroy':
{'50_hostusb': {'md5': 'cbf5ef4aed1e3cbe3ef9e30694ea6607'}}}
kvmEnabled = true
lastClient = 0.0.0.0
lastClientIface = ppp0
management_ip =
memSize = 5968
netConfigDirty = False
networks = {'ovirtmgmt': {'iface': 'ovirtmgmt',
'addr':
'10.20.20.20', 'cfg': {'VLAN': 'yes', 'IPV6INIT': 'no',
'IPADDR0':
'10.20.20.20', 'DNS1': '8.8.8.8', 'PREFIX0': '24',
'DEFROUTE': 'yes',
'IPV4_FAILURE_FATAL': 'no', 'DELAY': '0', 'NM_CONTROLLED':
'no',
'BOOTPROTO': 'none', 'GATEWAY0': '10.20.20.1', 'DEVICE':
'ovirtmgmt',
'TYPE': 'Bridge', 'ONBOOT': 'yes', 'UUID':
'd0d0c1c1-6625-4be3-ab1e-d879c870d577'}, 'mtu': '1500',
'netmask':
'255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway':
'0.0.0.0',
'ports': ['p10p1']}}
nics = {'p10p1': {'addr': '', 'cfg': {'BRIDGE':
'ovirtmgmt',
'DEVICE': 'p10p1', 'IPV6INIT': 'no', 'IPADDR0':
'10.20.20.20', 'DNS1':
'8.8.8.8', 'PREFIX0': '24', 'DEFROUTE': 'yes',
'IPV4_FAILURE_FATAL':
'no', 'NM_CONTROLLED': 'no', 'GATEWAY0': '10.20.20.1',
'HWADDR':
'00:1f:d0:9f:19:98', 'ONBOOT': 'yes', 'UUID':
'd0d0c1c1-6625-4be3-ab1e-d879c870d577'}, 'mtu': '1500',
'netmask': '',
'hwaddr': '00:1f:d0:9f:19:98', 'speed': 100}}
operatingSystem = {'release': '1', 'version':
'17', 'name':
'Fedora'}
packages2 = {'kernel': {'release': '2.fc17.x86_64',
'buildtime': 1354620364.0, 'version': '3.6.9'},
'spice-server':
{'release': '1.fc17', 'buildtime': 1348891802, 'version':
'0.12.0'},
'vdsm': {'release': '0.89.git922366a.fc17', 'buildtime':
1354997600,
'version': '4.10.2'}, 'qemu-kvm': {'release': '1.fc17',
'buildtime':
1355029612, 'version': '1.3.0'}, 'libvirt': {'release':
'1.fc17',
'buildtime': 1352437629, 'version': '1.0.0'}, 'qemu-img':
{'release':
'1.fc17', 'buildtime': 1355029612, 'version': '1.3.0'}, 'mom':
{'release': '1.fc17', 'buildtime': 1354651873, 'version':
'0.3.0'}}
reservedMem = 321
software_revision = 0.89
software_version = 4.10
supportedENGINEs = ['3.0', '3.1']
supportedProtocols = ['2.2', '2.3']
uuid = 30303146-4430-3946-3139-3938FFFFFFFF
version_name = Snow Man
vlans = {}
vmTypes = ['kvm']
According to vdsClient -s 0 getVdsCaps, vdsm does report
Opteron_G3. The nx flag must be enabled (and it seems it is).
vdsm/oVirt doesn't check for the aes flag.
Waiting to hear if the libvirt downgrade helped.
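Whether nx is enabled can be read straight from /proc/cpuinfo; a quick sketch (checking a sample flags line here rather than the live file):

```python
def has_flag(cpuinfo_text, flag):
    """Return True if `flag` appears on any 'flags' line of
    /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            if flag in line.split(":", 1)[1].split():
                return True
    return False

# Sample line; on a live host use open("/proc/cpuinfo").read() instead.
sample = "flags\t\t: fpu vme de pse nx svm sse4a"
print(has_flag(sample, "nx"))   # True
print(has_flag(sample, "aes"))  # False
```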
It's the same problem with libvirt-0.10.2 also.
I checked the flags from /proc/cpuinfo and the ones from vdsClient
-s 0 getVdsCaps.
The only difference is that vdsClient has those extra flags:
model_Opteron_G1
model_Opteron_G2
model_Opteron_G3
model_athlon
model_phenom
I don't know if they should appear in the cpuFlags section; in
other outputs I found online there were no model_* entries.
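The comparison described above can be done mechanically. As I understand it, the model_* entries are synthesized by vdsm from libvirt's CPU map rather than read from /proc/cpuinfo, so they are expected to show up only on the vdsClient side. A sketch with abbreviated flag lists:

```python
# Abbreviated flag lists; the real ones come from /proc/cpuinfo and
# from the cpuFlags line of `vdsClient -s 0 getVdsCaps`.
cpuinfo_flags = set("fpu,vme,nx,svm,sse4a".split(","))
vdscaps_flags = set(
    "fpu,vme,nx,svm,sse4a,model_athlon,model_Opteron_G1,"
    "model_Opteron_G2,model_Opteron_G3,model_phenom".split(","))

extra = vdscaps_flags - cpuinfo_flags
print(sorted(extra))

# Every extra entry is a model_* flag, i.e. a CPU model vdsm
# believes the host can emulate - so the difference is expected.
assert all(f.startswith("model_") for f in extra)
```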
The output is valid - a model_Opteron_G3 entry says your server can
emulate G3. We need the libvirt.log for more info; it's also worth
checking the link ovedo previously pasted.
Best regards,
Cristian Falcas
Hi,
I didn't find any option in the BIOS to disable nx, but it worked a few
versions earlier.
I attached all the logs from vdsm+libvirt.
Best regards,
Cristian Falcas
Just to be clear - you shouldn't disable nx in the BIOS.