[Users] Cannot find suitable CPU model for given data
Cristian Falcas
cristi.falcas at gmail.com
Wed Dec 12 20:39:46 UTC 2012
On Wed, Dec 12, 2012 at 8:04 PM, Cristian Falcas <cristi.falcas at gmail.com> wrote:
>
>
>
> On Tue, Dec 11, 2012 at 11:17 PM, Itamar Heim <iheim at redhat.com> wrote:
>
>> On 12/11/2012 11:09 PM, Cristian Falcas wrote:
>>
>>>
>>>
>>>
>>> On Mon, Dec 10, 2012 at 2:22 PM, Roy Golan <rgolan at redhat.com> wrote:
>>>
>>> On 12/10/2012 02:04 PM, Cristian Falcas wrote:
>>>
>>>>
>>>>
>>>>
>>>> On Mon, Dec 10, 2012 at 9:30 AM, Itamar Heim <iheim at redhat.com> wrote:
>>>>
>>>> On 12/09/2012 11:22 PM, Cristian Falcas wrote:
>>>>
>>>>
>>>>
>>>>
>>>> On Sun, Dec 9, 2012 at 5:37 PM, Roy Golan <rgolan at redhat.com> wrote:
>>>>
>>>> On 12/09/2012 05:17 PM, Cristian Falcas wrote:
>>>>
>>>> Hi,
>>>>
>>>> I get this error with the nightly builds when I
>>>> start a VM:
>>>>
>>>>
>>>> please paste the output of the following command to
>>>> see if
>>>> Opteron_G3 is really supported:
>>>> vdsClient 0 -s getVdsCaps
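>>>>
>>>> (to pull out just the model flags, something like this should work:
>>>> vdsClient -s 0 getVdsCaps | grep -o 'model_[A-Za-z0-9_]*')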
>>>>
>>>>
>>>> libvirtError: internal error Cannot find suitable
>>>> CPU model for
>>>> given data
>>>>
>>>>
>>>> Log data:
>>>> Thread-654::DEBUG::2012-12-09 17:14:18,120::libvirtvm::1485::vm.Vm::(_run) vmId=`a4a8f349-7fdf-42f4-873e-e70f692c6ca2`::<?xml version="1.0" encoding="utf-8"?>
>>>> <domain type="kvm">
>>>>     <name>q</name>
>>>>     <uuid>a4a8f349-7fdf-42f4-873e-e70f692c6ca2</uuid>
>>>>     <memory>524288</memory>
>>>>     <currentMemory>524288</currentMemory>
>>>>     <vcpu>2</vcpu>
>>>>     <devices>
>>>>         <channel type="unix">
>>>>             <target name="com.redhat.rhevm.vdsm" type="virtio"/>
>>>>             <source mode="bind" path="/var/lib/libvirt/qemu/channels/q.com.redhat.rhevm.vdsm"/>
>>>>         </channel>
>>>>         <channel type="unix">
>>>>             <target name="org.qemu.guest_agent.0" type="virtio"/>
>>>>             <source mode="bind" path="/var/lib/libvirt/qemu/channels/q.org.qemu.guest_agent.0"/>
>>>>         </channel>
>>>>         <input bus="ps2" type="mouse"/>
>>>>         <channel type="spicevmc">
>>>>             <target name="com.redhat.spice.0" type="virtio"/>
>>>>         </channel>
>>>>         <graphics autoport="yes" keymap="en-us" listen="0" passwd="*****" passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1" type="spice">
>>>>             <channel mode="secure" name="main"/>
>>>>             <channel mode="secure" name="inputs"/>
>>>>             <channel mode="secure" name="cursor"/>
>>>>             <channel mode="secure" name="playback"/>
>>>>             <channel mode="secure" name="record"/>
>>>>             <channel mode="secure" name="display"/>
>>>>             <channel mode="secure" name="usbredir"/>
>>>>             <channel mode="secure" name="smartcard"/>
>>>>         </graphics>
>>>>         <console type="pty">
>>>>             <target port="0" type="virtio"/>
>>>>         </console>
>>>>         <sound model="ac97"/>
>>>>         <video>
>>>>             <model heads="1" type="qxl" vram="65536"/>
>>>>         </video>
>>>>         <interface type="bridge">
>>>>             <mac address="00:1a:4a:6f:6f:f4"/>
>>>>             <model type="virtio"/>
>>>>             <source bridge="ovirtmgmt"/>
>>>>             <filterref filter="vdsm-no-mac-spoofing"/>
>>>>         </interface>
>>>>         <memballoon model="virtio"/>
>>>>         <disk device="cdrom" snapshot="no" type="file">
>>>>             <source file="/rhev/data-center/21ddcd50-aba8-461a-9ecf-c5762af89355/4f6a2b90-9c70-45e5-8b17-5274ee97ce73/images/11111111-1111-1111-1111-111111111111/CentOS-6.3-x86_64-bin-DVD1.iso" startupPolicy="optional"/>
>>>>             <target bus="ide" dev="hdc"/>
>>>>             <readonly/>
>>>>             <serial></serial>
>>>>             <boot order="1"/>
>>>>         </disk>
>>>>         <disk device="disk" snapshot="no" type="file">
>>>>             <source file="/rhev/data-center/21ddcd50-aba8-461a-9ecf-c5762af89355/81361e6d-2b58-4781-80c2-d908a0fe91cd/images/ffa8728f-6f0c-4b59-99ac-5bef0bd7634e/80a8701a-bf07-4d8a-8d02-8f98e6bb46a1"/>
>>>>             <target bus="virtio" dev="vda"/>
>>>>             <serial>ffa8728f-6f0c-4b59-99ac-5bef0bd7634e</serial>
>>>>             <driver cache="none" error_policy="stop" io="threads" name="qemu" type="raw"/>
>>>>         </disk>
>>>>     </devices>
>>>>     <os>
>>>>         <type arch="x86_64" machine="pc-0.14">hvm</type>
>>>>         <smbios mode="sysinfo"/>
>>>>     </os>
>>>>     <sysinfo type="smbios">
>>>>         <system>
>>>>             <entry name="manufacturer">oVirt</entry>
>>>>             <entry name="product">oVirt Node</entry>
>>>>             <entry name="version">17-1</entry>
>>>>             <entry name="serial">30303146-4430-3946-3139-3938FFFFFFFF</entry>
>>>>             <entry name="uuid">a4a8f349-7fdf-42f4-873e-e70f692c6ca2</entry>
>>>>         </system>
>>>>     </sysinfo>
>>>>     <clock adjustment="-43200" offset="variable">
>>>>         <timer name="rtc" tickpolicy="catchup"/>
>>>>     </clock>
>>>>     <features>
>>>>         <acpi/>
>>>>     </features>
>>>>     <cpu match="exact">
>>>>         <model>Opteron_G3</model>
>>>>         <topology cores="1" sockets="2" threads="1"/>
>>>>     </cpu>
>>>> </domain>
>>>>
>>>> Thread-654::DEBUG::2012-12-09 17:14:18,152::vm::672::vm.Vm::(_startUnderlyingVm) vmId=`a4a8f349-7fdf-42f4-873e-e70f692c6ca2`::_ongoingCreations released
>>>> Thread-654::ERROR::2012-12-09 17:14:18,152::vm::696::vm.Vm::(_startUnderlyingVm) vmId=`a4a8f349-7fdf-42f4-873e-e70f692c6ca2`::The vm start process failed
>>>> Traceback (most recent call last):
>>>>   File "/usr/share/vdsm/vm.py", line 658, in _startUnderlyingVm
>>>>     self._run()
>>>>   File "/usr/share/vdsm/libvirtvm.py", line 1511, in _run
>>>>     self._connection.createXML(domxml, flags),
>>>>   File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 111, in wrapper
>>>>     ret = f(*args, **kwargs)
>>>>   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2633, in createXML
>>>>     if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
>>>> libvirtError: internal error Cannot find suitable CPU model for given data
>>>> Thread-654::DEBUG::2012-12-09 17:14:18,156::vm::1045::vm.Vm::(setDownStatus) vmId=`a4a8f349-7fdf-42f4-873e-e70f692c6ca2`::Changed state to Down: internal error Cannot find suitable CPU model for given data
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> The command
>>>> vdsClient 0 -s getVdsCaps
>>>> didn't work (presumably because -s has to come before the host argument).
>>>>
>>>> Is this the same thing?
>>>>
>>>> [root at localhost cristi]# vdsClient -s localhost getVdsCaps
>>>> HBAInventory = {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:74bc7d9e76f3'}], 'FC': []}
>>>> ISCSIInitiatorName = iqn.1994-05.com.redhat:74bc7d9e76f3
>>>> bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}
>>>> bridges = {'ovirtmgmt': {'addr': '10.20.20.20', 'cfg': {'VLAN': 'yes', 'IPV6INIT': 'no', 'IPADDR0': '10.20.20.20', 'DNS1': '8.8.8.8', 'PREFIX0': '24', 'DEFROUTE': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'GATEWAY0': '10.20.20.1', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes', 'UUID': 'd0d0c1c1-6625-4be3-ab1e-d879c870d577'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'ports': ['p10p1']}}
>>>> clusterLevels = ['3.0', '3.1', '3.2']
>>>> cpuCores = 4
>>>> cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,hw_pstate,npt,lbrv,svm_lock,nrip_save,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2
>>>> cpuModel = AMD Phenom(tm) II X4 965 Processor
>>>> cpuSockets = 1
>>>> cpuSpeed = 3400.000
>>>> emulatedMachines = ['pc-1.3', 'none', 'pc', 'pc-1.2', 'pc-1.1', 'pc-1.0', 'pc-0.15', 'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc', 'q35', 'q35-next', 'pc-1.3', 'none', 'pc', 'pc-1.2', 'pc-1.1', 'pc-1.0', 'pc-0.15', 'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc', 'q35', 'q35-next']
>>>> guestOverhead = 65
>>>> hooks = {'before_vm_start': {'50_floppy': {'md5': '882a5b1a0d303102765d07c7ff6ae0fc'}, '50_hostusb': {'md5': '06110c7365ff3b71d40bfc9bad2adc05'}, '50_isolatedprivatevlan': {'md5': 'acc557aadb52442bd43bfa2f7a2b42a1'}}, 'before_vm_migrate_source': {'50_floppy': {'md5': '388f180317bd8a121a512a8b26af6bf5'}, '50_hostusb': {'md5': 'e50a0975acd8bafd0052b65a787c2fdd'}}, 'after_vm_destroy': {'50_hostusb': {'md5': 'cbf5ef4aed1e3cbe3ef9e30694ea6607'}}}
>>>> kvmEnabled = true
>>>> lastClient = 0.0.0.0
>>>> lastClientIface = ppp0
>>>> management_ip =
>>>> memSize = 5968
>>>> netConfigDirty = False
>>>> networks = {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr': '10.20.20.20', 'cfg': {'VLAN': 'yes', 'IPV6INIT': 'no', 'IPADDR0': '10.20.20.20', 'DNS1': '8.8.8.8', 'PREFIX0': '24', 'DEFROUTE': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'GATEWAY0': '10.20.20.1', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes', 'UUID': 'd0d0c1c1-6625-4be3-ab1e-d879c870d577'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off', 'bridged': True, 'gateway': '0.0.0.0', 'ports': ['p10p1']}}
>>>> nics = {'p10p1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'DEVICE': 'p10p1', 'IPV6INIT': 'no', 'IPADDR0': '10.20.20.20', 'DNS1': '8.8.8.8', 'PREFIX0': '24', 'DEFROUTE': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'NM_CONTROLLED': 'no', 'GATEWAY0': '10.20.20.1', 'HWADDR': '00:1f:d0:9f:19:98', 'ONBOOT': 'yes', 'UUID': 'd0d0c1c1-6625-4be3-ab1e-d879c870d577'}, 'mtu': '1500', 'netmask': '', 'hwaddr': '00:1f:d0:9f:19:98', 'speed': 100}}
>>>> operatingSystem = {'release': '1', 'version': '17', 'name': 'Fedora'}
>>>> packages2 = {'kernel': {'release': '2.fc17.x86_64', 'buildtime': 1354620364.0, 'version': '3.6.9'}, 'spice-server': {'release': '1.fc17', 'buildtime': 1348891802, 'version': '0.12.0'}, 'vdsm': {'release': '0.89.git922366a.fc17', 'buildtime': 1354997600, 'version': '4.10.2'}, 'qemu-kvm': {'release': '1.fc17', 'buildtime': 1355029612, 'version': '1.3.0'}, 'libvirt': {'release': '1.fc17', 'buildtime': 1352437629, 'version': '1.0.0'}, 'qemu-img': {'release': '1.fc17', 'buildtime': 1355029612, 'version': '1.3.0'}, 'mom': {'release': '1.fc17', 'buildtime': 1354651873, 'version': '0.3.0'}}
>>>> reservedMem = 321
>>>> software_revision = 0.89
>>>> software_version = 4.10
>>>> supportedENGINEs = ['3.0', '3.1']
>>>> supportedProtocols = ['2.2', '2.3']
>>>> uuid = 30303146-4430-3946-3139-3938FFFFFFFF
>>>> version_name = Snow Man
>>>> vlans = {}
>>>> vmTypes = ['kvm']
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> According to vdsClient -s 0 getVdsCaps, vdsm does report Opteron G3. The nx flag must be enabled (and it seems it is). vdsm/ovirt doesn't check for the aes flag.
>>>> Waiting to hear if the libvirt downgrade helped.
>>>>
>>>>
>>>>
>>>> It's the same problem with libvirt-0.10.2 also.
>>>>
>>>>
>>>> I checked the flags from /proc/cpuinfo against the ones from vdsClient
>>>> -s 0 getVdsCaps.
>>>>
>>>> The only difference is that vdsClient has these extra flags:
>>>> model_Opteron_G1
>>>> model_Opteron_G2
>>>> model_Opteron_G3
>>>> model_athlon
>>>> model_phenom
>>>>
>>>> I don't know if they should appear in the cpuFlags section. In other
>>>> outputs I found on the internet, there were no model_* flags.
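>>>>
>>>> For reference, the two lists can be compared with something like the
>>>> following (a rough sketch; the pipeline is just one way to do it):
>>>>
>>>> grep '^flags' /proc/cpuinfo | head -1 | sed 's/^flags[^:]*: //' | tr ' ' '\n' | sort -u > /tmp/cpuinfo_flags
>>>> vdsClient -s 0 getVdsCaps | grep cpuFlags | sed 's/.*= //' | tr ',' '\n' | sort -u > /tmp/vds_flags
>>>> comm -13 /tmp/cpuinfo_flags /tmp/vds_flags    # flags only vdsm reports (the model_* entries)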
>>>>
>>> The output is valid - model_Opteron_G3 means your server can emulate G3.
>>> We need the libvirt.log for more info, and it's also worth checking the
>>> link ovedo pasted earlier.
>>>
>>>>
>>>>
>>>> Best regards,
>>>> Cristian Falcas
>>>>
>>>
>>> Hi,
>>>
>>> I didn't find any option in the BIOS to disable nx. But it worked a few
>>> versions earlier.
>>>
>>> I attached all the logs from vdsm+libvirt.
>>>
>>> Best regards,
>>> Cristian Falcas
>>>
>>>
>>>
>> Just to be clear - you shouldn't disable nx in the BIOS.
>>
>>
>
> Hi,
>
> It's working now with the latest updates to vdsm nightly.
>
> But spice is not working now (I can still use VNC).
>
> libvirt log:
>
> ((null):9248): Spice-Warning **: reds.c:3307:reds_init_ssl: Could not load certificates from /etc/pki/vdsm/libvirt-spice/server-cert.pem
> ((null):9248): Spice-Warning **: reds.c:3317:reds_init_ssl: Could not use private key file
> ((null):9248): Spice-Warning **: reds.c:3325:reds_init_ssl: Could not use CA file /etc/pki/vdsm/libvirt-spice/ca-cert.pem
>
> [root at localhost Ovirt]# ls -la /etc/pki/vdsm/libvirt-spice/server-cert.pem
> ls: cannot access /etc/pki/vdsm/libvirt-spice/server-cert.pem: No such file or directory
> [root at localhost Ovirt]# ls -la /etc/pki/vdsm/libvirt-spice/ca-cert.pem
> ls: cannot access /etc/pki/vdsm/libvirt-spice/ca-cert.pem: No such file or directory
>
>
> Spice log:
>
> 1355334879 INFO [8950:8950] Application::main: starting 0.12.0
> 1355334879 INFO [8950:8950] Application::main: command line: spicec --controller
> 1355334879 INFO [8950:8950] init_key_map: using evdev mapping
> 1355334879 INFO [8950:8950] MultyMonScreen::MultyMonScreen: platform_win: 77594625
> 1355334879 INFO [8950:8950] GUI::GUI:
> 1355334879 INFO [8950:8950] ForeignMenu::ForeignMenu: Creating a foreign menu connection /tmp/SpiceForeignMenu-8950.uds
> 1355334879 INFO [8950:8950] Controller::Controller: Creating a controller connection /tmp/spicec-9GS5mA/spice-xpi
> 1355334882 INFO [8950:8952] RedPeer::connect_secure: Connected to cristifalcas.no-ip.org 5902
> 1355334882 ERROR [8950:8952] RedPeer::connect_secure: failed to connect w/SSL, ssl_error error:00000001:lib(0):func(0):reason(1)
> 1355334882 WARN [8950:8952] RedChannel::run: SSL Error: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
> 1355334882 INFO [8950:8950] main: Spice client terminated (exitcode = 7)
>
>
>
>
> I've done this without any improvement:
>
> [root at localhost Ovirt]# /lib/systemd/systemd-vdsmd reconfigure
> Configuring libvirt for vdsm...
> [root at localhost Ovirt]# systemctl restart libvirtd.service vdsmd.service
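>
> (The quickest check for whether the reconfigure regenerated the spice certs
> should be something like:
> ls -la /etc/pki/vdsm/libvirt-spice/
> looking for ca-cert.pem, server-cert.pem and - presumably - a server key file.)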
>
>
> Best regards,
>
>
Hi,

I don't know if I should start a new thread for the spice problems. Here is some progress:

I created the certificates as per https://gist.github.com/1655511 and copied the public one to my home directory:

cp /etc/pki/vdsm/libvirt-spice/ca-cert.pem ~cristi/.spice/spice_truststore.pem
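
For anyone hitting this later, the gist boils down to roughly the following (a minimal sketch, not necessarily the gist's exact commands - key sizes, validity and the subject CNs here are assumptions):

cd /etc/pki/vdsm/libvirt-spice
# a self-signed CA
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 365 -key ca-key.pem -out ca-cert.pem -subj "/CN=my-spice-CA"
# a server key plus a certificate signed by that CA; the CN should match the hostname clients connect to
openssl genrsa -out server-key.pem 2048
openssl req -new -key server-key.pem -out server.csr -subj "/CN=cristifalcas.no-ip.org"
openssl x509 -req -days 365 -in server.csr -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem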

I had the same problem as in https://bugzilla.redhat.com/show_bug.cgi?id=880182 . For this I needed to downgrade libcacard twice (until I had the same version as in the bug report).
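
Roughly, since each run of "yum downgrade" steps back one version (assuming the older builds are still available in the repositories), that was:

yum downgrade libcacard
yum downgrade libcacard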

Now spice works with virt-manager.

Can someone tell me where I need to copy the certificate on the oVirt side, in order to make spice work there as well?

Best regards,
Cristian Falcas