[ovirt-users] numa error after upgrading from 3.5rc2 to 3.5rc3
Gianluca Cecchi
gianluca.cecchi at gmail.com
Wed Sep 24 20:05:29 EDT 2014
On Thu, Sep 25, 2014 at 1:58 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com>
wrote:
> Hello,
> after upgrading engine and hypervisor to rc3 I get this message when
> trying to start VM:
>
> VM vm_name is down with error. Exit message: internal error internal error
> NUMA memory tuning in 'preferred' mode only supports single node.
>
> hypervisor is Intel blade MFS5520VI with
>
> processor : 15
> vendor_id : GenuineIntel
> cpu family : 6
> model : 26
> model name : Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
> stepping : 5
> cpu MHz : 2927.000
> cache size : 8192 KB
> physical id : 1
> siblings : 8
> core id : 3
> cpu cores : 4
> apicid : 23
> initial apicid : 23
> fpu : yes
> fpu_exception : yes
> cpuid level : 11
> wp : yes
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
> cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc
> aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca
> sse4_1 sse4_2 popcnt lahf_lm ida dts tpr_shadow vnmi flexpriority ept vpid
> bogomips : 5851.73
> clflush size : 64
> cache_alignment : 64
> address sizes : 40 bits physical, 48 bits virtual
> power management:
>
> Did anything change from rc2 to rc3 regarding this?
>
> Gianluca
>
If it can be of any help, both engine and node are CentOS 6.5, and on the node I
have this output from the numactl command:
[root@ovnode04 vdsm]# numactl --show
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
cpubind: 0 1
nodebind: 0 1
membind: 0 1
[root@ovnode04 vdsm]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14
node 0 size: 8128 MB
node 0 free: 7567 MB
node 1 cpus: 1 3 5 7 9 11 13 15
node 1 size: 8192 MB
node 1 free: 7747 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
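So the host has two NUMA nodes. If I read the libvirt error correctly, the
'preferred' memory mode only accepts a single node in the nodeset, so a
numatune element along these lines (just a minimal sketch based on the libvirt
domain XML format, not the actual XML vdsm generates here) would be rejected,
while restricting the nodeset to one node should be accepted:

   <!-- rejected: two nodes with mode='preferred' -->
   <numatune>
     <memory mode='preferred' nodeset='0-1'/>
   </numatune>

   <!-- accepted: a single node -->
   <numatune>
     <memory mode='preferred' nodeset='0'/>
   </numatune>
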
In engine.log:
2014-09-25 01:58:18,050 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(org.ovirt.thread.pool-8-thread-16) [21714fa2]
org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand
kvmEnable=true,keyboardLayout=en-us,nice=0,pitReinjection=false,displayNetwork=ovirtmgmt,copyPasteEnable=true,timeOffset=7200,transparentHugePages=true,vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560,acpiEnable=true,custom={device_fc07dfb4-c33c-4302-915f-fcafcbaf201fdevice_814694f2-e3d9-4808-87d1-1576acbb6b39device_8e5365f1-1027-498e-9453-0289278434d4device_ef937c2a-c7ca-4405-98d8-72387b983ed7=VmDevice
{vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560,
deviceId=ef937c2a-c7ca-4405-98d8-72387b983ed7, device=spicevmc,
type=CHANNEL, bootOrder=0, specParams={}, address={bus=0, controller=0,
type=virtio-serial, port=3}, managed=false, plugged=true, readOnly=false,
deviceAlias=channel2, customProperties={}, snapshotId=null},
device_fc07dfb4-c33c-4302-915f-fcafcbaf201f=VmDevice
{vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560,
deviceId=fc07dfb4-c33c-4302-915f-fcafcbaf201f, device=ide, type=CONTROLLER,
bootOrder=0, specParams={}, address={slot=0x01, bus=0x00, domain=0x0000,
type=pci, function=0x1}, managed=false, plugged=true, readOnly=false,
deviceAlias=ide0, customProperties={}, snapshotId=null},
device_fc07dfb4-c33c-4302-915f-fcafcbaf201fdevice_814694f2-e3d9-4808-87d1-1576acbb6b39device_8e5365f1-1027-498e-9453-0289278434d4=VmDevice
{vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560,
deviceId=8e5365f1-1027-498e-9453-0289278434d4, device=unix, type=CHANNEL,
bootOrder=0, specParams={}, address={bus=0, controller=0,
type=virtio-serial, port=2}, managed=false, plugged=true, readOnly=false,
deviceAlias=channel1, customProperties={}, snapshotId=null},
device_fc07dfb4-c33c-4302-915f-fcafcbaf201fdevice_814694f2-e3d9-4808-87d1-1576acbb6b39=VmDevice
{vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560,
deviceId=814694f2-e3d9-4808-87d1-1576acbb6b39, device=unix, type=CHANNEL,
bootOrder=0, specParams={}, address={bus=0, controller=0,
type=virtio-serial, port=1}, managed=false, plugged=true, readOnly=false,
deviceAlias=channel0, customProperties={},
snapshotId=null}},spiceSslCipherSuite=DEFAULT,memSize=2048,smp=2,emulatedMachine=rhel6.5.0,vmType=kvm,memGuaranteedSize=1365,display=qxl,smartcardEnable=false,bootMenuEnable=false,numaTune={mode=preferred,
nodeset=0,1},spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,smpCoresPerSocket=1,maxVCpus=16,devices=[{address={bus=0x00,
domain=0x0000, type=pci, slot=0x02, function=0x0}, specParams={ram=65536,
vram=32768, heads=1}, device=qxl, type=video,
deviceId=544e064e-f2df-40c8-89e3-9311cc82af45}, {shared=false, iface=ide,
index=2, address={unit=0, bus=1, target=0, controller=0, type=drive},
specParams={path=}, path=, device=cdrom, type=disk, readonly=true,
deviceId=00fd82da-3da6-45c1-b18b-54e3ccb52c9b}, {shared=false, index=0,
volumeID=72821a56-1806-4b8f-bd73-7b5ef4f3c12f, propagateErrors=off,
format=cow, type=disk, iface=scsi, bootOrder=1, address={unit=0, bus=0,
target=0, controller=0, type=drive},
domainID=346732c8-e1e9-49de-9ca4-63f4477ef6dd,
imageID=f9c73080-8653-44bd-9bd9-acc7a9925971, specParams={},
optional=false, device=disk, poolID=4512e567-f94e-476a-a050-6cd0a15e260a,
readonly=false, deviceId=f9c73080-8653-44bd-9bd9-acc7a9925971},
{nicModel=pv, address={bus=0x00, domain=0x0000, type=pci, slot=0x03,
function=0x0}, specParams={outbound={}, inbound={}},
macAddr=00:01:a4:a3:42:01, device=bridge, linkActive=true, type=interface,
filter=vdsm-no-mac-spoofing, network=ovirtmgmt,
deviceId=2d8e61a1-a759-4115-9c38-081d32bb9d3f}, {specParams={},
device=console, type=console,
deviceId=cf5798e1-f7f5-4c7e-b3e3-9903ad6b3b0a}, {address={bus=0x00,
domain=0x0000, type=pci, slot=0x06, function=0x0},
specParams={model=virtio}, device=memballoon, type=balloon,
deviceId=f9007dbf-da2b-41ce-b589-272af18ef860}, {index=0,
model=virtio-scsi, address={bus=0x00, domain=0x0000, type=pci, slot=0x04,
function=0x0}, specParams={}, device=scsi, type=controller,
deviceId=49fe629a-fa54-4eba-9e95-14768b4bec4c}, {address={bus=0x00,
domain=0x0000, type=pci, slot=0x05, function=0x0}, specParams={},
device=virtio-serial, type=controller,
deviceId=3bf8fbed-9a33-4e25-b07e-356a54cacc86}],vmName=c65new,cpuType=Nehalem,fileTransferEnable=true
2014-09-25 01:58:18,093 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand]
(org.ovirt.thread.pool-8-thread-16) [21714fa2] FINISH, CreateVDSCommand,
log id: 8f7a8dd
2014-09-25 01:58:18,099 INFO
[org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand]
(org.ovirt.thread.pool-8-thread-16) [21714fa2] FINISH, CreateVmVDSCommand,
return: WaitForLaunch, log id: 78789fc9
2014-09-25 01:58:18,100 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(org.ovirt.thread.pool-8-thread-16) [21714fa2] Lock freed to object
EngineLock [exclusiveLocks= key: 0ce8ebc0-8464-4e9a-b382-1836234b3560
value: VM
, sharedLocks= ]
2014-09-25 01:58:18,112 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-16) [21714fa2] Correlation ID: 21714fa2,
Job ID: 8fcd2749-9b69-4cfb-8b13-f3a7fb752895, Call Stack: null, Custom
Event ID: -1, Message: VM c65new was started by admin (Host: ovnode04).
2014-09-25 01:58:19,756 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-1) START, DestroyVDSCommand(HostName =
ovnode04, HostId = 36fec87b-c21f-4157-ab2f-434b67c05cb9,
vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560, force=false, secondsToWait=0,
gracefully=false, reason=), log id: 5be4f2f8
2014-09-25 01:58:20,060 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(DefaultQuartzScheduler_Worker-1) FINISH, DestroyVDSCommand, log id:
5be4f2f8
2014-09-25 01:58:20,071 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-1) Correlation ID: null, Call Stack: null,
Custom Event ID: -1, Message: VM c65new is down with error. Exit message:
internal error internal error NUMA memory tuning in 'preferred' mode only
supports single node.
2014-09-25 01:58:20,072 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-1) Running on vds during rerun failed vm:
null
2014-09-25 01:58:20,072 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-1) VM c65new
(0ce8ebc0-8464-4e9a-b382-1836234b3560) is running in db and not running in
VDS ovnode04
2014-09-25 01:58:20,097 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-1) Rerun vm
0ce8ebc0-8464-4e9a-b382-1836234b3560. Called from vds ovnode04
2014-09-25 01:58:20,107 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-17) Correlation ID: 21714fa2, Job ID:
8fcd2749-9b69-4cfb-8b13-f3a7fb752895, Call Stack: null, Custom Event ID:
-1, Message: Failed to run VM c65new on Host ovnode04.
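Note the numaTune={mode=preferred, nodeset=0,1} in the CreateVDSCommand
parameters above: both host nodes are passed together with mode=preferred,
which looks like exactly what libvirt is refusing. To double check on the
node, I believe vdsm writes the generated domain XML to
/var/log/vdsm/vdsm.log, so something like this (path and options from memory,
adjust as needed) should show the numatune element libvirt actually received:

   grep -B 2 -A 4 numatune /var/log/vdsm/vdsm.log | tail -20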