On Thu, Sep 25, 2014 at 1:58 AM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
> Hello,
> after upgrading the engine and the hypervisor to rc3, I get this message when trying to start a VM:
>
> VM vm_name is down with error. Exit message: internal error internal error NUMA memory tuning in 'preferred' mode only supports single node.
>
> The hypervisor is an Intel blade MFS5520VI; /proc/cpuinfo shows (last CPU only):
>
> processor       : 15
> vendor_id       : GenuineIntel
> cpu family      : 6
> model           : 26
> model name      : Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
> stepping        : 5
> cpu MHz         : 2927.000
> cache size      : 8192 KB
> physical id     : 1
> siblings        : 8
> core id         : 3
> cpu cores       : 4
> apicid          : 23
> initial apicid  : 23
> fpu             : yes
> fpu_exception   : yes
> cpuid level     : 11
> wp              : yes
> flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida dts tpr_shadow vnmi flexpriority ept vpid
> bogomips        : 5851.73
> clflush size    : 64
> cache_alignment : 64
> address sizes   : 40 bits physical, 48 bits virtual
> power management:
>
> Did anything change from rc2 to rc3 regarding this?
>
> Gianluca
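
If I read the error correctly, it is libvirt refusing the <numatune> element of the domain XML, rather than anything wrong with the host's NUMA setup: as far as I know, mode='preferred' accepts exactly one node in its nodeset, while 'strict' and 'interleave' accept a range. A sketch of the distinction, reconstructed by hand rather than dumped from the host (the VM never gets far enough to dump its XML):

<!-- rejected: 'preferred' with more than one node -->
<numatune>
  <memory mode='preferred' nodeset='0-1'/>
</numatune>

<!-- accepted: 'preferred' with a single node -->
<numatune>
  <memory mode='preferred' nodeset='0'/>
</numatune>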
In case it helps: both engine and node are CentOS 6.5, and on the node I get this output from numactl:

[root@ovnode04 vdsm]# numactl --show
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
cpubind: 0 1
nodebind: 0 1
membind: 0 1

[root@ovnode04 vdsm]# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14
node 0 size: 8128 MB
node 0 free: 7567 MB
node 1 cpus: 1 3 5 7 9 11 13 15
node 1 size: 8192 MB
node 1 free: 7747 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

In engine.log (note the numaTune={mode=preferred, nodeset=0,1} in the VM creation parameters):

2014-09-25 01:58:18,050 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-16) [21714fa2] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand kvmEnable=true,keyboardLayout=en-us,nice=0,pitReinjection=false,displayNetwork=ovirtmgmt,copyPasteEnable=true,timeOffset=7200,transparentHugePages=true,vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560,acpiEnable=true,custom={device_fc07dfb4-c33c-4302-915f-fcafcbaf201fdevice_814694f2-e3d9-4808-87d1-1576acbb6b39device_8e5365f1-1027-498e-9453-0289278434d4device_ef937c2a-c7ca-4405-98d8-72387b983ed7=VmDevice {vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560, deviceId=ef937c2a-c7ca-4405-98d8-72387b983ed7, device=spicevmc, type=CHANNEL, bootOrder=0, specParams={}, address={bus=0, controller=0, type=virtio-serial, port=3}, managed=false, plugged=true, readOnly=false, deviceAlias=channel2, customProperties={}, snapshotId=null}, device_fc07dfb4-c33c-4302-915f-fcafcbaf201f=VmDevice {vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560, deviceId=fc07dfb4-c33c-4302-915f-fcafcbaf201f, device=ide, type=CONTROLLER, bootOrder=0, specParams={}, address={slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAlias=ide0, customProperties={}, snapshotId=null}, device_fc07dfb4-c33c-4302-915f-fcafcbaf201fdevice_814694f2-e3d9-4808-87d1-1576acbb6b39device_8e5365f1-1027-498e-9453-0289278434d4=VmDevice {vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560, deviceId=8e5365f1-1027-498e-9453-0289278434d4, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={bus=0, controller=0, type=virtio-serial, port=2}, managed=false, plugged=true, readOnly=false, deviceAlias=channel1, customProperties={}, snapshotId=null}, device_fc07dfb4-c33c-4302-915f-fcafcbaf201fdevice_814694f2-e3d9-4808-87d1-1576acbb6b39=VmDevice {vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560, deviceId=814694f2-e3d9-4808-87d1-1576acbb6b39, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={bus=0, controller=0, type=virtio-serial, port=1}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0, customProperties={}, snapshotId=null}},spiceSslCipherSuite=DEFAULT,memSize=2048,smp=2,emulatedMachine=rhel6.5.0,vmType=kvm,memGuaranteedSize=1365,display=qxl,smartcardEnable=false,bootMenuEnable=false,numaTune={mode=preferred, nodeset=0,1},spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,smpCoresPerSocket=1,maxVCpus=16,devices=[{address={bus=0x00, domain=0x0000, type=pci, slot=0x02, function=0x0}, specParams={ram=65536, vram=32768, heads=1}, device=qxl, type=video, deviceId=544e064e-f2df-40c8-89e3-9311cc82af45}, {shared=false, iface=ide, index=2, address={unit=0, bus=1, target=0, controller=0, type=drive}, specParams={path=}, path=, device=cdrom, type=disk, readonly=true, deviceId=00fd82da-3da6-45c1-b18b-54e3ccb52c9b}, {shared=false, index=0, volumeID=72821a56-1806-4b8f-bd73-7b5ef4f3c12f, propagateErrors=off, format=cow, type=disk, iface=scsi, bootOrder=1, address={unit=0, bus=0, target=0, controller=0, type=drive}, domainID=346732c8-e1e9-49de-9ca4-63f4477ef6dd, imageID=f9c73080-8653-44bd-9bd9-acc7a9925971, specParams={}, optional=false, device=disk, poolID=4512e567-f94e-476a-a050-6cd0a15e260a, readonly=false, deviceId=f9c73080-8653-44bd-9bd9-acc7a9925971}, {nicModel=pv, address={bus=0x00, domain=0x0000, type=pci, slot=0x03, function=0x0}, specParams={outbound={}, inbound={}}, macAddr=00:01:a4:a3:42:01, device=bridge, linkActive=true, type=interface, filter=vdsm-no-mac-spoofing, network=ovirtmgmt, deviceId=2d8e61a1-a759-4115-9c38-081d32bb9d3f}, {specParams={}, device=console, type=console, deviceId=cf5798e1-f7f5-4c7e-b3e3-9903ad6b3b0a}, {address={bus=0x00, domain=0x0000, type=pci, slot=0x06, function=0x0}, specParams={model=virtio}, device=memballoon, type=balloon, deviceId=f9007dbf-da2b-41ce-b589-272af18ef860}, {index=0, model=virtio-scsi, address={bus=0x00, domain=0x0000, type=pci, slot=0x04, function=0x0}, specParams={}, device=scsi, type=controller, deviceId=49fe629a-fa54-4eba-9e95-14768b4bec4c}, {address={bus=0x00, domain=0x0000, type=pci, slot=0x05, function=0x0}, specParams={}, device=virtio-serial, type=controller, deviceId=3bf8fbed-9a33-4e25-b07e-356a54cacc86}],vmName=c65new,cpuType=Nehalem,fileTransferEnable=true
2014-09-25 01:58:18,093 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-16) [21714fa2] FINISH, CreateVDSCommand, log id: 8f7a8dd
2014-09-25 01:58:18,099 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-16) [21714fa2] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 78789fc9
2014-09-25 01:58:18,100 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-16) [21714fa2] Lock freed to object EngineLock [exclusiveLocks= key: 0ce8ebc0-8464-4e9a-b382-1836234b3560 value: VM
, sharedLocks= ]
2014-09-25 01:58:18,112 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-16) [21714fa2] Correlation ID: 21714fa2, Job ID: 8fcd2749-9b69-4cfb-8b13-f3a7fb752895, Call Stack: null, Custom Event ID: -1, Message: VM c65new was started by admin (Host: ovnode04).
2014-09-25 01:58:19,756 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-1) START, DestroyVDSCommand(HostName = ovnode04, HostId = 36fec87b-c21f-4157-ab2f-434b67c05cb9, vmId=0ce8ebc0-8464-4e9a-b382-1836234b3560, force=false, secondsToWait=0, gracefully=false, reason=), log id: 5be4f2f8
2014-09-25 01:58:20,060 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler_Worker-1) FINISH, DestroyVDSCommand, log id: 5be4f2f8
2014-09-25 01:58:20,071 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-1) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM c65new is down with error. Exit message: internal error internal error NUMA memory tuning in 'preferred' mode only supports single node.
2014-09-25 01:58:20,072 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-1) Running on vds during rerun failed vm: null
2014-09-25 01:58:20,072 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-1) VM c65new (0ce8ebc0-8464-4e9a-b382-1836234b3560) is running in db and not running in VDS ovnode04
2014-09-25 01:58:20,097 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-1) Rerun vm 0ce8ebc0-8464-4e9a-b382-1836234b3560. Called from vds ovnode04
2014-09-25 01:58:20,107 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-17) Correlation ID: 21714fa2, Job ID: 8fcd2749-9b69-4cfb-8b13-f3a7fb752895, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM c65new on Host ovnode04.
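
Since that numaTune is generated by the engine, the real fix presumably belongs on the engine/vdsm side. In the meantime, to unblock testing, a before_vm_start vdsm hook should be able to rewrite the element before libvirt sees it. A minimal sketch, assuming vdsm's standard hooking API (hooking.read_domxml() / hooking.write_domxml()) and picking 'interleave' as the fallback mode on my own initiative (untested):

#!/usr/bin/python
# before_vm_start vdsm hook (sketch): rewrite a multi-node 'preferred'
# numatune into 'interleave' so libvirt accepts the domain.
# Stopgap only -- the choice of 'interleave' is an assumption.

import hooking

domxml = hooking.read_domxml()

for numatune in domxml.getElementsByTagName('numatune'):
    for memory in numatune.getElementsByTagName('memory'):
        mode = memory.getAttribute('mode')
        nodeset = memory.getAttribute('nodeset')
        # 'preferred' only takes a single node; a nodeset like "0,1"
        # or "0-1" spans two, which libvirt rejects.
        if mode == 'preferred' and (',' in nodeset or '-' in nodeset):
            memory.setAttribute('mode', 'interleave')

hooking.write_domxml(domxml)

Dropped into /usr/libexec/vdsm/hooks/before_vm_start/ and made executable, it would apply to every VM started on the host, so it is meant only as a stopgap until it is clear what changed between rc2 and rc3.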