[ovirt-users] Don't start vm

Roman Nikolayevich Drovalev drovalev at kaluga-gov.ru
Sat Dec 6 18:28:08 UTC 2014


Hi,
My config:  vdsm host - CentOS 7, oVirt 3.5

> Could you please share from hypervisor the /var/log/vdsm/vdsm.log too?

Here is my /var/log/vdsm/vdsm.log:

Thread-283375::DEBUG::2014-12-06 
21:20:40,219::stompReactor::163::yajsonrpc.StompServer::(send) Sending 
response
Thread-283376::DEBUG::2014-12-06 
21:20:40,252::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' 
WARNING: lvmetad is running but disabled. Restart lvmetad before enabling 
it!\n'; <rc> = 0
Thread-283376::DEBUG::2014-12-06 
21:20:40,253::lvm::454::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-283376::DEBUG::2014-12-06 
21:20:40,254::lvm::454::Storage.OperationMutex::(_reloadlvs) Operation 
'lvm reload operation' released the operation mutex
Thread-283376::WARNING::2014-12-06 
21:20:40,254::lvm::600::Storage.LVM::(getLv) lv: 
fb8466c9-0867-4e73-8362-2c95eea89a83 not found in lvs vg: 
9d53ecef-8bfc-470b-8867-836bfa7df137 response
Thread-283376::ERROR::2014-12-06 
21:20:40,254::task::866::Storage.TaskManager.Task::(_setError) 
Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3099, in getVolumeSize
    apparentsize = str(dom.getVSize(imgUUID, volUUID))
  File "/usr/share/vdsm/storage/blockSD.py", line 622, in getVSize
    size = lvm.getLV(self.sdUUID, volUUID).size
  File "/usr/share/vdsm/storage/lvm.py", line 893, in getLV
    raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
LogicalVolumeDoesNotExistError: Logical volume does not exist: 
(u'9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83',)
Thread-283376::DEBUG::2014-12-06 
21:20:40,255::task::885::Storage.TaskManager.Task::(_run) 
Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Task._run: 
cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd 
(u'9d53ecef-8bfc-470b-8867-836bfa7df137', 
u'00000002-0002-0002-0002-00000000010b', 
u'7deace0a-0c83-41c8-9122-84079ad949c2', 
u'fb8466c9-0867-4e73-8362-2c95eea89a83') {} failed - stopping task
Thread-283376::DEBUG::2014-12-06 
21:20:40,255::task::1217::Storage.TaskManager.Task::(stop) 
Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::stopping in state preparing 
(force False)
Thread-283376::DEBUG::2014-12-06 
21:20:40,255::task::993::Storage.TaskManager.Task::(_decref) 
Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::ref 1 aborting True
Thread-283376::INFO::2014-12-06 
21:20:40,255::task::1171::Storage.TaskManager.Task::(prepare) 
Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::aborting: Task is aborted: 
'Logical volume does not exist' - code 610
Thread-283376::DEBUG::2014-12-06 
21:20:40,255::task::1176::Storage.TaskManager.Task::(prepare) 
Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Prepare: aborted: Logical 
volume does not exist
Thread-283376::DEBUG::2014-12-06 
21:20:40,256::task::993::Storage.TaskManager.Task::(_decref) 
Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::ref 0 aborting True
Thread-283376::DEBUG::2014-12-06 
21:20:40,256::task::928::Storage.TaskManager.Task::(_doAbort) 
Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Task._doAbort: force False
Thread-283376::DEBUG::2014-12-06 
21:20:40,256::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-283376::DEBUG::2014-12-06 
21:20:40,256::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::moving from state preparing 
-> state aborting
Thread-283376::DEBUG::2014-12-06 
21:20:40,256::task::550::Storage.TaskManager.Task::(__state_aborting) 
Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::_aborting: recover policy 
none
Thread-283376::DEBUG::2014-12-06 
21:20:40,256::task::595::Storage.TaskManager.Task::(_updateState) 
Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::moving from state aborting -> 
state failed
Thread-283376::DEBUG::2014-12-06 
21:20:40,257::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-283376::DEBUG::2014-12-06 
21:20:40,257::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-283376::ERROR::2014-12-06 
21:20:40,257::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': 
{'message': "Logical volume does not exist: 
(u'9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83',)", 
'code': 610}}
# My comment: "Really, this volume is present! I can mount it on the vdsm
host, but only as /dev/block/253:20; it is not present in
/dev/9d53ecef-8bfc-470b-8867-836bfa7df137/"
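A minimal sketch (my own check, not vdsm code) of how one could ask LVM on
the host whether that LV really exists in the storage-domain VG and whether
it is active; the VG/LV names are simply the UUIDs from the log above, and
lvs/lvchange are plain LVM2 commands, not vdsm's exact invocation:

#!/usr/bin/env python
# Hypothetical helper (not part of vdsm): ask LVM directly whether the LV
# that the log complains about exists, and whether it is active.
import subprocess

VG = "9d53ecef-8bfc-470b-8867-836bfa7df137"   # storage domain UUID from the log
LV = "fb8466c9-0867-4e73-8362-2c95eea89a83"   # volume UUID from the log

cmd = ["lvs", "--noheadings", "-o", "lv_name,lv_attr,lv_size",
       "%s/%s" % (VG, LV)]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        universal_newlines=True)
out, err = proc.communicate()

if proc.returncode != 0:
    # This is the same situation vdsm reports as LogicalVolumeDoesNotExistError.
    print("LVM does not report %s/%s: %s" % (VG, LV, err.strip()))
else:
    name, attr, size = out.split()
    print("LV found: name=%s attr=%s size=%s" % (name, attr, size))
    # The 5th character of lv_attr is the activation state; 'a' means active.
    if attr[4] != "a":
        print("LV exists but is inactive; 'lvchange -ay %s/%s' would"
              " activate it and create /dev/%s/%s" % (VG, LV, VG, LV))

If lvs does report the LV but it is inactive, activating it is what normally
creates the /dev/<VG>/<LV> link that my comment above says is missing.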

Thread-283376::DEBUG::2014-12-06 
21:20:40,257::vm::2289::vm.Vm::(_startUnderlyingVm) 
vmId=`d1ccb04d-bda8-42a2-bab6-7def2f8b2a00`::_ongoingCreations released
Thread-283376::ERROR::2014-12-06 
21:20:40,257::vm::2326::vm.Vm::(_startUnderlyingVm) 
vmId=`d1ccb04d-bda8-42a2-bab6-7def2f8b2a00`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2266, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3301, in _run
    devices = self.buildConfDevices()
  File "/usr/share/vdsm/virt/vm.py", line 2063, in buildConfDevices
    self._normalizeVdsmImg(drv)
  File "/usr/share/vdsm/virt/vm.py", line 1986, in _normalizeVdsmImg
    drv['volumeID'])
StorageUnavailableError: ('Failed to get size for volume %s', 
u'fb8466c9-0867-4e73-8362-2c95eea89a83')
Thread-283376::DEBUG::2014-12-06 
21:20:40,260::vm::2838::vm.Vm::(setDownStatus) 
vmId=`d1ccb04d-bda8-42a2-bab6-7def2f8b2a00`::Changed state to Down: 
('Failed to get size for volume %s', 
u'fb8466c9-0867-4e73-8362-2c95eea89a83') (code=1)
JsonRpc (StompReactor)::DEBUG::2014-12-06 
21:20:41,089::stompReactor::98::Broker.StompAdapter::(handle_frame) 
Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-12-06 
21:20:41,091::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) 
Waiting for request
Thread-283378::DEBUG::2014-12-06 
21:20:41,097::stompReactor::163::yajsonrpc.StompServer::(send) Sending 
response
JsonRpc (StompReactor)::DEBUG::2014-12-06 
21:20:41,101::stompReactor::98::Broker.StompAdapter::(handle_frame) 
Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-12-06 
21:20:41,103::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) 
Waiting for request
Thread-283379::DEBUG::2014-12-06 
21:20:41,108::vm::486::vm.Vm::(_getUserCpuTuneInfo) 
vmId=`c66e3966-a190-4cb1-8677-3d49d29cedc9`::Domain Metadata is not set
Thread-283379::DEBUG::2014-12-06 
21:20:41,110::stompReactor::163::yajsonrpc.StompServer::(send) Sending 
response



Douglas Schilling Landgraf <dougsland at redhat.com> wrote on 06.12.2014 
03:02:33:

> From: Douglas Schilling Landgraf <dougsland at redhat.com>
> To: users at ovirt.org, 
> Cc: drovalev at kaluga-gov.ru, Dan Kenigsberg <danken at redhat.com>
> Date: 05.12.2014 23:58
> Subject: Re: [ovirt-users] Don't start vm
> 
> On 12/05/2014 02:55 PM, Roman Nikolayevich Drovalev wrote:
> > Hi,
> > Please help.
> >
> > I shut down my virtual machine normally, but now it will not start!
> >
> > in the logs
> >
> > 2014-12-05 09:38:06,437 ERROR
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (DefaultQuartzScheduler_Worker-87) Correlation ID: null, Call Stack:
> > null, Custom Event ID: -1, Message: VM Cent is down with error. Exit
> > message: ('Failed to get size for volume %s',
> > u'fb8466c9-0867-4e73-8362-2c95eea89a83').
> > 2014-12-05 09:38:06,439 INFO
> >   [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-87) Running on vds during rerun failed
> > vm: null
> > 2014-12-05 09:38:06,447 INFO
> >   [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-87) VM Cent
> > (d1ccb04d-bda8-42a2-bab6-7def2f8b2a00) is running in db and not running
> > in VDS x3550m2down
> > 2014-12-05 09:38:06,475 ERROR
> > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
> > (DefaultQuartzScheduler_Worker-87) Rerun vm
> > d1ccb04d-bda8-42a2-bab6-7def2f8b2a00. Called from vds x3550m2down
> > 2014-12-05 09:38:06,482 WARN
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-8-thread-16) Correlation ID: 2f3d1469, Job ID:
> > 86d62fc3-f2d3-48f1-a5b3-d2abd0f84d6c, Call Stack: null, Custom Event ID:
> > -1, Message: Failed to run VM Cent on Host x3550m2down
> > 2014-12-05 09:38:06,486 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
> > (org.ovirt.thread.pool-8-thread-16) Lock Acquired to object EngineLock
> > [exclusiveLocks= key: d1ccb04d-bda8-42a2-bab6-7def2f8b2a00 value: VM
> > , sharedLocks= ]
> > 2014-12-05 09:38:06,504 INFO
> >   [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
> > (org.ovirt.thread.pool-8-thread-16) START,
> > IsVmDuringInitiatingVDSCommand( vmId =
> > d1ccb04d-bda8-42a2-bab6-7def2f8b2a00), log id: 2e257f81
> > 2014-12-05 09:38:06,505 INFO
> >   [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
> > (org.ovirt.thread.pool-8-thread-16) FINISH,
> > IsVmDuringInitiatingVDSCommand, return: false, log id: 2e257f81
> > 2014-12-05 09:38:06,509 WARN  [org.ovirt.engine.core.bll.RunVmCommand]
> > (org.ovirt.thread.pool-8-thread-16) CanDoAction of action RunVm failed.
> > Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
> >
> > 2014-12-05 09:38:06,510 INFO  [org.ovirt.engine.core.bll.RunVmCommand]
> > (org.ovirt.thread.pool-8-thread-16) Lock freed to object EngineLock
> > [exclusiveLocks= key: d1ccb04d-bda8-42a2-bab6-7def2f8b2a00 value: VM
> > , sharedLocks= ]
> > 2014-12-05 09:38:06,539 ERROR
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-8-thread-16) Correlation ID: 2f3d1469, Job ID:
> > 86d62fc3-f2d3-48f1-a5b3-d2abd0f84d6c, Call Stack: null, Custom Event ID:
> > -1, Message: Failed to run VM Cent (User: admin).
> > 2014-12-05 09:38:06,548 INFO
> >   [org.ovirt.engine.core.bll.ProcessDownVmCommand]
> > (org.ovirt.thread.pool-8-thread-27) [58fe3e35] Running command:
> > ProcessDownVmCommand internal: true.
> >
> > What should I do?
> >
> 
> Hi Roman,
> 
> 
> Could you please share from hypervisor the /var/log/vdsm/vdsm.log too?
> 
> 
> -- 
> Cheers
> Douglas
