Re: [ovirt-users] Don't start vm

Hi, my configuration: vdsm host on CentOS 7, oVirt 3.5.
Could you please also share /var/log/vdsm/vdsm.log from the hypervisor?
Here is my /var/log/vdsm/vdsm.log:
We need the full log - please attach it here, or open a bug and attach the full log there.
Thread-283375::DEBUG::2014-12-06 21:20:40,219::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
You are using jsonrpc - please check whether switching to xmlrpc solves your issue.
Thread-283376::DEBUG::2014-12-06 21:20:40,252::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-283376::DEBUG::2014-12-06 21:20:40,253::lvm::454::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-283376::DEBUG::2014-12-06 21:20:40,254::lvm::454::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-283376::WARNING::2014-12-06 21:20:40,254::lvm::600::Storage.LVM::(getLv) lv: fb8466c9-0867-4e73-8362-2c95eea89a83 not found in lvs vg: 9d53ecef-8bfc-470b-8867-836bfa7df137
Thread-283376::ERROR::2014-12-06 21:20:40,254::task::866::Storage.TaskManager.Task::(_setError) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 873, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3099, in getVolumeSize
    apparentsize = str(dom.getVSize(imgUUID, volUUID))
  File "/usr/share/vdsm/storage/blockSD.py", line 622, in getVSize
    size = lvm.getLV(self.sdUUID, volUUID).size
  File "/usr/share/vdsm/storage/lvm.py", line 893, in getLV
    raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
LogicalVolumeDoesNotExistError: Logical volume does not exist: (u'9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83',)
Thread-283376::DEBUG::2014-12-06 21:20:40,255::task::885::Storage.TaskManager.Task::(_run) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Task._run: cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd (u'9d53ecef-8bfc-470b-8867-836bfa7df137', u'00000002-0002-0002-0002-00000000010b', u'7deace0a-0c83-41c8-9122-84079ad949c2', u'fb8466c9-0867-4e73-8362-2c95eea89a83') {} failed - stopping task
Thread-283376::DEBUG::2014-12-06 21:20:40,255::task::1217::Storage.TaskManager.Task::(stop) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::stopping in state preparing (force False)
Thread-283376::DEBUG::2014-12-06 21:20:40,255::task::993::Storage.TaskManager.Task::(_decref) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::ref 1 aborting True
Thread-283376::INFO::2014-12-06 21:20:40,255::task::1171::Storage.TaskManager.Task::(prepare) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::aborting: Task is aborted: 'Logical volume does not exist' - code 610
Thread-283376::DEBUG::2014-12-06 21:20:40,255::task::1176::Storage.TaskManager.Task::(prepare) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Prepare: aborted: Logical volume does not exist
Thread-283376::DEBUG::2014-12-06 21:20:40,256::task::993::Storage.TaskManager.Task::(_decref) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::ref 0 aborting True
Thread-283376::DEBUG::2014-12-06 21:20:40,256::task::928::Storage.TaskManager.Task::(_doAbort) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Task._doAbort: force False
Thread-283376::DEBUG::2014-12-06 21:20:40,256::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-283376::DEBUG::2014-12-06 21:20:40,256::task::595::Storage.TaskManager.Task::(_updateState) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::moving from state preparing -> state aborting
Thread-283376::DEBUG::2014-12-06 21:20:40,256::task::550::Storage.TaskManager.Task::(__state_aborting) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::_aborting: recover policy none
Thread-283376::DEBUG::2014-12-06 21:20:40,256::task::595::Storage.TaskManager.Task::(_updateState) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::moving from state aborting -> state failed
Thread-283376::DEBUG::2014-12-06 21:20:40,257::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-283376::DEBUG::2014-12-06 21:20:40,257::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-283376::ERROR::2014-12-06 21:20:40,257::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Logical volume does not exist: (u'9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83',)", 'code': 610}}

# My comment: Actually, this volume is present! I mounted it on the vdsm host, but only via mount /dev/block/253:20; it does not appear under /dev/9d53ecef-8bfc-470b-8867-836bfa7df137/.

Hi, I have attached the file; the log is from vdsm.log.62.xz. The supposedly nonexistent disk probably appeared after removal of the template from which it was created. BUT the disk was independent, and there were no problems before the template was removed! The disk exists, but its ID has changed!

Nir Soffer <nsoffer@redhat.com> wrote on 09.12.2014 15:07:51:
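For readers following the traceback: the failure mode is a simple cache lookup that no longer finds the (VG, LV) pair. The sketch below is a hypothetical simplification, not the actual vdsm code, illustrating why a logical volume that was removed or renamed (e.g. after template deletion) surfaces as "Logical volume does not exist" with code 610.

```python
# Hypothetical, simplified sketch of vdsm's getLV() lookup pattern.
# lv_cache maps (vg_name, lv_name) pairs to LV records; when the pair
# is gone from the cache (and from lvs output), the lookup raises.

class LogicalVolumeDoesNotExistError(Exception):
    code = 610  # the error code seen in the log above


def get_lv(lv_cache, vg_name, lv_name):
    """Return the cached LV record, or raise like the traceback above."""
    try:
        return lv_cache[(vg_name, lv_name)]
    except KeyError:
        raise LogicalVolumeDoesNotExistError("%s/%s" % (vg_name, lv_name))
```

If the LV's ID changed, the engine still asks for the old UUID, the pair is absent, and every getVolumeSize call fails the same way.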
Please share with us the output of:
lsblk
multipath -ll
pvscan --cache
pvs
vgs
lvs

while the host is up.
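To check on the host whether the LV the engine is asking for still exists, one can compare the requested UUIDs against the `lvs` output collected above. A minimal sketch, using made-up sample output (the real listing would come from `lvs --noheadings -o vg_name,lv_name` on the host):

```python
# Check whether a (VG, LV) pair appears in `lvs --noheadings -o vg_name,lv_name`
# output. SAMPLE_LVS_OUTPUT is invented for illustration; note the LV from the
# error is deliberately absent, matching the log.

SAMPLE_LVS_OUTPUT = """\
  9d53ecef-8bfc-470b-8867-836bfa7df137 metadata
  9d53ecef-8bfc-470b-8867-836bfa7df137 ids
  9d53ecef-8bfc-470b-8867-836bfa7df137 7deace0a-0c83-41c8-9122-84079ad949c2
"""


def lv_exists(lvs_output, vg_uuid, lv_uuid):
    """Return True if the (vg, lv) pair appears in the lvs listing."""
    pairs = {tuple(line.split()) for line in lvs_output.splitlines() if line.strip()}
    return (vg_uuid, lv_uuid) in pairs


print(lv_exists(SAMPLE_LVS_OUTPUT,
                "9d53ecef-8bfc-470b-8867-836bfa7df137",
                "fb8466c9-0867-4e73-8362-2c95eea89a83"))  # -> False
```

If this returns False on the real output while the data is reachable via /dev/block/253:20, that supports the theory that the LV still exists under a different ID.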
Thanks, Nir
participants (1)
- Roman Nikolayevich Drovalev