Nir Soffer <nsoffer@redhat.com> wrote on 11.12.2014 10:02:02:

> > Hi,
> >
> > I attach the file. Below is the log from vdsm.log.62.xz.
> >
> > The "nonexistent" disk probably appeared after removal of the template
> > from which it was created. BUT the vm was independent, and before the
> > template removal there were no problems!
> > The disk exists, but its ID has changed!
>
> I don't understand this description.
>
> Can you describe the steps to reproduce this issue?
>
> Guessing from your description:
> 1. Create vm with x disks
> 2. Create template
> 3. Create vm from template
> 4. Remove template
> ?

Yes.
1. Create a vm with x disks on the DS 3524 over FC (multipathd on the vdsm host)
2. Create a template
3. Create an independent vm from the template
4. Start the vm and run a job in it
5. Remove the template
6. Stop the vm
7. The vm no longer starts; it fails with an error
8. Look for its disk with lsblk
9. Run many commands against block device 253:20 (kpartx -l /dev/..., kpartx -a /dev/..., lvm pvscan, lvm vgchange -a y, ...); a sketch of this recovery sequence follows below
10. Mount the LVM volume found inside the LVM volume and save the data
11. Reboot all the vdsm hosts
12. The disk's ID can no longer be found! Its ID has changed!
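
Roughly, steps 8-10 correspond to a command sequence like the sketch below. This is only a sketch: the dm device name (/dev/dm-20), the guest VG/LV names (GUEST_VG, GUEST_LV), and the mount and backup paths are assumptions, not values taken from the logs.

    lsblk                                      # step 8: locate the disk (dm device 253:20)
    kpartx -l /dev/dm-20                       # step 9: list partition mappings (device name assumed)
    kpartx -a /dev/dm-20                       #         create the partition mappings
    lvm pvscan                                 #         rescan so the PV inside the mapping is seen
    lvm vgchange -a y GUEST_VG                 #         activate the VG found inside the LV (name assumed)
    mount /dev/GUEST_VG/GUEST_LV /mnt/rescue   # step 10: mount the inner LV (paths assumed)
    cp -a /mnt/rescue/. /backup/               #          copy the data off (destination assumed)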

>
> >
> > Nir Soffer <nsoffer@redhat.com> wrote on 09.12.2014 15:07:51:
> >
> > > >
> > > > Hi,
> > > > My config: vdsm host - CentOS 7, oVirt 3.5
> > > >
> > > > > Could you please share from the hypervisor the /var/log/vdsm/vdsm.log too?
> > > >
> > > > my /var/log/vdsm/vdsm.log
> > >
> > > We need the full log - please attach it here or open a bug and
> > > attach the full log.
> > >
> > > >
> > > > Thread-283375::DEBUG::2014-12-06
> > > > 21:20:40,219::stompReactor::163::yajsonrpc.StompServer::(send) Sending
> > > > response
> > >
> > > You are using jsonrpc - please check if switching to xmlrpc solves
> > > your issue.
> > >
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,252::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,253::lvm::454::Storage.LVM::(_reloadlvs) lvs reloaded
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,254::lvm::454::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
> > > > Thread-283376::WARNING::2014-12-06 21:20:40,254::lvm::600::Storage.LVM::(getLv) lv: fb8466c9-0867-4e73-8362-2c95eea89a83 not found in lvs vg: 9d53ecef-8bfc-470b-8867-836bfa7df137 response
> > > > Thread-283376::ERROR::2014-12-06 21:20:40,254::task::866::Storage.TaskManager.Task::(_setError) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Unexpected error
> > > > Traceback (most recent call last):
> > > >   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> > > >     return fn(*args, **kargs)
> > > >   File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
> > > >     res = f(*args, **kwargs)
> > > >   File "/usr/share/vdsm/storage/hsm.py", line 3099, in getVolumeSize
> > > >     apparentsize = str(dom.getVSize(imgUUID, volUUID))
> > > >   File "/usr/share/vdsm/storage/blockSD.py", line 622, in getVSize
> > > >     size = lvm.getLV(self.sdUUID, volUUID).size
> > > >   File "/usr/share/vdsm/storage/lvm.py", line 893, in getLV
> > > >     raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
> > > > LogicalVolumeDoesNotExistError: Logical volume does not exist: (u'9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83',)
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,255::task::885::Storage.TaskManager.Task::(_run) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Task._run: cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd (u'9d53ecef-8bfc-470b-8867-836bfa7df137', u'00000002-0002-0002-0002-00000000010b', u'7deace0a-0c83-41c8-9122-84079ad949c2', u'fb8466c9-0867-4e73-8362-2c95eea89a83') {} failed - stopping task
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,255::task::1217::Storage.TaskManager.Task::(stop) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::stopping in state preparing (force False)
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,255::task::993::Storage.TaskManager.Task::(_decref) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::ref 1 aborting True
> > > > Thread-283376::INFO::2014-12-06 21:20:40,255::task::1171::Storage.TaskManager.Task::(prepare) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::aborting: Task is aborted: 'Logical volume does not exist' - code 610
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,255::task::1176::Storage.TaskManager.Task::(prepare) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Prepare: aborted: Logical volume does not exist
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,256::task::993::Storage.TaskManager.Task::(_decref) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::ref 0 aborting True
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,256::task::928::Storage.TaskManager.Task::(_doAbort) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Task._doAbort: force False
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,256::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,256::task::595::Storage.TaskManager.Task::(_updateState) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::moving from state preparing -> state aborting
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,256::task::550::Storage.TaskManager.Task::(__state_aborting) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::_aborting: recover policy none
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,256::task::595::Storage.TaskManager.Task::(_updateState) Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::moving from state aborting -> state failed
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,257::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> > > > Thread-283376::DEBUG::2014-12-06 21:20:40,257::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> > > > Thread-283376::ERROR::2014-12-06 21:20:40,257::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Logical volume does not exist: (u'9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83',)", 'code': 610}}
> > > > # My comment: "Really, this volume is present! I mounted it on the
> > > > vdsm host! But it mounts as /dev/block/253:20; it is not present in
> > > > /dev/9d53ecef-8bfc-470b-8867-836bfa7df137/"
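
For reference, a device-mapper number like 253:20 can be mapped back to its LV, and the allegedly missing LV checked directly, with standard LVM tools. A minimal sketch, using the major:minor and UUIDs from the log above:

    dmsetup info -c | awk '$2 == 253 && $3 == 20'            # which dm name owns major:minor 253:20
    lvs -o vg_name,lv_name,lv_kernel_major,lv_kernel_minor   # list LVs with their kernel device numbers
    lvs 9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83   # does the LV vdsm wants exist?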
> > >
> > > Please share with us the output of:
> > >
> > > lsblk
> > > multipath -ll
> > > pvscan --cache
> > > pvs
> > > vgs
> > > lvs
> > >
> > > when a host is up.
> > >
> > > Thanks,
> > > Nir
> >
> >
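
For anyone hitting the same issue, the outputs Nir asks for can be collected in one pass with something like this sketch (the report file name is an assumption):

    for cmd in "lsblk" "multipath -ll" "pvscan --cache" "pvs" "vgs" "lvs"; do
        echo "== $cmd =="            # label each section of the report
        $cmd                         # run the command (word splitting supplies the args)
    done > storage-state.txt 2>&1    # capture stdout and stderr (file name assumed)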