----- Original Message -----
| From: "Yuriy Demchenko" <demchenko.ya(a)gmail.com>
| To: "Greg Padgett" <gpadgett(a)redhat.com>
| Cc: users(a)ovirt.org
| Sent: Thursday, August 22, 2013 9:55:19 AM
| Subject: Re: [Users] cant remove disks from iscsi domain
|
| I've done some more tests, and it seems the quota error is not related
| to my issue: I tried to remove another disk and this time there were no
| quota errors in engine.log.
| New logs are attached.
|
| What catches my eye in the logs are these errors, but maybe that's not
| the root cause:
| > Thread-60725::DEBUG::2013-08-22 10:37:45,549::lvm::485::OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
| > Thread-60725::WARNING::2013-08-22 10:37:45,549::blockSD::931::Storage.StorageDomain::(rmDCVolLinks) Can't unlink /rhev/data-center/mnt/blockSD/d786e2d5-05ab-4da6-95fc-1af791a3c113/images/5344ca63-302a-43de-9193-da7937fbdfad/dfefc573-de85-4085-8900-da271affe831. [Errno 2] No such file or directory: '/rhev/data-center/mnt/blockSD/d786e2d5-05ab-4da6-95fc-1af791a3c113/images/5344ca63-302a-43de-9193-da7937fbdfad/dfefc573-de85-4085-8900-da271affe831'
| > Thread-60725::WARNING::2013-08-22 10:37:45,549::blockSD::931::Storage.StorageDomain::(rmDCVolLinks) Can't unlink /rhev/data-center/mnt/blockSD/d786e2d5-05ab-4da6-95fc-1af791a3c113/images/5344ca63-302a-43de-9193-da7937fbdfad/c6cd6d1d-b70f-435d-bdc7-713b445a2326. [Errno 2] No such file or directory: '/rhev/data-center/mnt/blockSD/d786e2d5-05ab-4da6-95fc-1af791a3c113/images/5344ca63-302a-43de-9193-da7937fbdfad/c6cd6d1d-b70f-435d-bdc7-713b445a2326'
| > Thread-60725::DEBUG::2013-08-22 10:37:45,549::blockSD::934::Storage.StorageDomain::(rmDCVolLinks) removed: []
| > Thread-60725::ERROR::2013-08-22 10:37:45,549::task::833::TaskManager.Task::(_setError) Task=`83867bdc-48cd-4ba0-b453-6f8abbace13e`::Unexpected error
| > Traceback (most recent call last):
| > File "/usr/share/vdsm/storage/task.py", line 840, in _run
| > return fn(*args, **kargs)
| > File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
| > res = f(*args, **kwargs)
| > File "/usr/share/vdsm/storage/hsm.py", line 1460, in deleteImage
| > dom.deleteImage(sdUUID, imgUUID, volsByImg)
| > File "/usr/share/vdsm/storage/blockSD.py", line 957, in deleteImage
| > self.rmDCImgDir(imgUUID, volsImgs)
| > File "/usr/share/vdsm/storage/blockSD.py", line 943, in rmDCImgDir
| > self.log.warning("Can't rmdir %s. %s", imgPath, exc_info=True)
| > File "/usr/lib64/python2.6/logging/__init__.py", line 1068, in warning
| > self._log(WARNING, msg, args, **kwargs)
| > File "/usr/lib64/python2.6/logging/__init__.py", line 1173, in _log
| > self.handle(record)
| > File "/usr/lib64/python2.6/logging/__init__.py", line 1183, in handle
| > self.callHandlers(record)
| > File "/usr/lib64/python2.6/logging/__init__.py", line 1220, in
| > callHandlers
| > hdlr.handle(record)
| > File "/usr/lib64/python2.6/logging/__init__.py", line 679, in handle
| > self.emit(record)
| > File "/usr/lib64/python2.6/logging/handlers.py", line 780, in emit
| > msg = self.format(record)
| > File "/usr/lib64/python2.6/logging/__init__.py", line 654, in format
| > return fmt.format(record)
| > File "/usr/lib64/python2.6/logging/__init__.py", line 436, in format
| > record.message = record.getMessage()
| > File "/usr/lib64/python2.6/logging/__init__.py", line 306, in
getMessage
| > msg = msg % self.args
| > TypeError: not enough arguments for format string
|
|
| Yuriy Demchenko
Yuriy,
just to clarify the quota part: the stack trace you provided
was resolved as the bug Greg described,
https://bugzilla.redhat.com/show_bug.cgi?id=905891
It is unrelated to the storage issue, as the quota caching
is an independent procedure that runs in parallel.
So what we now need to focus on is this part:
[Errno 2] No such file or directory:
'/rhev/data-center/mnt/blockSD/d786e2d5-05ab-4da6-95fc-1af791a3c113/images/5344ca63-302a-43de-9193-da7937fbdfad/dfefc573-de85-4085-8900-da271affe831'
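It would also help to know whether the LVs backing that image still
exist on the domain. A rough check, assuming the standard block-domain
layout (the VG is named after the storage-domain UUID and the image
UUID shows up in the volume LVs' tags), would be something like:

    import subprocess

    # UUIDs taken from the path in the error above; the VG name / LV tag
    # layout is an assumption about the standard oVirt block domain.
    SD_UUID = "d786e2d5-05ab-4da6-95fc-1af791a3c113"
    IMG_UUID = "5344ca63-302a-43de-9193-da7937fbdfad"

    p = subprocess.Popen(
        ["lvs", "--noheadings", "-o", "lv_name,lv_tags", SD_UUID],
        stdout=subprocess.PIPE)
    out, _ = p.communicate()
    for line in out.decode().splitlines():
        if IMG_UUID in line:
            print(line.strip())

If the LVs are still there and only the links under
/rhev/data-center are missing, that narrows things down considerably.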
What we need to understand is how we got into
this state. Did you have any network issues or relevant crashes?
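As a side note, the TypeError at the bottom of the vdsm traceback is a
separate logging problem rather than the cause: the warning call in
blockSD.rmDCImgDir uses two %s placeholders but passes only imgPath
positionally (exc_info is a keyword argument), so formatting the log
record fails and the real reason the rmdir failed never reaches the
log. A minimal reproduction of that formatting error:

    import logging

    # Same pattern as the call at blockSD.py line 943: two %s
    # placeholders, but only one format argument.
    record = logging.LogRecord(
        "Storage.StorageDomain", logging.WARNING, "blockSD.py", 943,
        "Can't rmdir %s. %s", ("/some/image/dir",), None)

    try:
        record.getMessage()
    except TypeError as e:
        print(e)  # not enough arguments for format string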