Thanks a lot! The patch did the job; a few more disks were deleted successfully.
One last question: how do I remove stale records of disks in the "illegal"
state?
Yuriy Demchenko
On 08/22/2013 01:25 PM, Eduardo Warszawski wrote:
----- Original Message -----
> you said that the disks are deleted although an exception is raised, but
> the engine is reporting the delete as failed.
> I think I remember a bug reported and fixed for that as well, but I
> can't seem to find it.
> Adding Ayal and Eduardo.
>
The log issue was already solved in v4.11.0~380.
commit ad916c79e2b0959dea20dd19a21b99bc702d65ca
Author: Eduardo Warszawski <ewarszaw(a)redhat.com>
Date: Mon Dec 17 14:32:51 2012 +0200
Fix typo in negative flow log in blockSD.rmDCImgDir().
Related to BZ#885489.
Change-Id: I951e582acc86e08d709da4249084015660fc4ea0
Signed-off-by: Eduardo <ewarszaw(a)redhat.com>
Reviewed-on:
http://gerrit.ovirt.org/10153
Reviewed-by: Yeela Kaplan <ykaplan(a)redhat.com>
Reviewed-by: Ayal Baron <abaron(a)redhat.com>
Tested-by: Dan Kenigsberg <danken(a)redhat.com>
diff --git a/vdsm/storage/blockSD.py b/vdsm/storage/blockSD.py
index b5447cd..22a434b 100644
--- a/vdsm/storage/blockSD.py
+++ b/vdsm/storage/blockSD.py
@@ -978,7 +978,7 @@ class BlockStorageDomain(sd.StorageDomain):
         try:
             os.rmdir(imgPath)
         except OSError:
-            self.log.warning("Can't rmdir %s. %s", imgPath, exc_info=True)
+            self.log.warning("Can't rmdir %s", imgPath, exc_info=True)
         else:
             self.log.debug("removed image dir: %s", imgPath)
         return imgPath
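
To make the failure mode concrete: the old message has two %s placeholders,
but exc_info=True is a keyword argument consumed by the logger itself, so only
imgPath is left for formatting and rendering the record fails with the
"not enough arguments for format string" TypeError seen in the traceback
further down in this thread. A minimal standalone sketch (not VDSM code; the
path is made up):

    import logging
    import os

    logging.basicConfig()
    log = logging.getLogger("demo")

    imgPath = "/tmp/nonexistent-image-dir"  # hypothetical path, illustration only

    # Pre-patch message: two %s placeholders, but exc_info=True is consumed
    # by the logger, so only imgPath reaches the format string.
    try:
        "Can't rmdir %s. %s" % (imgPath,)
    except TypeError as err:
        print(err)  # "not enough arguments for format string"

    # Post-patch call, mirroring rmDCImgDir(): one placeholder, one argument;
    # exc_info=True attaches the OSError traceback to the log record.
    try:
        os.rmdir(imgPath)
    except OSError:
        log.warning("Can't rmdir %s", imgPath, exc_info=True)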
>
> On 08/22/2013 07:55 AM, Yuriy Demchenko wrote:
>> I've done some more tests, and it seems the quota error is not related
>> to my issue: I tried to remove another disk and this time there were no
>> quota errors in engine.log.
>> New logs attached.
>>
>> What catches my eye in the logs are these errors, but maybe they're not
>> the root cause:
>>> Thread-60725::DEBUG::2013-08-22 10:37:45,549::lvm::485::OperationMutex::(_invalidatevgs)
>>> Operation 'lvm invalidate operation' released the operation mutex
>>> Thread-60725::WARNING::2013-08-22 10:37:45,549::blockSD::931::Storage.StorageDomain::(rmDCVolLinks)
>>> Can't unlink /rhev/data-center/mnt/blockSD/d786e2d5-05ab-4da6-95fc-1af791a3c113/images/5344ca63-302a-43de-9193-da7937fbdfad/dfefc573-de85-4085-8900-da271affe831.
>>> [Errno 2] No such file or directory: '/rhev/data-center/mnt/blockSD/d786e2d5-05ab-4da6-95fc-1af791a3c113/images/5344ca63-302a-43de-9193-da7937fbdfad/dfefc573-de85-4085-8900-da271affe831'
>>> Thread-60725::WARNING::2013-08-22 10:37:45,549::blockSD::931::Storage.StorageDomain::(rmDCVolLinks)
>>> Can't unlink /rhev/data-center/mnt/blockSD/d786e2d5-05ab-4da6-95fc-1af791a3c113/images/5344ca63-302a-43de-9193-da7937fbdfad/c6cd6d1d-b70f-435d-bdc7-713b445a2326.
>>> [Errno 2] No such file or directory: '/rhev/data-center/mnt/blockSD/d786e2d5-05ab-4da6-95fc-1af791a3c113/images/5344ca63-302a-43de-9193-da7937fbdfad/c6cd6d1d-b70f-435d-bdc7-713b445a2326'
>>> Thread-60725::DEBUG::2013-08-22 10:37:45,549::blockSD::934::Storage.StorageDomain::(rmDCVolLinks)
>>> removed: []
>>> Thread-60725::ERROR::2013-08-22 10:37:45,549::task::833::TaskManager.Task::(_setError)
>>> Task=`83867bdc-48cd-4ba0-b453-6f8abbace13e`::Unexpected error
>>> Traceback (most recent call last):
>>>   File "/usr/share/vdsm/storage/task.py", line 840, in _run
>>>     return fn(*args, **kargs)
>>>   File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
>>>     res = f(*args, **kwargs)
>>>   File "/usr/share/vdsm/storage/hsm.py", line 1460, in deleteImage
>>>     dom.deleteImage(sdUUID, imgUUID, volsByImg)
>>>   File "/usr/share/vdsm/storage/blockSD.py", line 957, in deleteImage
>>>     self.rmDCImgDir(imgUUID, volsImgs)
>>>   File "/usr/share/vdsm/storage/blockSD.py", line 943, in rmDCImgDir
>>>     self.log.warning("Can't rmdir %s. %s", imgPath, exc_info=True)
>>>   File "/usr/lib64/python2.6/logging/__init__.py", line 1068, in warning
>>>     self._log(WARNING, msg, args, **kwargs)
>>>   File "/usr/lib64/python2.6/logging/__init__.py", line 1173, in _log
>>>     self.handle(record)
>>>   File "/usr/lib64/python2.6/logging/__init__.py", line 1183, in handle
>>>     self.callHandlers(record)
>>>   File "/usr/lib64/python2.6/logging/__init__.py", line 1220, in callHandlers
>>>     hdlr.handle(record)
>>>   File "/usr/lib64/python2.6/logging/__init__.py", line 679, in handle
>>>     self.emit(record)
>>>   File "/usr/lib64/python2.6/logging/handlers.py", line 780, in emit
>>>     msg = self.format(record)
>>>   File "/usr/lib64/python2.6/logging/__init__.py", line 654, in format
>>>     return fmt.format(record)
>>>   File "/usr/lib64/python2.6/logging/__init__.py", line 436, in format
>>>     record.message = record.getMessage()
>>>   File "/usr/lib64/python2.6/logging/__init__.py", line 306, in getMessage
>>>     msg = msg % self.args
>>> TypeError: not enough arguments for format string
>>
>> Yuriy Demchenko
>>
>> On 08/22/2013 04:11 AM, Greg Padgett wrote:
>>> On 08/21/2013 04:10 PM, Dafna Ron wrote:
>>>> there is an exception in the log related to a quota calculation:
>>>>
>>>> 2013-08-21 17:52:32,694 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl]
>>>> (DefaultQuartzScheduler_Worker-7) failed to invoke sceduled method
>>>> updateQuotaCache: java.lang.reflect.InvocationTargetException
>>>>     at sun.reflect.GeneratedMethodAccessor175.invoke(Unknown Source) [:1.7.0_25]
>>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_25]
>>>>     at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_25]
>>>>     at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [scheduler.jar:]
>>>>     at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
>>>>     at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz.jar:]
>>>> Caused by: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback;
>>>> bad SQL grammar [select * from calculateallstorageusage()]; nested exception is
>>>> org.postgresql.util.PSQLException: ERROR: column "quota_limitation.quota_id"
>>>> must appear in the GROUP BY clause or be used in an aggregate function
>>>> Where: PL/pgSQL function "calculateallstorageusage" line 3 at RETURN QUERY
>>>>
>>>>
>>>> In any case, this is a bug.
>>>> I'm adding Doron to this mail; perhaps this was reported in the past
>>>> and already solved in later versions.
>>>> If not, it should be reported and fixed.
>>>>
>>>> Dafna
>>>>
>>>>
>>> If I'm not mistaken, it looks like this bug:
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=905891
>>>
>>> Greg
>>>
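
For reference, the class of PostgreSQL error the stored procedure is hitting,
reduced to a toy query (table and column names here are only illustrative,
not the actual ovirt-engine schema):

    -- Selecting a plain column alongside an aggregate without grouping fails:
    --   ERROR: column "quota_limitation.quota_id" must appear in the GROUP BY
    --   clause or be used in an aggregate function
    SELECT quota_id, SUM(storage_size_gb) FROM quota_limitation;

    -- Grouping by the non-aggregated column satisfies the planner:
    SELECT quota_id, SUM(storage_size_gb) FROM quota_limitation GROUP BY quota_id;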
>>>>
>>>>
>>>> On 08/21/2013 05:26 PM, Yura Demchenko wrote:
>>>>> 21.08.2013 19:18, Dafna Ron пишет:
>>>>>> from the logs it appears to be a quota issue.
>>>>>> do you have quota enabled?
>>>>>>
>>>>> Yes, quota is in "enforced" mode, but the VMs/disks in question belong
>>>>> to an "unlimited" quota (a defined quota policy with no limits on
>>>>> storage/cpu/ram).
>>>>>
>>>>>> On 08/21/2013 03:20 PM, Yuriy Demchenko wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> I've recently encountered a problem with removing disks from the
>>>>>>> iscsi domain in my test lab - I just can't remove any.
>>>>>>> The remove operation fails with the message "User admin@internal
>>>>>>> failed to initiate removing of disk pg-slave1_opt from domain
>>>>>>> iscsi-store", followed by re-elections of the SPM. After that the
>>>>>>> disk is marked as "illegal" in the ovirt web interface; however, it
>>>>>>> is _in fact_ removed from storage - lvdisplay doesn't show it and
>>>>>>> free space is updated correctly. And this happens with just about
>>>>>>> every disk/VM I try to remove.
>>>>>>>
>>>>>>> ovirt 3.2.2-el6
>>>>>>> centos 6.4
>>>>>>> vdsm-4.10.3-17.el6
>>>>>>> lvm2-2.02.98-9.el6
>>>>>>>
>>>>>>> Any tips? Logs attached.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>
>
> --
> Dafna Ron
>