[Users] Unable to activate data center after host reboot.

Steven Wiltshire stevenwiltshire at gmail.com
Sat Apr 27 14:25:15 UTC 2013


Hi,

I am using NFS storage, although the export is local to the host.

Looking into the NFS problems further, I read the NFS troubleshooting 
section of the oVirt wiki and made a few changes. nfs-check.py gives me 
the following:

------------------------------------------------------------------
Current hostname: bigserver - IP addr 10.0.0.154
Trying to /bin/mount -t nfs 10.0.0.154:/home/vm/data/data...
Executing NFS tests..
Removing vdsmTest file..
Status of tests [OK]
Disconnecting from NFS Server..
Done!
------------------------------------------------------------------
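For anyone reproducing this, the check above is essentially a mount/write/unmount cycle. A minimal sketch of the same probe follows; the mount options mirror the ones that appear later in vdsm.log, but the temp-directory handling and probe file name are illustrative assumptions, not taken from nfs-check.py itself:

```python
# Sketch of an NFS reachability probe in the spirit of nfs-check.py:
# mount the export, write and remove a probe file, then unmount.
# Requires root; option string copied from the vdsm.log mount command.
import os
import subprocess
import tempfile

def mount_argv(export, mountpoint,
               opts="soft,nosharecache,timeo=600,retrans=6"):
    """Build the mount command line used to attach an NFS export."""
    return ["/bin/mount", "-t", "nfs", "-o", opts, export, mountpoint]

def check_export(export):
    """Mount the export, touch and remove a probe file, unmount."""
    mountpoint = tempfile.mkdtemp()
    subprocess.check_call(mount_argv(export, mountpoint))
    try:
        probe = os.path.join(mountpoint, "vdsmTest")
        open(probe, "w").close()   # write test
        os.remove(probe)           # cleanup, as in "Removing vdsmTest file.."
    finally:
        subprocess.check_call(["/bin/umount", mountpoint])
        os.rmdir(mountpoint)
```

Note that a passing probe only proves the export is mountable and writable; it says nothing about the storage-domain metadata underneath, which is where the activation below fails.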

Success! Or so it seems: when the machine boots, no NFS shares are 
mounted. When I attempt to activate the data centre in the oVirt web 
admin tool, the activation fails, but an NFS share does get mounted:

------------------------------------------------------------------
10.0.0.154:/home/vm/data/data on 
/rhev/data-center/mnt/10.0.0.154:_home_vm_data_data type nfs 
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.0.0.154,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.0.0.154)
------------------------------------------------------------------
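Incidentally, the mount point in that listing follows VDSM's naming convention: each storage connection is mounted under /rhev/data-center/mnt as `<server>:<export path with every "/" replaced by "_">`. A small sketch of the mapping (the helper name is mine, not a VDSM API):

```python
# Map an NFS connection string to the directory VDSM mounts it on,
# e.g. 10.0.0.154:/home/vm/data/data
#   -> /rhev/data-center/mnt/10.0.0.154:_home_vm_data_data
def vdsm_mount_dir(connection, root="/rhev/data-center/mnt"):
    server, path = connection.split(":", 1)
    return "%s/%s:%s" % (root, server, path.replace("/", "_"))
```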

Here is the engine.log associated with trying to activate the data centre:

------------------------------------------------------------------
2013-04-27 22:45:01,229 INFO 
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] 
(pool-5-thread-44) Lock Acquired to object EngineLock [exclusiveLocks= 
key: org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand value: 025062dd-d69e-40ac-9031-c080b51647ae, sharedLocks= ]
2013-04-27 22:45:01,231 INFO 
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] 
(pool-5-thread-44) Running command: ActivateStorageDomainCommand 
internal: false. Entities affected :  ID: 025062dd-d69e-40ac-9031-c080b51647ae Type: Storage
2013-04-27 22:45:01,243 INFO 
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] 
(pool-5-thread-44) Lock freed to object EngineLock [exclusiveLocks= key: 
org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand value: 025062dd-d69e-40ac-9031-c080b51647ae, sharedLocks= ]
2013-04-27 22:45:01,244 INFO 
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] 
(pool-5-thread-44) ActivateStorage Domain. Before Connect all hosts to 
pool. Time:4/27/13 10:45 PM
2013-04-27 22:45:01,285 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] 
(pool-5-thread-50) START, 
ValidateStorageServerConnectionVDSCommand(vdsId = 
aeac7aa6-8840-11e1-99aa-7fbd3e56e26e, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, 
connectionList = [{ id: 60e73855-daa2-45f5-9c59-5973ca89e830, 
connection: 10.0.0.154:/home/vm/data/data };]), log id: 27174693
2013-04-27 22:45:01,400 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand] 
(pool-5-thread-50) FINISH, ValidateStorageServerConnectionVDSCommand, 
return: {60e73855-daa2-45f5-9c59-5973ca89e830=0}, log id: 27174693
2013-04-27 22:45:01,402 INFO 
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
(pool-5-thread-50) Running command: ConnectStorageToVdsCommand internal: 
true. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2013-04-27 22:45:01,412 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(pool-5-thread-50) START, ConnectStorageServerVDSCommand(vdsId = 
aeac7aa6-8840-11e1-99aa-7fbd3e56e26e, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: 
60e73855-daa2-45f5-9c59-5973ca89e830, connection: 
10.0.0.154:/home/vm/data/data };]), log id: 397edfdf
2013-04-27 22:45:01,447 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(pool-5-thread-50) FINISH, ConnectStorageServerVDSCommand, return: 
{60e73855-daa2-45f5-9c59-5973ca89e830=0}, log id: 397edfdf
2013-04-27 22:45:01,450 INFO 
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] 
(pool-5-thread-44) ActivateStorage Domain. After Connect all hosts to 
pool. Time:4/27/13 10:45 PM
2013-04-27 22:45:01,451 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] 
(pool-5-thread-44) START, ActivateStorageDomainVDSCommand(storagePoolId 
= 2b633f9a-8840-11e1-9ed7-17e6c491db84, ignoreFailoverLimit = false, 
compatabilityVersion = null, storageDomainId = 
025062dd-d69e-40ac-9031-c080b51647ae), log id: 4a58fee6
2013-04-27 22:45:30,061 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(QuartzScheduler_Worker-95) Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand 
return value
  Class Name: 
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus                       Class Name: 
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode                         304
mMessage                      Cannot find master domain: 
'spUUID=2b633f9a-8840-11e1-9ed7-17e6c491db84, 
msdUUID=025062dd-d69e-40ac-9031-c080b51647ae'


2013-04-27 22:45:30,062 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(QuartzScheduler_Worker-95) Vds: BigServer
2013-04-27 22:45:30,062 ERROR 
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] 
(QuartzScheduler_Worker-95) Command ConnectStoragePoolVDS execution 
failed. Exception: IRSNoMasterDomainException: IRSGenericException: 
IRSErrorException: IRSNoMasterDomainException: Cannot find master 
domain: 'spUUID=2b633f9a-8840-11e1-9ed7-17e6c491db84, 
msdUUID=025062dd-d69e-40ac-9031-c080b51647ae'
2013-04-27 22:45:30,063 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] 
(QuartzScheduler_Worker-95) FINISH, ConnectStoragePoolVDSCommand, log 
id: 1891eb24
2013-04-27 22:45:30,064 INFO 
[org.ovirt.engine.core.utils.transaction.TransactionSupport] 
(QuartzScheduler_Worker-95) transaction rolled back
2013-04-27 22:45:30,064 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-95) IrsBroker::Failed::GetStoragePoolInfoVDS
2013-04-27 22:45:30,065 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-95) Exception: IRSGenericException: 
IRSErrorException: IRSNoMasterDomainException: Cannot find master 
domain: 'spUUID=2b633f9a-8840-11e1-9ed7-17e6c491db84, 
msdUUID=025062dd-d69e-40ac-9031-c080b51647ae'
2013-04-27 22:45:30,086 WARN 
[org.ovirt.engine.core.bll.storage.ReconstructMasterDomainCommand] 
(QuartzScheduler_Worker-95) CanDoAction of action 
ReconstructMasterDomain failed. 
Reasons:VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2
2013-04-27 22:45:30,120 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(pool-5-thread-44) hostFromVds::selectedVds - BigServer, spmStatus 
Unknown_Pool, storage pool Default
2013-04-27 22:45:30,146 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] 
(pool-5-thread-44) START, ConnectStoragePoolVDSCommand(vdsId = 
aeac7aa6-8840-11e1-99aa-7fbd3e56e26e, storagePoolId = 
2b633f9a-8840-11e1-9ed7-17e6c491db84, vds_spm_id = 1, masterDomainId = 
025062dd-d69e-40ac-9031-c080b51647ae, masterVersion = 1), log id: 460d5550
2013-04-27 22:45:43,303 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-7) starting ProcessDomainRecovery for domain 
503e6677-2d51-4668-ab17-f914bdb57c19
2013-04-27 22:45:43,327 WARN 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-7) domain 503e6677-2d51-4668-ab17-f914bdb57c19 
was reported by all hosts in status UP as problematic. Moving the Domain 
to NonOperational.
2013-04-27 22:45:43,335 INFO 
[org.ovirt.engine.core.bll.storage.HandleFailedStorageDomainCommand] 
(QuartzScheduler_Worker-7) Running command: 
HandleFailedStorageDomainCommand internal: true. Entities affected :  
ID: 503e6677-2d51-4668-ab17-f914bdb57c19 Type: Storage
2013-04-27 22:45:43,344 WARN 
[org.ovirt.engine.core.bll.storage.DeactivateStorageDomainCommand] 
(QuartzScheduler_Worker-7) CanDoAction of action DeactivateStorageDomain 
failed. 
Reasons:VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__DEACTIVATE,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL
2013-04-27 22:45:43,349 ERROR 
[org.ovirt.engine.core.bll.storage.HandleFailedStorageDomainCommand] 
(QuartzScheduler_Worker-7) Transaction rolled-back for command: 
org.ovirt.engine.core.bll.storage.HandleFailedStorageDomainCommand.
2013-04-27 22:45:43,350 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(QuartzScheduler_Worker-7) No string for UNASSIGNED type. Use default Log
2013-04-27 22:45:43,351 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-8) starting ProcessDomainRecovery for domain 
d30604e3-3ca8-470a-8da4-18c9402f7504
2013-04-27 22:45:43,360 WARN 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-8) domain d30604e3-3ca8-470a-8da4-18c9402f7504 
was reported by all hosts in status UP as problematic. Moving the Domain 
to NonOperational.
2013-04-27 22:45:43,368 INFO 
[org.ovirt.engine.core.bll.storage.HandleFailedStorageDomainCommand] 
(QuartzScheduler_Worker-8) Running command: 
HandleFailedStorageDomainCommand internal: true. Entities affected :  
ID: d30604e3-3ca8-470a-8da4-18c9402f7504 Type: Storage
2013-04-27 22:45:43,376 WARN 
[org.ovirt.engine.core.bll.storage.DeactivateStorageDomainCommand] 
(QuartzScheduler_Worker-8) CanDoAction of action DeactivateStorageDomain 
failed. 
Reasons:VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__DEACTIVATE,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL
2013-04-27 22:45:43,381 ERROR 
[org.ovirt.engine.core.bll.storage.HandleFailedStorageDomainCommand] 
(QuartzScheduler_Worker-8) Transaction rolled-back for command: 
org.ovirt.engine.core.bll.storage.HandleFailedStorageDomainCommand.
2013-04-27 22:45:43,381 INFO 
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(QuartzScheduler_Worker-8) No string for UNASSIGNED type. Use default Log
2013-04-27 22:45:48,445 WARN 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-23) domain 503e6677-2d51-4668-ab17-f914bdb57c19 
in problem. vds: BigServer
2013-04-27 22:46:34,040 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-5-thread-44) Command 
org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand 
return value
  Class Name: 
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus                       Class Name: 
org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode                         304
mMessage                      Cannot find master domain: 
'spUUID=2b633f9a-8840-11e1-9ed7-17e6c491db84, 
msdUUID=025062dd-d69e-40ac-9031-c080b51647ae'


2013-04-27 22:46:34,040 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] 
(pool-5-thread-44) Vds: BigServer
2013-04-27 22:46:34,040 ERROR 
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-5-thread-44) 
Command ConnectStoragePoolVDS execution failed. Exception: 
IRSNoMasterDomainException: IRSGenericException: IRSErrorException: 
IRSNoMasterDomainException: Cannot find master domain: 
'spUUID=2b633f9a-8840-11e1-9ed7-17e6c491db84, 
msdUUID=025062dd-d69e-40ac-9031-c080b51647ae'
2013-04-27 22:46:34,041 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] 
(pool-5-thread-44) FINISH, ConnectStoragePoolVDSCommand, log id: 460d5550
2013-04-27 22:46:34,042 INFO 
[org.ovirt.engine.core.utils.transaction.TransactionSupport] 
(pool-5-thread-44) transaction rolled back
2013-04-27 22:46:34,042 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(pool-5-thread-44) IrsBroker::Failed::ActivateStorageDomainVDS
2013-04-27 22:46:34,043 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(pool-5-thread-44) Exception: IRSGenericException: IRSErrorException: 
IRSNoMasterDomainException: Cannot find master domain: 
'spUUID=2b633f9a-8840-11e1-9ed7-17e6c491db84, 
msdUUID=025062dd-d69e-40ac-9031-c080b51647ae'
2013-04-27 22:46:34,058 WARN 
[org.ovirt.engine.core.bll.storage.ReconstructMasterDomainCommand] 
(pool-5-thread-44) CanDoAction of action ReconstructMasterDomain failed. 
Reasons:VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2
2013-04-27 22:46:34,059 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] 
(pool-5-thread-44) FINISH, ActivateStorageDomainVDSCommand, log id: 
4a58fee6
2013-04-27 22:46:34,059 ERROR 
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] 
(pool-5-thread-44) Command 
org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand throw Vdc 
Bll exception. With error message VdcBLLException: 
org.ovirt.engine.core.vdsbroker.irsbroker.IRSNoMasterDomainException: 
IRSGenericException: IRSErrorException: IRSNoMasterDomainException: 
Cannot find master domain: 'spUUID=2b633f9a-8840-11e1-9ed7-17e6c491db84, 
msdUUID=025062dd-d69e-40ac-9031-c080b51647ae'
2013-04-27 22:46:34,067 INFO 
[org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] 
(pool-5-thread-44) Command [id=7fe48e9f-b3c3-455e-a869-7cde583084cf]: 
Compensating CHANGED_STATUS_ONLY of 
org.ovirt.engine.core.common.businessentities.storage_pool_iso_map; 
snapshot: EntityStatusSnapshot [id=storagePoolId = 
2b633f9a-8840-11e1-9ed7-17e6c491db84, storageId = 
025062dd-d69e-40ac-9031-c080b51647ae, status=InActive].
2013-04-27 22:46:34,094 INFO 
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] 
(QuartzScheduler_Worker-93) hostFromVds::selectedVds - BigServer, 
spmStatus Unknown_Pool, storage pool Default
2013-04-27 22:46:34,106 INFO 
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] 
(QuartzScheduler_Worker-93) START, ConnectStoragePoolVDSCommand(vdsId = 
aeac7aa6-8840-11e1-99aa-7fbd3e56e26e, storagePoolId = 
2b633f9a-8840-11e1-9ed7-17e6c491db84, vds_spm_id = 1, masterDomainId = 
025062dd-d69e-40ac-9031-c080b51647ae, masterVersion = 1), log id: 9632afe
------------------------------------------------------------------
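The same "Cannot find master domain" pair repeats throughout the log, so for what it's worth, here is a small helper I used to pull the pool and master-domain UUIDs out of each occurrence (the helper itself is mine; only the message format comes from the log):

```python
import re

# Extract (spUUID, msdUUID) from an engine.log / vdsm.log line carrying
# the "Cannot find master domain" error shown above.
PATTERN = re.compile(r"spUUID=([0-9a-f-]{36}), msdUUID=([0-9a-f-]{36})")

def master_domain_ids(line):
    """Return the (spUUID, msdUUID) pair, or None if absent."""
    m = PATTERN.search(line)
    return m.groups() if m else None
```

Every occurrence resolves to the same pair, so the engine is consistently asking for one master domain that VDSM cannot read.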

Here is vdsm.log for the same time period:

------------------------------------------------------------------
Thread-309::DEBUG::2013-04-27 
22:45:01,297::clientIF::261::Storage.Dispatcher.Protect::(wrapper) 
[10.0.0.154]
Thread-309::DEBUG::2013-04-27 
22:45:01,298::task::588::TaskManager.Task::(_updateState) 
Task=`da5d1ac1-8b0e-4a9a-bd28-24c30125654a`::moving from state init -> 
state preparing
Thread-309::INFO::2013-04-27 
22:45:01,299::logUtils::37::dispatcher::(wrapper) Run and protect: 
validateStorageServerConnection(domType=1, 
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': 
'10.0.0.154:/home/vm/data/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '60e73855-daa2-45f5-9c59-5973ca89e830', 'port': ''}], options=None)
Thread-309::INFO::2013-04-27 
22:45:01,300::storage_connection::188::Storage.ServerConnection::(validate) 
Request to validate NFS storage server
Thread-309::DEBUG::2013-04-27 
22:45:01,301::mount::111::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo 
-n /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6 
10.0.0.154:/home/vm/data/data /tmp/tmpXShfR0' (cwd None)
Thread-309::DEBUG::2013-04-27 
22:45:01,348::mount::111::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo 
-n /bin/umount /tmp/tmpXShfR0' (cwd None)
Thread-309::INFO::2013-04-27 
22:45:01,394::logUtils::39::dispatcher::(wrapper) Run and protect: 
validateStorageServerConnection, Return response: {'statuslist': 
[{'status': 0, 'id': '60e73855-daa2-45f5-9c59-5973ca89e830'}]}
Thread-309::DEBUG::2013-04-27 
22:45:01,395::task::1174::TaskManager.Task::(prepare) 
Task=`da5d1ac1-8b0e-4a9a-bd28-24c30125654a`::finished: {'statuslist': 
[{'status': 0, 'id': '60e73855-daa2-45f5-9c59-5973ca89e830'}]}
Thread-309::DEBUG::2013-04-27 
22:45:01,396::task::588::TaskManager.Task::(_updateState) 
Task=`da5d1ac1-8b0e-4a9a-bd28-24c30125654a`::moving from state preparing 
-> state finished
Thread-309::DEBUG::2013-04-27 
22:45:01,397::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-309::DEBUG::2013-04-27 
22:45:01,397::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-309::DEBUG::2013-04-27 
22:45:01,398::task::980::TaskManager.Task::(_decref) 
Task=`da5d1ac1-8b0e-4a9a-bd28-24c30125654a`::ref 0 aborting False
Thread-310::DEBUG::2013-04-27 
22:45:01,424::clientIF::261::Storage.Dispatcher.Protect::(wrapper) 
[10.0.0.154]
Thread-310::DEBUG::2013-04-27 
22:45:01,425::task::588::TaskManager.Task::(_updateState) 
Task=`6e1d398a-02dc-423d-b500-6f7f8ccb20e7`::moving from state init -> 
state preparing
Thread-310::INFO::2013-04-27 
22:45:01,425::logUtils::37::dispatcher::(wrapper) Run and protect: 
connectStorageServer(domType=1, 
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': 
'10.0.0.154:/home/vm/data/data', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': '60e73855-daa2-45f5-9c59-5973ca89e830', 'port': ''}], options=None)
Thread-310::INFO::2013-04-27 
22:45:01,426::storage_connection::146::Storage.ServerConnection::(connect) 
Request to connect NFS storage server
Thread-310::DEBUG::2013-04-27 
22:45:01,438::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-310::DEBUG::2013-04-27 
22:45:01,438::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-310::DEBUG::2013-04-27 
22:45:01,439::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-310::DEBUG::2013-04-27 
22:45:01,440::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-310::DEBUG::2013-04-27 
22:45:01,440::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-310::DEBUG::2013-04-27 
22:45:01,441::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-310::INFO::2013-04-27 
22:45:01,442::logUtils::39::dispatcher::(wrapper) Run and protect: 
connectStorageServer, Return response: {'statuslist': [{'status': 0, 
'id': '60e73855-daa2-45f5-9c59-5973ca89e830'}]}
Thread-310::DEBUG::2013-04-27 
22:45:01,442::task::1174::TaskManager.Task::(prepare) 
Task=`6e1d398a-02dc-423d-b500-6f7f8ccb20e7`::finished: {'statuslist': 
[{'status': 0, 'id': '60e73855-daa2-45f5-9c59-5973ca89e830'}]}
Thread-310::DEBUG::2013-04-27 
22:45:01,443::task::588::TaskManager.Task::(_updateState) 
Task=`6e1d398a-02dc-423d-b500-6f7f8ccb20e7`::moving from state preparing 
-> state finished
Thread-310::DEBUG::2013-04-27 
22:45:01,443::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-310::DEBUG::2013-04-27 
22:45:01,444::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-310::DEBUG::2013-04-27 
22:45:01,444::task::980::TaskManager.Task::(_decref) 
Task=`6e1d398a-02dc-423d-b500-6f7f8ccb20e7`::ref 0 aborting False
Thread-314::DEBUG::2013-04-27 
22:45:07,704::task::588::TaskManager.Task::(_updateState) 
Task=`f43264f4-0160-4737-8e1a-1322b35cfe3f`::moving from state init -> 
state preparing
Thread-314::INFO::2013-04-27 
22:45:07,705::logUtils::37::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-314::INFO::2013-04-27 
22:45:07,705::logUtils::39::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {}
Thread-314::DEBUG::2013-04-27 
22:45:07,706::task::1174::TaskManager.Task::(prepare) 
Task=`f43264f4-0160-4737-8e1a-1322b35cfe3f`::finished: {}
Thread-314::DEBUG::2013-04-27 
22:45:07,706::task::588::TaskManager.Task::(_updateState) 
Task=`f43264f4-0160-4737-8e1a-1322b35cfe3f`::moving from state preparing 
-> state finished
Thread-314::DEBUG::2013-04-27 
22:45:07,707::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-314::DEBUG::2013-04-27 
22:45:07,708::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-314::DEBUG::2013-04-27 
22:45:07,708::task::980::TaskManager.Task::(_decref) 
Task=`f43264f4-0160-4737-8e1a-1322b35cfe3f`::ref 0 aborting False
Thread-320::DEBUG::2013-04-27 
22:45:17,899::task::588::TaskManager.Task::(_updateState) 
Task=`b3c57791-d573-4dc5-99ef-9f17578a54d2`::moving from state init -> 
state preparing
Thread-320::INFO::2013-04-27 
22:45:17,900::logUtils::37::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-320::INFO::2013-04-27 
22:45:17,901::logUtils::39::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {}
Thread-320::DEBUG::2013-04-27 
22:45:17,901::task::1174::TaskManager.Task::(prepare) 
Task=`b3c57791-d573-4dc5-99ef-9f17578a54d2`::finished: {}
Thread-320::DEBUG::2013-04-27 
22:45:17,902::task::588::TaskManager.Task::(_updateState) 
Task=`b3c57791-d573-4dc5-99ef-9f17578a54d2`::moving from state preparing 
-> state finished
Thread-320::DEBUG::2013-04-27 
22:45:17,902::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-320::DEBUG::2013-04-27 
22:45:17,903::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-320::DEBUG::2013-04-27 
22:45:17,904::task::980::TaskManager.Task::(_decref) 
Task=`b3c57791-d573-4dc5-99ef-9f17578a54d2`::ref 0 aborting False
Thread-326::DEBUG::2013-04-27 
22:45:28,062::task::588::TaskManager.Task::(_updateState) 
Task=`4cec58af-afca-4003-a3d0-789ed881116f`::moving from state init -> 
state preparing
Thread-326::INFO::2013-04-27 
22:45:28,063::logUtils::37::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-326::INFO::2013-04-27 
22:45:28,063::logUtils::39::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {}
Thread-326::DEBUG::2013-04-27 
22:45:28,064::task::1174::TaskManager.Task::(prepare) 
Task=`4cec58af-afca-4003-a3d0-789ed881116f`::finished: {}
Thread-326::DEBUG::2013-04-27 
22:45:28,065::task::588::TaskManager.Task::(_updateState) 
Task=`4cec58af-afca-4003-a3d0-789ed881116f`::moving from state preparing 
-> state finished
Thread-326::DEBUG::2013-04-27 
22:45:28,065::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-326::DEBUG::2013-04-27 
22:45:28,066::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-326::DEBUG::2013-04-27 
22:45:28,066::task::980::TaskManager.Task::(_decref) 
Task=`4cec58af-afca-4003-a3d0-789ed881116f`::ref 0 aborting False
Thread-286::ERROR::2013-04-27 
22:45:30,046::sdc::113::Storage.StorageDomainCache::(_findDomain) Error 
while looking for domain `025062dd-d69e-40ac-9031-c080b51647ae`
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/sdc.py", line 109, in _findDomain
     return mod.findDomain(sdUUID)
   File "/usr/share/vdsm/storage/nfsSD.py", line 149, in findDomain
     return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
   File "/usr/share/vdsm/storage/fileSD.py", line 104, in __init__
     sdUUID = metadata[sd.DMDK_SDUUID]
   File "/usr/share/vdsm/storage/persistentDict.py", line 75, in __getitem__
     return dec(self._dict[key])
   File "/usr/share/vdsm/storage/persistentDict.py", line 183, in __getitem__
     with self._accessWrapper():
   File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
     return self.gen.next()
   File "/usr/share/vdsm/storage/persistentDict.py", line 137, in _accessWrapper
     self.refresh()
   File "/usr/share/vdsm/storage/persistentDict.py", line 214, in refresh
     lines = self._metaRW.readlines()
   File "/usr/share/vdsm/storage/fileSD.py", line 71, in readlines
     return misc.stripNewLines(self._oop.directReadLines(self._metafile))
   File "/usr/share/vdsm/storage/processPool.py", line 53, in wrapper
     return self.runExternally(func, *args, **kwds)
   File "/usr/share/vdsm/storage/processPool.py", line 64, in runExternally
     return self._procPool.runExternally(*args, **kwargs)
   File "/usr/share/vdsm/storage/processPool.py", line 145, in runExternally
     raise Timeout("Operation Stuck")
Timeout: Operation Stuck
Thread-286::DEBUG::2013-04-27 
22:45:30,048::resourceManager::535::ResourceManager::(releaseResource) 
Trying to release resource 'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84'
Thread-286::DEBUG::2013-04-27 
22:45:30,049::resourceManager::550::ResourceManager::(releaseResource) 
Released resource 'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84' (0 
active users)
Thread-286::DEBUG::2013-04-27 
22:45:30,049::resourceManager::555::ResourceManager::(releaseResource) 
Resource 'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84' is free, finding 
out if anyone is waiting for it.
Thread-286::DEBUG::2013-04-27 
22:45:30,050::resourceManager::562::ResourceManager::(releaseResource) 
No one is waiting for resource 
'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84', Clearing records.
Thread-286::ERROR::2013-04-27 
22:45:30,051::task::855::TaskManager.Task::(_setError) 
Task=`a7d4f658-136b-46f5-9458-f86f059498b1`::Unexpected error
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 863, in _run
     return fn(*args, **kargs)
   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
     res = f(*args, **kwargs)
   File "/usr/share/vdsm/storage/hsm.py", line 721, in connectStoragePool
     return self._connectStoragePool(spUUID, hostID, scsiKey, msdUUID, masterVersion, options)
   File "/usr/share/vdsm/storage/hsm.py", line 763, in _connectStoragePool
     res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 624, in connect
     self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 1097, in __rebuild
     self.masterDomain = self.getMasterDomain(msdUUID=msdUUID, masterVersion=masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 1435, in getMasterDomain
     raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: 
'spUUID=2b633f9a-8840-11e1-9ed7-17e6c491db84, 
msdUUID=025062dd-d69e-40ac-9031-c080b51647ae'
Thread-286::DEBUG::2013-04-27 
22:45:30,052::task::874::TaskManager.Task::(_run) 
Task=`a7d4f658-136b-46f5-9458-f86f059498b1`::Task._run: 
a7d4f658-136b-46f5-9458-f86f059498b1 
('2b633f9a-8840-11e1-9ed7-17e6c491db84', 1, 
'2b633f9a-8840-11e1-9ed7-17e6c491db84', 
'025062dd-d69e-40ac-9031-c080b51647ae', 1) {} failed - stopping task
Thread-286::DEBUG::2013-04-27 
22:45:30,052::task::1201::TaskManager.Task::(stop) 
Task=`a7d4f658-136b-46f5-9458-f86f059498b1`::stopping in state preparing 
(force False)
Thread-286::DEBUG::2013-04-27 
22:45:30,053::task::980::TaskManager.Task::(_decref) 
Task=`a7d4f658-136b-46f5-9458-f86f059498b1`::ref 1 aborting True
Thread-286::INFO::2013-04-27 
22:45:30,053::task::1159::TaskManager.Task::(prepare) 
Task=`a7d4f658-136b-46f5-9458-f86f059498b1`::aborting: Task is aborted: 
'Cannot find master domain' - code 304
Thread-286::DEBUG::2013-04-27 
22:45:30,054::task::1164::TaskManager.Task::(prepare) 
Task=`a7d4f658-136b-46f5-9458-f86f059498b1`::Prepare: aborted: Cannot 
find master domain
Thread-286::DEBUG::2013-04-27 
22:45:30,054::task::980::TaskManager.Task::(_decref) 
Task=`a7d4f658-136b-46f5-9458-f86f059498b1`::ref 0 aborting True
Thread-286::DEBUG::2013-04-27 
22:45:30,055::task::915::TaskManager.Task::(_doAbort) 
Task=`a7d4f658-136b-46f5-9458-f86f059498b1`::Task._doAbort: force False
Thread-286::DEBUG::2013-04-27 
22:45:30,055::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-286::DEBUG::2013-04-27 
22:45:30,056::task::588::TaskManager.Task::(_updateState) 
Task=`a7d4f658-136b-46f5-9458-f86f059498b1`::moving from state preparing 
-> state aborting
Thread-286::DEBUG::2013-04-27 
22:45:30,056::task::537::TaskManager.Task::(__state_aborting) 
Task=`a7d4f658-136b-46f5-9458-f86f059498b1`::_aborting: recover policy none
Thread-286::DEBUG::2013-04-27 
22:45:30,057::task::588::TaskManager.Task::(_updateState) 
Task=`a7d4f658-136b-46f5-9458-f86f059498b1`::moving from state aborting 
-> state failed
Thread-286::DEBUG::2013-04-27 
22:45:30,057::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-286::DEBUG::2013-04-27 
22:45:30,058::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-286::ERROR::2013-04-27 
22:45:30,059::dispatcher::90::Storage.Dispatcher.Protect::(run) 
{'status': {'message': "Cannot find master domain: 
'spUUID=2b633f9a-8840-11e1-9ed7-17e6c491db84, 
msdUUID=025062dd-d69e-40ac-9031-c080b51647ae'", 'code': 304}}
Thread-329::DEBUG::2013-04-27 
22:45:30,107::clientIF::261::Storage.Dispatcher.Protect::(wrapper) 
[10.0.0.154]
Thread-329::DEBUG::2013-04-27 
22:45:30,108::task::588::TaskManager.Task::(_updateState) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::moving from state init -> 
state preparing
Thread-329::INFO::2013-04-27 
22:45:30,109::logUtils::37::dispatcher::(wrapper) Run and protect: 
getSpmStatus(spUUID='2b633f9a-8840-11e1-9ed7-17e6c491db84', options=None)
Thread-329::ERROR::2013-04-27 
22:45:30,110::task::855::TaskManager.Task::(_setError) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::Unexpected error
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 863, in _run
     return fn(*args, **kargs)
   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
     res = f(*args, **kwargs)
   File "/usr/share/vdsm/storage/hsm.py", line 438, in getSpmStatus
     pool = self.getPool(spUUID)
   File "/usr/share/vdsm/storage/hsm.py", line 186, in getPool
     raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: 
('2b633f9a-8840-11e1-9ed7-17e6c491db84',)
Thread-329::DEBUG::2013-04-27 
22:45:30,110::task::874::TaskManager.Task::(_run) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::Task._run: 
162609d1-b21e-49ac-b4e6-061283cfa375 
('2b633f9a-8840-11e1-9ed7-17e6c491db84',) {} failed - stopping task
Thread-329::DEBUG::2013-04-27 
22:45:30,111::task::1201::TaskManager.Task::(stop) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::stopping in state preparing 
(force False)
Thread-329::DEBUG::2013-04-27 
22:45:30,111::task::980::TaskManager.Task::(_decref) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::ref 1 aborting True
Thread-329::INFO::2013-04-27 
22:45:30,112::task::1159::TaskManager.Task::(prepare) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::aborting: Task is aborted: 
'Unknown pool id, pool not connected' - code 309
Thread-329::DEBUG::2013-04-27 
22:45:30,112::task::1164::TaskManager.Task::(prepare) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::Prepare: aborted: Unknown 
pool id, pool not connected
Thread-329::DEBUG::2013-04-27 
22:45:30,113::task::980::TaskManager.Task::(_decref) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::ref 0 aborting True
Thread-329::DEBUG::2013-04-27 
22:45:30,114::task::915::TaskManager.Task::(_doAbort) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::Task._doAbort: force False
Thread-329::DEBUG::2013-04-27 
22:45:30,114::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-329::DEBUG::2013-04-27 
22:45:30,115::task::588::TaskManager.Task::(_updateState) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::moving from state preparing 
-> state aborting
Thread-329::DEBUG::2013-04-27 
22:45:30,115::task::537::TaskManager.Task::(__state_aborting) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::_aborting: recover policy none
Thread-329::DEBUG::2013-04-27 
22:45:30,116::task::588::TaskManager.Task::(_updateState) 
Task=`162609d1-b21e-49ac-b4e6-061283cfa375`::moving from state aborting 
-> state failed
Thread-329::DEBUG::2013-04-27 
22:45:30,116::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-329::DEBUG::2013-04-27 
22:45:30,117::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-329::ERROR::2013-04-27 
22:45:30,117::dispatcher::90::Storage.Dispatcher.Protect::(run) 
{'status': {'message': "Unknown pool id, pool not connected: 
('2b633f9a-8840-11e1-9ed7-17e6c491db84',)", 'code': 309}}
Thread-331::DEBUG::2013-04-27 
22:45:30,159::clientIF::261::Storage.Dispatcher.Protect::(wrapper) 
[10.0.0.154]
Thread-331::DEBUG::2013-04-27 
22:45:30,160::task::588::TaskManager.Task::(_updateState) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::moving from state init -> 
state preparing
Thread-331::INFO::2013-04-27 
22:45:30,161::logUtils::37::dispatcher::(wrapper) Run and protect: 
connectStoragePool(spUUID='2b633f9a-8840-11e1-9ed7-17e6c491db84', 
hostID=1, scsiKey='2b633f9a-8840-11e1-9ed7-17e6c491db84', 
msdUUID='025062dd-d69e-40ac-9031-c080b51647ae', masterVersion=1, 
options=None)
Thread-331::DEBUG::2013-04-27 
22:45:30,162::resourceManager::175::ResourceManager.Request::(__init__) 
ResName=`Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84`ReqID=`f6f7913c-4cb2-4811-baa8-431b4e4eac91`::Request 
was made in '/usr/share/vdsm/storage/hsm.py' line '747' at 
'_connectStoragePool'
Thread-331::DEBUG::2013-04-27 
22:45:30,162::resourceManager::483::ResourceManager::(registerResource) 
Trying to register resource 
'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84' for lock type 'exclusive'
Thread-331::DEBUG::2013-04-27 
22:45:30,163::resourceManager::525::ResourceManager::(registerResource) 
Resource 'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84' is free. Now 
locking as 'exclusive' (1 active user)
Thread-331::DEBUG::2013-04-27 
22:45:30,164::resourceManager::212::ResourceManager.Request::(grant) 
ResName=`Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84`ReqID=`f6f7913c-4cb2-4811-baa8-431b4e4eac91`::Granted 
request
Thread-331::INFO::2013-04-27 
22:45:30,165::sp::608::Storage.StoragePool::(connect) Connect host #1 to 
the storage pool 2b633f9a-8840-11e1-9ed7-17e6c491db84 with master 
domain: 025062dd-d69e-40ac-9031-c080b51647ae (ver = 1)
Thread-331::DEBUG::2013-04-27 
22:45:30,166::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:30,166::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:30,167::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:30,167::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:30,168::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:30,169::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:30,169::misc::1017::SamplingMethod::(__call__) Trying to enter 
sampling method (storage.sdc.refreshStorage)
Thread-331::DEBUG::2013-04-27 
22:45:30,170::misc::1019::SamplingMethod::(__call__) Got in to sampling 
method
Thread-331::DEBUG::2013-04-27 
22:45:30,170::misc::1017::SamplingMethod::(__call__) Trying to enter 
sampling method (storage.iscsi.rescan)
Thread-331::DEBUG::2013-04-27 
22:45:30,171::misc::1019::SamplingMethod::(__call__) Got in to sampling 
method
Thread-331::DEBUG::2013-04-27 
22:45:30,172::iscsi::389::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo 
-n /sbin/iscsiadm -m session -R' (cwd None)
Thread-331::DEBUG::2013-04-27 
22:45:30,194::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED: <err> = 
'iscsiadm: No session found.\n'; <rc> = 21
Thread-331::DEBUG::2013-04-27 
22:45:30,195::misc::1027::SamplingMethod::(__call__) Returning last result
Thread-331::DEBUG::2013-04-27 
22:45:31,778::multipath::71::Storage.Misc.excCmd::(rescan) 
'/usr/bin/sudo -n /sbin/multipath' (cwd None)
Thread-331::DEBUG::2013-04-27 
22:45:31,807::multipath::71::Storage.Misc.excCmd::(rescan) SUCCESS: 
<err> = ''; <rc> = 0
Thread-331::DEBUG::2013-04-27 
22:45:31,808::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:31,808::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:31,809::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:31,810::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:31,810::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:31,811::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:31,811::misc::1027::SamplingMethod::(__call__) Returning last result
Thread-331::DEBUG::2013-04-27 
22:45:31,812::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm 
reload operation' got the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:31,815::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n 
/sbin/lvm vgs --config " devices { preferred_names = 
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 
disable_after_error_count=3 filter = [ 
\\"a%36782bcb063f75600162a25c119f14819%\\", \\"r%.*%\\" ] } global {  
locking_type=1  prioritise_write_locks=1 wait_for_locks=1 } backup {  
retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix 
--separator | -o 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free 
025062dd-d69e-40ac-9031-c080b51647ae' (cwd None)
Thread-331::DEBUG::2013-04-27 
22:45:31,934::lvm::287::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  
Volume group "025062dd-d69e-40ac-9031-c080b51647ae" not found\n'; <rc> = 5
Thread-331::WARNING::2013-04-27 
22:45:31,937::lvm::356::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] 
['  Volume group "025062dd-d69e-40ac-9031-c080b51647ae" not found']
Thread-331::DEBUG::2013-04-27 
22:45:31,938::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm 
reload operation' released the operation mutex
Thread-331::DEBUG::2013-04-27 
22:45:31,958::fileSD::98::Storage.StorageDomain::(__init__) Reading 
domain in path 
/rhev/data-center/mnt/10.0.0.154:_home_vm_data_data/025062dd-d69e-40ac-9031-c080b51647ae
Thread-331::DEBUG::2013-04-27 
22:45:31,960::persistentDict::175::Storage.PersistentDict::(__init__) 
Created a persistant dict with FileMetadataRW backend
Thread-336::DEBUG::2013-04-27 
22:45:38,237::task::588::TaskManager.Task::(_updateState) 
Task=`1a7e26c3-d715-4cac-bce5-d0ddd21ab373`::moving from state init -> 
state preparing
Thread-336::INFO::2013-04-27 
22:45:38,238::logUtils::37::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-336::INFO::2013-04-27 
22:45:38,238::logUtils::39::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {}
Thread-336::DEBUG::2013-04-27 
22:45:38,239::task::1174::TaskManager.Task::(prepare) 
Task=`1a7e26c3-d715-4cac-bce5-d0ddd21ab373`::finished: {}
Thread-336::DEBUG::2013-04-27 
22:45:38,240::task::588::TaskManager.Task::(_updateState) 
Task=`1a7e26c3-d715-4cac-bce5-d0ddd21ab373`::moving from state preparing 
-> state finished
Thread-336::DEBUG::2013-04-27 
22:45:38,240::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-336::DEBUG::2013-04-27 
22:45:38,241::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-336::DEBUG::2013-04-27 
22:45:38,241::task::980::TaskManager.Task::(_decref) 
Task=`1a7e26c3-d715-4cac-bce5-d0ddd21ab373`::ref 0 aborting False
Thread-342::DEBUG::2013-04-27 
22:45:48,395::task::588::TaskManager.Task::(_updateState) 
Task=`cf4c41c0-e453-4493-9a68-b4044d0f0124`::moving from state init -> 
state preparing
Thread-342::INFO::2013-04-27 
22:45:48,396::logUtils::37::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-342::INFO::2013-04-27 
22:45:48,396::logUtils::39::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {}
Thread-342::DEBUG::2013-04-27 
22:45:48,397::task::1174::TaskManager.Task::(prepare) 
Task=`cf4c41c0-e453-4493-9a68-b4044d0f0124`::finished: {}
Thread-342::DEBUG::2013-04-27 
22:45:48,397::task::588::TaskManager.Task::(_updateState) 
Task=`cf4c41c0-e453-4493-9a68-b4044d0f0124`::moving from state preparing 
-> state finished
Thread-342::DEBUG::2013-04-27 
22:45:48,398::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-342::DEBUG::2013-04-27 
22:45:48,399::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-342::DEBUG::2013-04-27 
22:45:48,399::task::980::TaskManager.Task::(_decref) 
Task=`cf4c41c0-e453-4493-9a68-b4044d0f0124`::ref 0 aborting False
Thread-348::DEBUG::2013-04-27 
22:45:58,557::task::588::TaskManager.Task::(_updateState) 
Task=`b536de2b-6e78-43a9-b2b1-3667b51828df`::moving from state init -> 
state preparing
Thread-348::INFO::2013-04-27 
22:45:58,558::logUtils::37::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-348::INFO::2013-04-27 
22:45:58,558::logUtils::39::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {}
Thread-348::DEBUG::2013-04-27 
22:45:58,559::task::1174::TaskManager.Task::(prepare) 
Task=`b536de2b-6e78-43a9-b2b1-3667b51828df`::finished: {}
Thread-348::DEBUG::2013-04-27 
22:45:58,560::task::588::TaskManager.Task::(_updateState) 
Task=`b536de2b-6e78-43a9-b2b1-3667b51828df`::moving from state preparing 
-> state finished
Thread-348::DEBUG::2013-04-27 
22:45:58,560::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-348::DEBUG::2013-04-27 
22:45:58,561::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-348::DEBUG::2013-04-27 
22:45:58,561::task::980::TaskManager.Task::(_decref) 
Task=`b536de2b-6e78-43a9-b2b1-3667b51828df`::ref 0 aborting False
Thread-354::DEBUG::2013-04-27 
22:46:08,718::task::588::TaskManager.Task::(_updateState) 
Task=`a8e24fe3-5490-480c-a497-ee3f291698b0`::moving from state init -> 
state preparing
Thread-354::INFO::2013-04-27 
22:46:08,719::logUtils::37::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-354::INFO::2013-04-27 
22:46:08,720::logUtils::39::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {}
Thread-354::DEBUG::2013-04-27 
22:46:08,720::task::1174::TaskManager.Task::(prepare) 
Task=`a8e24fe3-5490-480c-a497-ee3f291698b0`::finished: {}
Thread-354::DEBUG::2013-04-27 
22:46:08,721::task::588::TaskManager.Task::(_updateState) 
Task=`a8e24fe3-5490-480c-a497-ee3f291698b0`::moving from state preparing 
-> state finished
Thread-354::DEBUG::2013-04-27 
22:46:08,721::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-354::DEBUG::2013-04-27 
22:46:08,722::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-354::DEBUG::2013-04-27 
22:46:08,722::task::980::TaskManager.Task::(_decref) 
Task=`a8e24fe3-5490-480c-a497-ee3f291698b0`::ref 0 aborting False
Thread-360::DEBUG::2013-04-27 
22:46:18,879::task::588::TaskManager.Task::(_updateState) 
Task=`9febbab9-7bae-4df5-9d16-5e89397ae101`::moving from state init -> 
state preparing
Thread-360::INFO::2013-04-27 
22:46:18,880::logUtils::37::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-360::INFO::2013-04-27 
22:46:18,880::logUtils::39::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {}
Thread-360::DEBUG::2013-04-27 
22:46:18,881::task::1174::TaskManager.Task::(prepare) 
Task=`9febbab9-7bae-4df5-9d16-5e89397ae101`::finished: {}
Thread-360::DEBUG::2013-04-27 
22:46:18,882::task::588::TaskManager.Task::(_updateState) 
Task=`9febbab9-7bae-4df5-9d16-5e89397ae101`::moving from state preparing 
-> state finished
Thread-360::DEBUG::2013-04-27 
22:46:18,882::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-360::DEBUG::2013-04-27 
22:46:18,883::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-360::DEBUG::2013-04-27 
22:46:18,883::task::980::TaskManager.Task::(_decref) 
Task=`9febbab9-7bae-4df5-9d16-5e89397ae101`::ref 0 aborting False
Thread-366::DEBUG::2013-04-27 
22:46:29,039::task::588::TaskManager.Task::(_updateState) 
Task=`f0468880-10a0-429b-8fce-547289a14fd1`::moving from state init -> 
state preparing
Thread-366::INFO::2013-04-27 
22:46:29,040::logUtils::37::dispatcher::(wrapper) Run and protect: 
repoStats(options=None)
Thread-366::INFO::2013-04-27 
22:46:29,041::logUtils::39::dispatcher::(wrapper) Run and protect: 
repoStats, Return response: {}
Thread-366::DEBUG::2013-04-27 
22:46:29,041::task::1174::TaskManager.Task::(prepare) 
Task=`f0468880-10a0-429b-8fce-547289a14fd1`::finished: {}
Thread-366::DEBUG::2013-04-27 
22:46:29,042::task::588::TaskManager.Task::(_updateState) 
Task=`f0468880-10a0-429b-8fce-547289a14fd1`::moving from state preparing 
-> state finished
Thread-366::DEBUG::2013-04-27 
22:46:29,042::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-366::DEBUG::2013-04-27 
22:46:29,043::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-366::DEBUG::2013-04-27 
22:46:29,043::task::980::TaskManager.Task::(_decref) 
Task=`f0468880-10a0-429b-8fce-547289a14fd1`::ref 0 aborting False
Thread-331::ERROR::2013-04-27 
22:46:34,024::sdc::113::Storage.StorageDomainCache::(_findDomain) Error 
while looking for domain `025062dd-d69e-40ac-9031-c080b51647ae`
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/sdc.py", line 109, in _findDomain
     return mod.findDomain(sdUUID)
   File "/usr/share/vdsm/storage/nfsSD.py", line 149, in findDomain
     return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
   File "/usr/share/vdsm/storage/fileSD.py", line 104, in __init__
     sdUUID = metadata[sd.DMDK_SDUUID]
   File "/usr/share/vdsm/storage/persistentDict.py", line 75, in 
__getitem__
     return dec(self._dict[key])
   File "/usr/share/vdsm/storage/persistentDict.py", line 183, in 
__getitem__
     with self._accessWrapper():
   File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
     return self.gen.next()
   File "/usr/share/vdsm/storage/persistentDict.py", line 137, in 
_accessWrapper
     self.refresh()
   File "/usr/share/vdsm/storage/persistentDict.py", line 214, in refresh
     lines = self._metaRW.readlines()
   File "/usr/share/vdsm/storage/fileSD.py", line 71, in readlines
     return misc.stripNewLines(self._oop.directReadLines(self._metafile))
   File "/usr/share/vdsm/storage/processPool.py", line 53, in wrapper
     return self.runExternally(func, *args, **kwds)
   File "/usr/share/vdsm/storage/processPool.py", line 64, in runExternally
     return self._procPool.runExternally(*args, **kwargs)
   File "/usr/share/vdsm/storage/processPool.py", line 145, in 
runExternally
     raise Timeout("Operation Stuck")
Timeout: Operation Stuck
Thread-331::DEBUG::2013-04-27 
22:46:34,026::resourceManager::535::ResourceManager::(releaseResource) 
Trying to release resource 'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84'
Thread-331::DEBUG::2013-04-27 
22:46:34,027::resourceManager::550::ResourceManager::(releaseResource) 
Released resource 'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84' (0 
active users)
Thread-331::DEBUG::2013-04-27 
22:46:34,028::resourceManager::555::ResourceManager::(releaseResource) 
Resource 'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84' is free, finding 
out if anyone is waiting for it.
Thread-331::DEBUG::2013-04-27 
22:46:34,028::resourceManager::562::ResourceManager::(releaseResource) 
No one is waiting for resource 
'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84', Clearing records.
Thread-331::ERROR::2013-04-27 
22:46:34,029::task::855::TaskManager.Task::(_setError) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::Unexpected error
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 863, in _run
     return fn(*args, **kargs)
   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
     res = f(*args, **kwargs)
   File "/usr/share/vdsm/storage/hsm.py", line 721, in connectStoragePool
     return self._connectStoragePool(spUUID, hostID, scsiKey, msdUUID, 
masterVersion, options)
   File "/usr/share/vdsm/storage/hsm.py", line 763, in _connectStoragePool
     res = pool.connect(hostID, scsiKey, msdUUID, masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 624, in connect
     self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 1097, in __rebuild
     self.masterDomain = self.getMasterDomain(msdUUID=msdUUID, 
masterVersion=masterVersion)
   File "/usr/share/vdsm/storage/sp.py", line 1435, in getMasterDomain
     raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain: 
'spUUID=2b633f9a-8840-11e1-9ed7-17e6c491db84, 
msdUUID=025062dd-d69e-40ac-9031-c080b51647ae'
Thread-331::DEBUG::2013-04-27 
22:46:34,030::task::874::TaskManager.Task::(_run) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::Task._run: 
4a7f87fa-cbbc-41de-a457-bfe6a87e00a0 
('2b633f9a-8840-11e1-9ed7-17e6c491db84', 1, 
'2b633f9a-8840-11e1-9ed7-17e6c491db84', 
'025062dd-d69e-40ac-9031-c080b51647ae', 1) {} failed - stopping task
Thread-331::DEBUG::2013-04-27 
22:46:34,030::task::1201::TaskManager.Task::(stop) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::stopping in state preparing 
(force False)
Thread-331::DEBUG::2013-04-27 
22:46:34,031::task::980::TaskManager.Task::(_decref) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::ref 1 aborting True
Thread-331::INFO::2013-04-27 
22:46:34,032::task::1159::TaskManager.Task::(prepare) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::aborting: Task is aborted: 
'Cannot find master domain' - code 304
Thread-331::DEBUG::2013-04-27 
22:46:34,032::task::1164::TaskManager.Task::(prepare) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::Prepare: aborted: Cannot 
find master domain
Thread-331::DEBUG::2013-04-27 
22:46:34,033::task::980::TaskManager.Task::(_decref) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::ref 0 aborting True
Thread-331::DEBUG::2013-04-27 
22:46:34,033::task::915::TaskManager.Task::(_doAbort) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::Task._doAbort: force False
Thread-331::DEBUG::2013-04-27 
22:46:34,034::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-331::DEBUG::2013-04-27 
22:46:34,034::task::588::TaskManager.Task::(_updateState) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::moving from state preparing 
-> state aborting
Thread-331::DEBUG::2013-04-27 
22:46:34,035::task::537::TaskManager.Task::(__state_aborting) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::_aborting: recover policy none
Thread-331::DEBUG::2013-04-27 
22:46:34,035::task::588::TaskManager.Task::(_updateState) 
Task=`4a7f87fa-cbbc-41de-a457-bfe6a87e00a0`::moving from state aborting 
-> state failed
Thread-331::DEBUG::2013-04-27 
22:46:34,036::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-331::DEBUG::2013-04-27 
22:46:34,036::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-331::ERROR::2013-04-27 
22:46:34,037::dispatcher::90::Storage.Dispatcher.Protect::(run) 
{'status': {'message': "Cannot find master domain: 
'spUUID=2b633f9a-8840-11e1-9ed7-17e6c491db84, 
msdUUID=025062dd-d69e-40ac-9031-c080b51647ae'", 'code': 304}}
Thread-371::DEBUG::2013-04-27 
22:46:34,082::clientIF::261::Storage.Dispatcher.Protect::(wrapper) 
[10.0.0.154]
Thread-371::DEBUG::2013-04-27 
22:46:34,083::task::588::TaskManager.Task::(_updateState) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::moving from state init -> 
state preparing
Thread-371::INFO::2013-04-27 
22:46:34,084::logUtils::37::dispatcher::(wrapper) Run and protect: 
getSpmStatus(spUUID='2b633f9a-8840-11e1-9ed7-17e6c491db84', options=None)
Thread-371::ERROR::2013-04-27 
22:46:34,084::task::855::TaskManager.Task::(_setError) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::Unexpected error
Traceback (most recent call last):
   File "/usr/share/vdsm/storage/task.py", line 863, in _run
     return fn(*args, **kargs)
   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
     res = f(*args, **kwargs)
   File "/usr/share/vdsm/storage/hsm.py", line 438, in getSpmStatus
     pool = self.getPool(spUUID)
   File "/usr/share/vdsm/storage/hsm.py", line 186, in getPool
     raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: 
('2b633f9a-8840-11e1-9ed7-17e6c491db84',)
Thread-371::DEBUG::2013-04-27 
22:46:34,085::task::874::TaskManager.Task::(_run) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::Task._run: 
e8ba0700-30eb-4467-b1d0-fbbd7636441b 
('2b633f9a-8840-11e1-9ed7-17e6c491db84',) {} failed - stopping task
Thread-371::DEBUG::2013-04-27 
22:46:34,086::task::1201::TaskManager.Task::(stop) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::stopping in state preparing 
(force False)
Thread-371::DEBUG::2013-04-27 
22:46:34,086::task::980::TaskManager.Task::(_decref) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::ref 1 aborting True
Thread-371::INFO::2013-04-27 
22:46:34,087::task::1159::TaskManager.Task::(prepare) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::aborting: Task is aborted: 
'Unknown pool id, pool not connected' - code 309
Thread-371::DEBUG::2013-04-27 
22:46:34,087::task::1164::TaskManager.Task::(prepare) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::Prepare: aborted: Unknown 
pool id, pool not connected
Thread-371::DEBUG::2013-04-27 
22:46:34,088::task::980::TaskManager.Task::(_decref) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::ref 0 aborting True
Thread-371::DEBUG::2013-04-27 
22:46:34,088::task::915::TaskManager.Task::(_doAbort) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::Task._doAbort: force False
Thread-371::DEBUG::2013-04-27 
22:46:34,089::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-371::DEBUG::2013-04-27 
22:46:34,089::task::588::TaskManager.Task::(_updateState) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::moving from state preparing 
-> state aborting
Thread-371::DEBUG::2013-04-27 
22:46:34,090::task::537::TaskManager.Task::(__state_aborting) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::_aborting: recover policy none
Thread-371::DEBUG::2013-04-27 
22:46:34,090::task::588::TaskManager.Task::(_updateState) 
Task=`e8ba0700-30eb-4467-b1d0-fbbd7636441b`::moving from state aborting 
-> state failed
Thread-371::DEBUG::2013-04-27 
22:46:34,091::resourceManager::806::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-371::DEBUG::2013-04-27 
22:46:34,092::resourceManager::841::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-371::ERROR::2013-04-27 
22:46:34,092::dispatcher::90::Storage.Dispatcher.Protect::(run) 
{'status': {'message': "Unknown pool id, pool not connected: 
('2b633f9a-8840-11e1-9ed7-17e6c491db84',)", 'code': 309}}
Thread-372::DEBUG::2013-04-27 
22:46:34,116::clientIF::261::Storage.Dispatcher.Protect::(wrapper) 
[10.0.0.154]
Thread-372::DEBUG::2013-04-27 
22:46:34,117::task::588::TaskManager.Task::(_updateState) 
Task=`4a764a93-93ca-432b-ac1d-ca18c36f4b47`::moving from state init -> 
state preparing
Thread-372::INFO::2013-04-27 
22:46:34,118::logUtils::37::dispatcher::(wrapper) Run and protect: 
connectStoragePool(spUUID='2b633f9a-8840-11e1-9ed7-17e6c491db84', 
hostID=1, scsiKey='2b633f9a-8840-11e1-9ed7-17e6c491db84', 
msdUUID='025062dd-d69e-40ac-9031-c080b51647ae', masterVersion=1, 
options=None)
Thread-372::DEBUG::2013-04-27 
22:46:34,119::resourceManager::175::ResourceManager.Request::(__init__) 
ResName=`Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84`ReqID=`da6e89ed-9d5c-46d7-831a-53f401443358`::Request 
was made in '/usr/share/vdsm/storage/hsm.py' line '747' at 
'_connectStoragePool'
Thread-372::DEBUG::2013-04-27 
22:46:34,119::resourceManager::483::ResourceManager::(registerResource) 
Trying to register resource 
'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84' for lock type 'exclusive'
Thread-372::DEBUG::2013-04-27 
22:46:34,120::resourceManager::525::ResourceManager::(registerResource) 
Resource 'Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84' is free. Now 
locking as 'exclusive' (1 active user)
Thread-372::DEBUG::2013-04-27 
22:46:34,120::resourceManager::212::ResourceManager.Request::(grant) 
ResName=`Storage.2b633f9a-8840-11e1-9ed7-17e6c491db84`ReqID=`da6e89ed-9d5c-46d7-831a-53f401443358`::Granted 
request
Thread-372::INFO::2013-04-27 
22:46:34,121::sp::608::Storage.StoragePool::(connect) Connect host #1 to 
the storage pool 2b633f9a-8840-11e1-9ed7-17e6c491db84 with master 
domain: 025062dd-d69e-40ac-9031-c080b51647ae (ver = 1)
Thread-372::DEBUG::2013-04-27 
22:46:34,122::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:34,123::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:34,124::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:34,124::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:34,125::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:34,126::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:34,126::misc::1017::SamplingMethod::(__call__) Trying to enter 
sampling method (storage.sdc.refreshStorage)
Thread-372::DEBUG::2013-04-27 
22:46:34,127::misc::1019::SamplingMethod::(__call__) Got in to sampling 
method
Thread-372::DEBUG::2013-04-27 
22:46:34,127::misc::1017::SamplingMethod::(__call__) Trying to enter 
sampling method (storage.iscsi.rescan)
Thread-372::DEBUG::2013-04-27 
22:46:34,128::misc::1019::SamplingMethod::(__call__) Got in to sampling 
method
Thread-372::DEBUG::2013-04-27 
22:46:34,128::iscsi::389::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo 
-n /sbin/iscsiadm -m session -R' (cwd None)
Thread-372::DEBUG::2013-04-27 
22:46:34,151::iscsi::389::Storage.Misc.excCmd::(rescan) FAILED: <err> = 
'iscsiadm: No session found.\n'; <rc> = 21
Thread-372::DEBUG::2013-04-27 
22:46:34,152::misc::1027::SamplingMethod::(__call__) Returning last result
Thread-372::DEBUG::2013-04-27 
22:46:35,735::multipath::71::Storage.Misc.excCmd::(rescan) 
'/usr/bin/sudo -n /sbin/multipath' (cwd None)
Thread-372::DEBUG::2013-04-27 
22:46:35,764::multipath::71::Storage.Misc.excCmd::(rescan) SUCCESS: 
<err> = ''; <rc> = 0
Thread-372::DEBUG::2013-04-27 
22:46:35,765::lvm::460::OperationMutex::(_invalidateAllPvs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:35,766::lvm::462::OperationMutex::(_invalidateAllPvs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:35,766::lvm::472::OperationMutex::(_invalidateAllVgs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:35,767::lvm::474::OperationMutex::(_invalidateAllVgs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:35,768::lvm::493::OperationMutex::(_invalidateAllLvs) Operation 
'lvm invalidate operation' got the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:35,768::lvm::495::OperationMutex::(_invalidateAllLvs) Operation 
'lvm invalidate operation' released the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:35,769::misc::1027::SamplingMethod::(__call__) Returning last result
Thread-372::DEBUG::2013-04-27 
22:46:35,770::lvm::352::OperationMutex::(_reloadvgs) Operation 'lvm 
reload operation' got the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:35,772::lvm::287::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n 
/sbin/lvm vgs --config " devices { preferred_names = 
[\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 
disable_after_error_count=3 filter = [ 
\\"a%36782bcb063f75600162a25c119f14819%\\", \\"r%.*%\\" ] } global {  
locking_type=1  prioritise_write_locks=1 wait_for_locks=1 } backup {  
retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix 
--separator | -o 
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free 
025062dd-d69e-40ac-9031-c080b51647ae' (cwd None)
Thread-372::DEBUG::2013-04-27 
22:46:35,874::lvm::287::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  
Volume group "025062dd-d69e-40ac-9031-c080b51647ae" not found\n'; <rc> = 5
Thread-372::WARNING::2013-04-27 
22:46:35,877::lvm::356::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] 
['  Volume group "025062dd-d69e-40ac-9031-c080b51647ae" not found']
Thread-372::DEBUG::2013-04-27 
22:46:35,878::lvm::379::OperationMutex::(_reloadvgs) Operation 'lvm 
reload operation' released the operation mutex
Thread-372::DEBUG::2013-04-27 
22:46:35,898::fileSD::98::Storage.StorageDomain::(__init__) Reading 
domain in path 
/rhev/data-center/mnt/10.0.0.154:_home_vm_data_data/025062dd-d69e-40ac-9031-c080b51647ae
------------------------------------------------------------------

Now, one interesting aspect I've noticed is that when I reboot the 
server, the mounted NFS shares change. As I mentioned earlier, in this 
instance there were no NFS shares mounted when the machine booted and 
none were mounted until I tried to activate the data centre. I 
subsequently rebooted the server after making some changes to 
nfsmount.conf, and upon booting the ISO share was mounted. Intrigued, I 
rebooted again and both the ISO and Export shares were mounted, but not 
the data share.
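
One way to pin down exactly which shares are mounted at each boot, rather than relying on memory, is to record the mount table right after startup and compare the logs across reboots. A minimal sketch follows; the log path and the idea of invoking it from rc.local (or a cron @reboot entry) are my assumptions, not anything oVirt provides:

```shell
#!/bin/sh
# Hypothetical boot-time check: append the current NFS mount state to a
# log file so it can be compared across reboots.
# Assumption: run once at boot, e.g. from /etc/rc.local; path is illustrative.
LOGFILE="${1:-/tmp/nfs-mounts-at-boot.log}"
{
  echo "=== boot check: $(date) ==="
  # /proc/mounts lists all active mounts; keep only NFS/NFSv4 entries
  grep -E ' nfs4? ' /proc/mounts || echo "(no NFS shares mounted)"
} >> "$LOGFILE"
echo "Recorded NFS mount state to $LOGFILE"
```

Comparing consecutive entries in that log should show whether the set of mounted shares really does vary from boot to boot, which would point at a mount-ordering or timing problem rather than an oVirt one.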

Do you, or does anybody else, have any further insight?

Thanks for your help so far - looking into the NFS side of things is 
giving some interesting results, and I'm sure we're on the path to 
great success.

--Steven.
