[ovirt-users] Unable to shutdown VM's

Michal Skrivanek michal.skrivanek at redhat.com
Fri Jun 13 08:08:05 UTC 2014


On 10 Jun 2014, at 11:31, Jorick Astrego wrote:

> On 06/05/2014 01:36 PM, Jorick Astrego wrote: 
>>> Hi, 
>>> 
>>> After upgrading to 3.4.2 RC I want to reinstall the nodes, but there are 
>>> some benchmark VMs that I'm unable to shut down in order to put the nodes 
>>> into maintenance. How can I kill these VMs without having to cut the power 
>>> to the nodes manually? 
>>> 
>>> Kind regards, 
>>> Jorick Astrego 
>>> 
>>> Netbulae B.V. 
>>> 
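
When a guest ignores the ACPI shutdown request (as the events below suggest: shutdown initiated at 12:00, and the VMs fall back from PoweringDown to Up at 12:05), one manual fallback is to destroy the VM directly on the host it runs on. A minimal sketch, assuming root shell access to the node and the vdsm-cli package providing `vdsClient`; `VM_ID` is a placeholder for the id reported by the list command:

```shell
# List the VMs known to the local vdsm, including their ids
# (assumes vdsm-cli is installed on the node).
vdsClient -s 0 list table

# Forcibly destroy the stuck VM, bypassing the guest OS entirely.
# Replace VM_ID with the id shown by the list command above.
vdsClient -s 0 destroy VM_ID
```

Forcing power-off from the engine side (the "Power Off" action in the webadmin, or the equivalent REST call) is usually preferable when it is available, since the engine then stays aware of the state change instead of discovering it on the next poll.
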
>>> 
>>> 2014-Jun-05, 12:24  Failed to switch Host node1.test.now to Maintenance mode. 
>>> 2014-Jun-05, 12:24  Host node1.test.now cannot change into maintenance mode - not all Vms 
>>> have been migrated successfully. Consider manual intervention: 
>>> stopping/migrating Vms: Bench1 (User: admin). 
>>> 2014-Jun-05, 12:05  Shutdown of VM Bench2 failed. 
>>> 2014-Jun-05, 12:05  Shutdown of VM Bench4 failed. 
>>> 2014-Jun-05, 12:05  Shutdown of VM Bench3 failed. 
>>> 2014-Jun-05, 12:05  Shutdown of VM Bench1 failed. 
>>> 2014-Jun-05, 12:00  VM shutdown initiated by admin on VM Bench2 (Host: node2.test.now). 
>>> 2014-Jun-05, 12:00  VM shutdown initiated by admin on VM Bench4 (Host: node4.test.now). 
>>> 2014-Jun-05, 12:00  VM shutdown initiated by admin on VM Bench3 (Host: node3.test.now). 
>>> 2014-Jun-05, 12:00  VM shutdown initiated by admin on VM Bench1 (Host: node1.test.now). 
>>> 
>>> 
>>> 
>>> 
>>> 2014-06-05 12:00:33,127 INFO 
>>> [org.ovirt.engine.core.bll.MultipleActionsRunner] 
>>> (org.ovirt.thread.pool-6-thread-8) [7c4b08f7] Start running CanDoAction 
>>> for command number 1/3 (Command type: ShutdownVm) 
>>> 2014-06-05 12:00:33,130 INFO 
>>> [org.ovirt.engine.core.bll.MultipleActionsRunner] 
>>> (org.ovirt.thread.pool-6-thread-14) [2239cfe1] Start running CanDoAction 
>>> for command number 2/3 (Command type: ShutdownVm) 
>>> 2014-06-05 12:00:33,134 INFO 
>>> [org.ovirt.engine.core.bll.MultipleActionsRunner] 
>>> (org.ovirt.thread.pool-6-thread-50) [17ece55] Start running CanDoAction 
>>> for command number 3/3 (Command type: ShutdownVm) 
>>> 2014-06-05 12:00:33,225 INFO 
>>> [org.ovirt.engine.core.bll.MultipleActionsRunner] 
>>> (org.ovirt.thread.pool-6-thread-50) [17ece55] Finish handling 
>>> CanDoAction for command number 3/3 (Command type: ShutdownVm) 
>>> 2014-06-05 12:00:33,229 INFO 
>>> [org.ovirt.engine.core.bll.MultipleActionsRunner] 
>>> (org.ovirt.thread.pool-6-thread-8) [7c4b08f7] Finish handling 
>>> CanDoAction for command number 1/3 (Command type: ShutdownVm) 
>>> 2014-06-05 12:00:33,234 INFO 
>>> [org.ovirt.engine.core.bll.MultipleActionsRunner] 
>>> (org.ovirt.thread.pool-6-thread-14) [2239cfe1] Finish handling 
>>> CanDoAction for command number 2/3 (Command type: ShutdownVm) 
>>> 2014-06-05 12:00:33,710 INFO 
>>> [org.ovirt.engine.core.bll.ShutdownVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [7c4b08f7] Running command: 
>>> ShutdownVmCommand internal: false. Entities affected :  ID: 
>>> 6709eaa1-163f-4dc5-9101-e46870438f38 Type: VM 
>>> 2014-06-05 12:00:33,728 INFO 
>>> [org.ovirt.engine.core.bll.ShutdownVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [7c4b08f7] Entered (VM Bench3). 
>>> 2014-06-05 12:00:33,728 INFO 
>>> [org.ovirt.engine.core.bll.ShutdownVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [7c4b08f7] Sending shutdown command 
>>> for VM Bench3. 
>>> 2014-06-05 12:00:33,764 INFO 
>>> [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [7c4b08f7] START, 
>>> DestroyVmVDSCommand(HostName = node3.test.now, HostId = 
>>> 7415506c-cda7-4018-804d-5f6d3beddbfb, 
>>> vmId=6709eaa1-163f-4dc5-9101-e46870438f38, force=false, 
>>> secondsToWait=30, gracefully=true), log id: 5eddc358 
>>> 2014-06-05 12:00:33,790 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [7c4b08f7] START, 
>>> DestroyVDSCommand(HostName = node3.test.now, HostId = 
>>> 7415506c-cda7-4018-804d-5f6d3beddbfb, 
>>> vmId=6709eaa1-163f-4dc5-9101-e46870438f38, force=false, 
>>> secondsToWait=30, gracefully=true), log id: 19f6f7c8 
>>> 2014-06-05 12:00:33,838 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [7c4b08f7] FINISH, DestroyVDSCommand, 
>>> log id: 19f6f7c8 
>>> 2014-06-05 12:00:33,843 INFO 
>>> [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [7c4b08f7] FINISH, 
>>> DestroyVmVDSCommand, return: PoweringDown, log id: 5eddc358 
>>> 2014-06-05 12:00:33,855 INFO 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (org.ovirt.thread.pool-6-thread-3) [7c4b08f7] Correlation ID: 7c4b08f7, 
>>> Job ID: 406c481e-102b-4488-9286-c9b38197ef36, Call Stack: null, Custom 
>>> Event ID: -1, Message: VM shutdown initiated by admin on VM Bench3 
>>> (Host: node3.test.now). 
>>> 2014-06-05 12:00:33,921 INFO 
>>> [org.ovirt.engine.core.bll.ShutdownVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [2239cfe1] Running command: 
>>> ShutdownVmCommand internal: false. Entities affected :  ID: 
>>> 8b305f06-2c82-4a13-99f2-5beab5a056ea Type: VM 
>>> 2014-06-05 12:00:33,967 INFO 
>>> [org.ovirt.engine.core.bll.ShutdownVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [2239cfe1] Entered (VM Bench4). 
>>> 2014-06-05 12:00:33,969 INFO 
>>> [org.ovirt.engine.core.bll.ShutdownVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [2239cfe1] Sending shutdown command 
>>> for VM Bench4. 
>>> 2014-06-05 12:00:34,001 INFO 
>>> [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [2239cfe1] START, 
>>> DestroyVmVDSCommand(HostName = node4.test.now, HostId = 
>>> bb13752e-85cb-4945-822b-48ab2a7b1329, 
>>> vmId=8b305f06-2c82-4a13-99f2-5beab5a056ea, force=false, 
>>> secondsToWait=30, gracefully=true), log id: 23611c0e 
>>> 2014-06-05 12:00:34,041 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [2239cfe1] START, 
>>> DestroyVDSCommand(HostName = node4.test.now, HostId = 
>>> bb13752e-85cb-4945-822b-48ab2a7b1329, 
>>> vmId=8b305f06-2c82-4a13-99f2-5beab5a056ea, force=false, 
>>> secondsToWait=30, gracefully=true), log id: 4f1663eb 
>>> 2014-06-05 12:00:34,048 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [2239cfe1] FINISH, DestroyVDSCommand, 
>>> log id: 4f1663eb 
>>> 2014-06-05 12:00:34,257 INFO 
>>> [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [2239cfe1] FINISH, 
>>> DestroyVmVDSCommand, return: PoweringDown, log id: 23611c0e 
>>> 2014-06-05 12:00:34,455 INFO 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (org.ovirt.thread.pool-6-thread-3) [2239cfe1] Correlation ID: 2239cfe1, 
>>> Job ID: 00fb00a7-88b0-4fb1-b14a-fa099e3d6409, Call Stack: null, Custom 
>>> Event ID: -1, Message: VM shutdown initiated by admin on VM Bench4 
>>> (Host: node4.test.now). 
>>> 2014-06-05 12:00:34,677 INFO 
>>> [org.ovirt.engine.core.bll.ShutdownVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [17ece55] Running command: 
>>> ShutdownVmCommand internal: false. Entities affected :  ID: 
>>> 764b60fd-c255-479b-9082-3f4f04b95cb2 Type: VM 
>>> 2014-06-05 12:00:34,695 INFO 
>>> [org.ovirt.engine.core.bll.ShutdownVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [17ece55] Entered (VM Bench2). 
>>> 2014-06-05 12:00:34,696 INFO 
>>> [org.ovirt.engine.core.bll.ShutdownVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [17ece55] Sending shutdown command 
>>> for VM Bench2. 
>>> 2014-06-05 12:00:34,718 INFO 
>>> [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [17ece55] START, 
>>> DestroyVmVDSCommand(HostName = node2.test.now, HostId = 
>>> 28fdcb5d-7acd-410e-8b65-0b4f483cb575, 
>>> vmId=764b60fd-c255-479b-9082-3f4f04b95cb2, force=false, 
>>> secondsToWait=30, gracefully=true), log id: 215e8d55 
>>> 2014-06-05 12:00:34,740 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [17ece55] START, 
>>> DestroyVDSCommand(HostName = node2.test.now, HostId = 
>>> 28fdcb5d-7acd-410e-8b65-0b4f483cb575, 
>>> vmId=764b60fd-c255-479b-9082-3f4f04b95cb2, force=false, 
>>> secondsToWait=30, gracefully=true), log id: 6a5599de 
>>> 2014-06-05 12:00:34,755 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [17ece55] FINISH, DestroyVDSCommand, 
>>> log id: 6a5599de 
>>> 2014-06-05 12:00:34,777 INFO 
>>> [org.ovirt.engine.core.vdsbroker.DestroyVmVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-3) [17ece55] FINISH, 
>>> DestroyVmVDSCommand, return: PoweringDown, log id: 215e8d55 
>>> 2014-06-05 12:00:34,833 INFO 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (org.ovirt.thread.pool-6-thread-3) [17ece55] Correlation ID: 17ece55, 
>>> Job ID: d92b0c9a-fb0f-4ca6-bc51-8d363aaf378c, Call Stack: null, Custom 
>>> Event ID: -1, Message: VM shutdown initiated by admin on VM Bench2 
>>> (Host: node2.test.now). 
>>> 2014-06-05 12:01:00,648 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-45) [22ac5fb6] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 7045fa3b 
>>> 2014-06-05 12:01:00,650 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-48) [4fbc37fa] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 306b54b0 
>>> 2014-06-05 12:01:00,657 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-25) [760d647e] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 7f7a769d 
>>> 2014-06-05 12:01:00,661 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-44) [6949bbc8] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 610dcd45 
>>> 2014-06-05 12:01:00,663 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-94) [21aa560a] ConnectDomainToStorage. 
>>> After Connect all hosts to pool. Time:6/5/14 12:01 PM 
>>> 2014-06-05 12:02:12,172 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-50) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:02:15,245 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-37) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:02:30,697 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-9) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:02:30,724 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-2) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:03:07,860 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-39) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:04:27,836 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-19) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:04:27,841 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-50) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:05:00,296 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-28) [10a45e4f] Autorecovering 1 storage 
>>> domains 
>>> 2014-06-05 12:05:00,296 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-28) [10a45e4f] Autorecovering storage 
>>> domains id: 9923a5a1-61e0-4edb-a04d-22c962190c53 
>>> 2014-06-05 12:05:00,299 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-28) [9bf7039] Running command: 
>>> ConnectDomainToStorageCommand internal: true. Entities affected : ID: 
>>> 9923a5a1-61e0-4edb-a04d-22c962190c53 Type: Storage 
>>> 2014-06-05 12:05:00,304 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-28) [9bf7039] ConnectDomainToStorage. 
>>> Before Connect all hosts to pool. Time:6/5/14 12:05 PM 
>>> 2014-06-05 12:05:00,441 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-35) [3164009c] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:05:00,446 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-29) [1a51a7f5] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:05:00,450 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-5) [19f33713] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:05:00,454 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-10) [3dd2802] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:05:00,577 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-35) [3164009c] START, 
>>> ConnectStorageServerVDSCommand(HostName = node4.test.now, HostId = 
>>> bb13752e-85cb-4945-822b-48ab2a7b1329, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 7ea7832a 
>>> 2014-06-05 12:05:00,581 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-29) [1a51a7f5] START, 
>>> ConnectStorageServerVDSCommand(HostName = node3.test.now, HostId = 
>>> 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 143f44d4 
>>> 2014-06-05 12:05:00,580 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-10) [3dd2802] START, 
>>> ConnectStorageServerVDSCommand(HostName = node1.test.now, HostId = 
>>> e17a740c-47ba-4a81-99ab-6c386b572f26, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 4058bc78 
>>> 2014-06-05 12:05:00,578 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-5) [19f33713] START, 
>>> ConnectStorageServerVDSCommand(HostName = node2.test.now, HostId = 
>>> 28fdcb5d-7acd-410e-8b65-0b4f483cb575, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 6eb8491e 
>>> 2014-06-05 12:05:23,071 INFO 
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
>>> (DefaultQuartzScheduler_Worker-10) [77f10397] VM Bench1 
>>> 63b1fd02-1fb5-44d1-b1cc-54cb5c0fdb0e moved from PoweringDown --> Up 
>>> 2014-06-05 12:05:23,153 INFO 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (DefaultQuartzScheduler_Worker-10) [77f10397] Correlation ID: null, Call 
>>> Stack: null, Custom Event ID: -1, Message: Shutdown of VM Bench1 failed. 
>>> 2014-06-05 12:05:34,341 INFO 
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
>>> (DefaultQuartzScheduler_Worker-65) [604c5b5d] VM Bench3 
>>> 6709eaa1-163f-4dc5-9101-e46870438f38 moved from PoweringDown --> Up 
>>> 2014-06-05 12:05:34,411 INFO 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (DefaultQuartzScheduler_Worker-65) [604c5b5d] Correlation ID: null, Call 
>>> Stack: null, Custom Event ID: -1, Message: Shutdown of VM Bench3 failed. 
>>> 2014-06-05 12:05:35,312 INFO 
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
>>> (DefaultQuartzScheduler_Worker-38) [3bbcf288] VM Bench4 
>>> 8b305f06-2c82-4a13-99f2-5beab5a056ea moved from PoweringDown --> Up 
>>> 2014-06-05 12:05:35,439 INFO 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (DefaultQuartzScheduler_Worker-38) [3bbcf288] Correlation ID: null, Call 
>>> Stack: null, Custom Event ID: -1, Message: Shutdown of VM Bench4 failed. 
>>> 2014-06-05 12:05:35,663 INFO 
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] 
>>> (DefaultQuartzScheduler_Worker-64) [40fb7b62] VM Bench2 
>>> 764b60fd-c255-479b-9082-3f4f04b95cb2 moved from PoweringDown --> Up 
>>> 2014-06-05 12:05:36,101 INFO 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (DefaultQuartzScheduler_Worker-64) [40fb7b62] Correlation ID: null, Call 
>>> Stack: null, Custom Event ID: -1, Message: Shutdown of VM Bench2 failed. 
>>> 2014-06-05 12:06:00,627 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-10) [3dd2802] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 4058bc78 
>>> 2014-06-05 12:06:00,629 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-29) [1a51a7f5] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 143f44d4 
>>> 2014-06-05 12:06:00,632 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-35) [3164009c] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 7ea7832a 
>>> 2014-06-05 12:06:00,645 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-5) [19f33713] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 6eb8491e 
>>> 2014-06-05 12:06:00,647 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-28) [9bf7039] ConnectDomainToStorage. 
>>> After Connect all hosts to pool. Time:6/5/14 12:06 PM 
>>> 2014-06-05 12:07:17,243 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-11) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:08:16,532 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-1) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:08:25,687 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-46) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:10:00,442 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-77) [7cace458] Autorecovering 1 storage 
>>> domains 
>>> 2014-06-05 12:10:00,442 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-77) [7cace458] Autorecovering storage 
>>> domains id: 9923a5a1-61e0-4edb-a04d-22c962190c53 
>>> 2014-06-05 12:10:00,443 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-77) [77f9bcd3] Running command: 
>>> ConnectDomainToStorageCommand internal: true. Entities affected : ID: 
>>> 9923a5a1-61e0-4edb-a04d-22c962190c53 Type: Storage 
>>> 2014-06-05 12:10:00,444 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-77) [77f9bcd3] ConnectDomainToStorage. 
>>> Before Connect all hosts to pool. Time:6/5/14 12:10 PM 
>>> 2014-06-05 12:10:00,597 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-37) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 3 tasks in the 
>>> queue. 
>>> 2014-06-05 12:10:00,598 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-10) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 2 tasks in the 
>>> queue. 
>>> 2014-06-05 12:10:00,599 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-35) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:10:00,600 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-5) [7ecb4630] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:10:00,602 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-35) [25233142] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:10:00,606 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-37) [175926fc] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:10:00,607 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-10) [8d05e48] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:10:00,745 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-35) [25233142] START, 
>>> ConnectStorageServerVDSCommand(HostName = node2.test.now, HostId = 
>>> 28fdcb5d-7acd-410e-8b65-0b4f483cb575, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 16b62d67 
>>> 2014-06-05 12:10:00,748 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-5) [7ecb4630] START, 
>>> ConnectStorageServerVDSCommand(HostName = node1.test.now, HostId = 
>>> e17a740c-47ba-4a81-99ab-6c386b572f26, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 2bc2c88 
>>> 2014-06-05 12:10:00,748 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-10) [8d05e48] START, 
>>> ConnectStorageServerVDSCommand(HostName = node3.test.now, HostId = 
>>> 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 4cb7e97c 
>>> 2014-06-05 12:10:00,747 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-37) [175926fc] START, 
>>> ConnectStorageServerVDSCommand(HostName = node4.test.now, HostId = 
>>> bb13752e-85cb-4945-822b-48ab2a7b1329, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 442ca304 
>>> 2014-06-05 12:10:51,626 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-38) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:10:54,669 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-14) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:11:00,801 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-10) [8d05e48] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 4cb7e97c 
>>> 2014-06-05 12:11:00,811 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-37) [175926fc] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 442ca304 
>>> 2014-06-05 12:11:00,814 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-5) [7ecb4630] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 2bc2c88 
>>> 2014-06-05 12:11:00,852 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-35) [25233142] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 16b62d67 
>>> 2014-06-05 12:11:00,854 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-77) [77f9bcd3] ConnectDomainToStorage. 
>>> After Connect all hosts to pool. Time:6/5/14 12:11 PM 
>>> 2014-06-05 12:11:40,654 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-25) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:12:14,657 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-31) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:12:27,462 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-37) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:13:59,781 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-22) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:14:15,341 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-10) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:14:21,429 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-35) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:14:21,455 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-4) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:14:40,159 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-17) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:14:52,383 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-15) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:14:52,412 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-27) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:14:55,658 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-42) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:15:00,031 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-65) [604c5b5d] Autorecovering 1 storage 
>>> domains 
>>> 2014-06-05 12:15:00,031 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-65) [604c5b5d] Autorecovering storage 
>>> domains id: 9923a5a1-61e0-4edb-a04d-22c962190c53 
>>> 2014-06-05 12:15:00,032 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-65) [7d4ef42] Running command: 
>>> ConnectDomainToStorageCommand internal: true. Entities affected : ID: 
>>> 9923a5a1-61e0-4edb-a04d-22c962190c53 Type: Storage 
>>> 2014-06-05 12:15:00,035 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-65) [7d4ef42] ConnectDomainToStorage. 
>>> Before Connect all hosts to pool. Time:6/5/14 12:15 PM 
>>> 2014-06-05 12:15:00,109 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-45) [6c18413] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:15:00,114 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-5) [d3ad154] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:15:00,118 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-1) [32d6d63f] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:15:00,122 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-24) [718f19c7] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:15:00,247 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-24) [718f19c7] START, 
>>> ConnectStorageServerVDSCommand(HostName = node1.test.now, HostId = 
>>> e17a740c-47ba-4a81-99ab-6c386b572f26, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 4bba2a9a 
>>> 2014-06-05 12:15:00,255 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-5) [d3ad154] START, 
>>> ConnectStorageServerVDSCommand(HostName = node3.test.now, HostId = 
>>> 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 4c27ad04 
>>> 2014-06-05 12:15:00,249 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-1) [32d6d63f] START, 
>>> ConnectStorageServerVDSCommand(HostName = node2.test.now, HostId = 
>>> 28fdcb5d-7acd-410e-8b65-0b4f483cb575, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 1d382d00 
>>> 2014-06-05 12:15:00,252 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-45) [6c18413] START, 
>>> ConnectStorageServerVDSCommand(HostName = node4.test.now, HostId = 
>>> bb13752e-85cb-4945-822b-48ab2a7b1329, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 7daf7955 
>>> 2014-06-05 12:16:00,301 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-5) [d3ad154] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 4c27ad04 
>>> 2014-06-05 12:16:00,302 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-24) [718f19c7] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 4bba2a9a 
>>> 2014-06-05 12:16:00,305 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-45) [6c18413] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 7daf7955 
>>> 2014-06-05 12:16:00,313 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-1) [32d6d63f] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 1d382d00 
>>> 2014-06-05 12:16:00,424 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-65) [7d4ef42] ConnectDomainToStorage. 
>>> After Connect all hosts to pool. Time:6/5/14 12:16 PM 
>>> 2014-06-05 12:18:27,725 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-30) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:18:43,043 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-48) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:18:43,064 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-9) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:18:55,383 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-40) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:18:58,450 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-31) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:19:02,314 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-6) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:20:00,088 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-58) [2e5174b2] Autorecovering 1 storage 
>>> domains 
>>> 2014-06-05 12:20:00,088 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-58) [2e5174b2] Autorecovering storage 
>>> domains id: 9923a5a1-61e0-4edb-a04d-22c962190c53 
>>> 2014-06-05 12:20:00,089 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-58) [106f43a5] Running command: 
>>> ConnectDomainToStorageCommand internal: true. Entities affected : ID: 
>>> 9923a5a1-61e0-4edb-a04d-22c962190c53 Type: Storage 
>>> 2014-06-05 12:20:00,090 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-58) [106f43a5] ConnectDomainToStorage. 
>>> Before Connect all hosts to pool. Time:6/5/14 12:20 PM 
>>> 2014-06-05 12:20:00,175 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-23) [4233af79] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:20:00,180 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-11) [4aa18a96] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:20:00,184 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-40) [48fedac6] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:20:00,188 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-26) [4b39df17] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:20:00,317 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-40) [48fedac6] START, 
>>> ConnectStorageServerVDSCommand(HostName = node2.test.now, HostId = 
>>> 28fdcb5d-7acd-410e-8b65-0b4f483cb575, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 2eb75130 
>>> 2014-06-05 12:20:00,321 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-26) [4b39df17] START, 
>>> ConnectStorageServerVDSCommand(HostName = node1.test.now, HostId = 
>>> e17a740c-47ba-4a81-99ab-6c386b572f26, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 957b133 
>>> 2014-06-05 12:20:00,313 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-11) [4aa18a96] START, 
>>> ConnectStorageServerVDSCommand(HostName = node3.test.now, HostId = 
>>> 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 28e7a51e 
>>> 2014-06-05 12:20:00,320 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-23) [4233af79] START, 
>>> ConnectStorageServerVDSCommand(HostName = node4.test.now, HostId = 
>>> bb13752e-85cb-4945-822b-48ab2a7b1329, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 1e3eb653 
>>> 2014-06-05 12:21:00,371 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-40) [48fedac6] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 2eb75130 
>>> 2014-06-05 12:21:00,375 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-11) [4aa18a96] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 28e7a51e 
>>> 2014-06-05 12:21:00,376 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-26) [4b39df17] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 957b133 
>>> 2014-06-05 12:21:00,400 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-23) [4233af79] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 1e3eb653 
>>> 2014-06-05 12:21:00,403 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-58) [106f43a5] ConnectDomainToStorage. 
>>> After Connect all hosts to pool. Time:6/5/14 12:21 PM 
>>> 2014-06-05 12:23:10,334 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-23) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:23:34,686 INFO [org.ovirt.engine.core.bll.OvfDataUpdater] 
>>> (DefaultQuartzScheduler_Worker-7) [74449d64] Attempting to update 
>>> VMs/Templates Ovf. 
>>> 2014-06-05 12:23:34,687 INFO [org.ovirt.engine.core.bll.OvfDataUpdater] 
>>> (DefaultQuartzScheduler_Worker-7) [74449d64] Attempting to update VM 
>>> OVFs in Data Center Default 
>>> 2014-06-05 12:23:34,692 INFO [org.ovirt.engine.core.bll.OvfDataUpdater] 
>>> (DefaultQuartzScheduler_Worker-7) [74449d64] Successfully updated VM 
>>> OVFs in Data Center Default 
>>> 2014-06-05 12:23:34,693 INFO [org.ovirt.engine.core.bll.OvfDataUpdater] 
>>> (DefaultQuartzScheduler_Worker-7) [74449d64] Attempting to update 
>>> template OVFs in Data Center Default 
>>> 2014-06-05 12:23:34,695 INFO [org.ovirt.engine.core.bll.OvfDataUpdater] 
>>> (DefaultQuartzScheduler_Worker-7) [74449d64] Successfully updated 
>>> templates OVFs in Data Center Default 
>>> 2014-06-05 12:23:34,695 INFO [org.ovirt.engine.core.bll.OvfDataUpdater] 
>>> (DefaultQuartzScheduler_Worker-7) [74449d64] Attempting to remove 
>>> unneeded template/vm OVFs in Data Center Default 
>>> 2014-06-05 12:23:34,697 INFO [org.ovirt.engine.core.bll.OvfDataUpdater] 
>>> (DefaultQuartzScheduler_Worker-7) [74449d64] Successfully removed 
>>> unneeded template/vm OVFs in Data Center Default 
>>> 2014-06-05 12:23:38,356 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-29) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:23:44,441 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-16) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:24:00,113 INFO 
>>> [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] 
>>> (org.ovirt.thread.pool-6-thread-44) [43b031c4] Running command: 
>>> MaintenanceNumberOfVdssCommand internal: false. Entities affected : ID: 
>>> e17a740c-47ba-4a81-99ab-6c386b572f26 Type: VDS 
>>> 2014-06-05 12:24:00,146 INFO 
>>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-44) [43b031c4] START, 
>>> SetVdsStatusVDSCommand(HostName = node1.test.now, HostId = 
>>> e17a740c-47ba-4a81-99ab-6c386b572f26, status=PreparingForMaintenance, 
>>> nonOperationalReason=NONE, stopSpmFailureLogged=true), log id: 1d0d3a10 
>>> 2014-06-05 12:24:00,171 INFO 
>>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-44) [43b031c4] FINISH, 
>>> SetVdsStatusVDSCommand, log id: 1d0d3a10 
>>> 2014-06-05 12:24:00,460 INFO 
>>> [org.ovirt.engine.core.bll.MaintenanceVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-44) [43b031c4] Running command: 
>>> MaintenanceVdsCommand internal: true. Entities affected :  ID: 
>>> e17a740c-47ba-4a81-99ab-6c386b572f26 Type: VDS 
>>> 2014-06-05 12:24:00,539 INFO 
>>> [org.ovirt.engine.core.bll.InternalMigrateVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-44) [155d3855] Lock Acquired to object 
>>> EngineLock [exclusiveLocks= key: 63b1fd02-1fb5-44d1-b1cc-54cb5c0fdb0e 
>>> value: VM 
>>> , sharedLocks= ] 
>>> 2014-06-05 12:24:00,622 INFO 
>>> [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] 
>>> (DefaultQuartzScheduler_Worker-69) [2b68c51d] Running command: 
>>> MaintenanceNumberOfVdssCommand internal: true. Entities affected : ID: 
>>> e17a740c-47ba-4a81-99ab-6c386b572f26 Type: VDS 
>>> 2014-06-05 12:24:00,779 INFO 
>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
>>> (org.ovirt.thread.pool-6-thread-44) [155d3855] Candidate host 
>>> node2.test.now (28fdcb5d-7acd-410e-8b65-0b4f483cb575) was filtered out 
>>> by VAR__FILTERTYPE__INTERNAL filter Memory 
>>> 2014-06-05 12:24:00,808 INFO 
>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
>>> (org.ovirt.thread.pool-6-thread-44) [155d3855] Candidate host 
>>> node3.test.now (7415506c-cda7-4018-804d-5f6d3beddbfb) was filtered out 
>>> by VAR__FILTERTYPE__INTERNAL filter Memory 
>>> 2014-06-05 12:24:00,809 INFO 
>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
>>> (org.ovirt.thread.pool-6-thread-44) [155d3855] Candidate host 
>>> node4.test.now (bb13752e-85cb-4945-822b-48ab2a7b1329) was filtered out 
>>> by VAR__FILTERTYPE__INTERNAL filter Memory 
>>> 2014-06-05 12:24:00,813 WARN 
>>> [org.ovirt.engine.core.bll.InternalMigrateVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-44) [155d3855] CanDoAction of action 
>>> InternalMigrateVm failed. 
>>> Reasons:VAR__ACTION__MIGRATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_MEMORY,ACTION_TYPE_FAILED_VDS_VM_MEMORY,ACTION_TYPE_FAILED_VDS_VM_MEMORY,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__INTERNAL,$hostName 
>>> node2.test.now,$filterName 
>>> Memory,SCHEDULING_HOST_FILTERED_REASON,VAR__FILTERTYPE__INTERNAL,$hostName 
>>> node4.test.now,$filterName 
>>> Memory,SCHEDULING_HOST_FILTERED_REASON,VAR__FILTERTYPE__INTERNAL,$hostName 
>>> node3.test.now,$filterName Memory,SCHEDULING_HOST_FILTERED_REASON 
>>> 2014-06-05 12:24:00,825 INFO 
>>> [org.ovirt.engine.core.bll.InternalMigrateVmCommand] 
>>> (org.ovirt.thread.pool-6-thread-44) [155d3855] Lock freed to object 
>>> EngineLock [exclusiveLocks= key: 63b1fd02-1fb5-44d1-b1cc-54cb5c0fdb0e 
>>> value: VM 
>>> , sharedLocks= ] 
>>> 2014-06-05 12:24:00,860 ERROR 
>>> [org.ovirt.engine.core.bll.MaintenanceVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-44) [155d3855] 
>>> ResourceManager::vdsMaintenance - Failed migrating desktop Bench1 
>>> 2014-06-05 12:24:00,883 INFO 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (org.ovirt.thread.pool-6-thread-44) [155d3855] Correlation ID: 43b031c4, 
>>> Job ID: 38a05481-0ced-4285-b3ee-6574ce39eb77, Call Stack: null, Custom 
>>> Event ID: -1, Message: Host node1.test.now cannot change into 
>>> maintenance mode - not all Vms have been migrated successfully. Consider 
>>> manual intervention: stopping/migrating Vms: Bench1 (User: admin). 
>>> 2014-06-05 12:24:00,894 INFO 
>>> [org.ovirt.engine.core.bll.MaintenanceVdsCommand] 
>>> (DefaultQuartzScheduler_Worker-69) [2b68c51d] Running command: 
>>> MaintenanceVdsCommand internal: true. Entities affected :  ID: 
>>> e17a740c-47ba-4a81-99ab-6c386b572f26 Type: VDS 
>>> 2014-06-05 12:24:01,005 INFO 
>>> [org.ovirt.engine.core.bll.InternalMigrateVmCommand] 
>>> (DefaultQuartzScheduler_Worker-69) [36b0c4c7] Lock Acquired to object 
>>> EngineLock [exclusiveLocks= key: 63b1fd02-1fb5-44d1-b1cc-54cb5c0fdb0e 
>>> value: VM 
>>> , sharedLocks= ] 
>>> 2014-06-05 12:24:01,147 INFO 
>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
>>> (DefaultQuartzScheduler_Worker-69) [36b0c4c7] Candidate host 
>>> node2.test.now (28fdcb5d-7acd-410e-8b65-0b4f483cb575) was filtered out 
>>> by VAR__FILTERTYPE__INTERNAL filter Memory 
>>> 2014-06-05 12:24:01,155 INFO 
>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
>>> (DefaultQuartzScheduler_Worker-69) [36b0c4c7] Candidate host 
>>> node3.test.now (7415506c-cda7-4018-804d-5f6d3beddbfb) was filtered out 
>>> by VAR__FILTERTYPE__INTERNAL filter Memory 
>>> 2014-06-05 12:24:01,166 INFO 
>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
>>> (DefaultQuartzScheduler_Worker-69) [36b0c4c7] Candidate host 
>>> node4.test.now (bb13752e-85cb-4945-822b-48ab2a7b1329) was filtered out 
>>> by VAR__FILTERTYPE__INTERNAL filter Memory 
>>> 2014-06-05 12:24:01,174 WARN 
>>> [org.ovirt.engine.core.bll.InternalMigrateVmCommand] 
>>> (DefaultQuartzScheduler_Worker-69) [36b0c4c7] CanDoAction of action 
>>> InternalMigrateVm failed. 
>>> Reasons:VAR__ACTION__MIGRATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_MEMORY,ACTION_TYPE_FAILED_VDS_VM_MEMORY,ACTION_TYPE_FAILED_VDS_VM_MEMORY,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__INTERNAL,$hostName 
>>> node2.test.now,$filterName 
>>> Memory,SCHEDULING_HOST_FILTERED_REASON,VAR__FILTERTYPE__INTERNAL,$hostName 
>>> node4.test.now,$filterName 
>>> Memory,SCHEDULING_HOST_FILTERED_REASON,VAR__FILTERTYPE__INTERNAL,$hostName 
>>> node3.test.now,$filterName Memory,SCHEDULING_HOST_FILTERED_REASON 
>>> 2014-06-05 12:24:01,187 INFO 
>>> [org.ovirt.engine.core.bll.InternalMigrateVmCommand] 
>>> (DefaultQuartzScheduler_Worker-69) [36b0c4c7] Lock freed to object 
>>> EngineLock [exclusiveLocks= key: 63b1fd02-1fb5-44d1-b1cc-54cb5c0fdb0e 
>>> value: VM 
>>> , sharedLocks= ] 
>>> 2014-06-05 12:24:01,228 ERROR 
>>> [org.ovirt.engine.core.bll.MaintenanceVdsCommand] 
>>> (DefaultQuartzScheduler_Worker-69) [36b0c4c7] 
>>> ResourceManager::vdsMaintenance - Failed migrating desktop Bench1 
>>> 2014-06-05 12:24:01,238 INFO 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (DefaultQuartzScheduler_Worker-69) [36b0c4c7] Correlation ID: 2b68c51d, 
>>> Job ID: 3a956cf5-ec2e-45b5-a2bc-40632496fec6, Call Stack: null, Custom 
>>> Event ID: -1, Message: Failed to switch Host node1.test.now to 
>>> Maintenance mode. 
>>> 2014-06-05 12:24:40,507 INFO [org.ovirt.engine.core.bll.StopVdsCommand] 
>>> (ajp--127.0.0.1-8702-4) [32d3da21] Lock Acquired to object EngineLock 
>>> [exclusiveLocks= key: e17a740c-47ba-4a81-99ab-6c386b572f26 value: VDS_FENCE 
>>> , sharedLocks= ] 
>>> 2014-06-05 12:24:40,798 INFO 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (ajp--127.0.0.1-8702-4) Correlation ID: null, Call Stack: null, Custom 
>>> Event ID: -1, Message: Host node3.test.now from cluster Default was 
>>> chosen as a proxy to execute Status command on Host node1.test.now. 
>>> 2014-06-05 12:24:40,799 INFO [org.ovirt.engine.core.bll.FenceExecutor] 
>>> (ajp--127.0.0.1-8702-4) Using Host node3.test.now from cluster Default 
>>> as proxy to execute Status command on Host node1.test.now 
>>> 2014-06-05 12:24:40,800 WARN [org.ovirt.engine.core.bll.StopVdsCommand] 
>>> (ajp--127.0.0.1-8702-4) CanDoAction of action StopVds failed. 
>>> Reasons:VAR__ACTION__STOP,VDS_STATUS_NOT_VALID_FOR_STOP 
>>> 2014-06-05 12:24:40,811 INFO [org.ovirt.engine.core.bll.StopVdsCommand] 
>>> (ajp--127.0.0.1-8702-4) Lock freed to object EngineLock [exclusiveLocks= 
>>> key: e17a740c-47ba-4a81-99ab-6c386b572f26 value: VDS_FENCE 
>>> , sharedLocks= ] 
>>> 2014-06-05 12:25:00,021 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-27) [70486be2] Autorecovering 1 storage 
>>> domains 
>>> 2014-06-05 12:25:00,021 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-27) [70486be2] Autorecovering storage 
>>> domains id: 9923a5a1-61e0-4edb-a04d-22c962190c53 
>>> 2014-06-05 12:25:00,022 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-27) [70d58101] Running command: 
>>> ConnectDomainToStorageCommand internal: true. Entities affected : ID: 
>>> 9923a5a1-61e0-4edb-a04d-22c962190c53 Type: Storage 
>>> 2014-06-05 12:25:00,023 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-27) [70d58101] ConnectDomainToStorage. 
>>> Before Connect all hosts to pool. Time:6/5/14 12:25 PM 
>>> 2014-06-05 12:25:00,088 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-6) [362a5fe3] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:25:00,093 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-47) [683e82b0] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:25:00,100 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-14) [6ed26267] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:25:00,196 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-6) [362a5fe3] START, 
>>> ConnectStorageServerVDSCommand(HostName = node4.test.now, HostId = 
>>> bb13752e-85cb-4945-822b-48ab2a7b1329, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 4591e85a 
>>> 2014-06-05 12:25:00,197 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-14) [6ed26267] START, 
>>> ConnectStorageServerVDSCommand(HostName = node2.test.now, HostId = 
>>> 28fdcb5d-7acd-410e-8b65-0b4f483cb575, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 17ef6900 
>>> 2014-06-05 12:25:00,197 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-47) [683e82b0] START, 
>>> ConnectStorageServerVDSCommand(HostName = node3.test.now, HostId = 
>>> 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: db1af1a 
>>> 2014-06-05 12:25:00,207 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-2) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:25:19,989 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-42) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:26:00,247 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-47) [683e82b0] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: db1af1a 
>>> 2014-06-05 12:26:00,256 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-6) [362a5fe3] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 4591e85a 
>>> 2014-06-05 12:26:00,263 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-14) [6ed26267] FINISH, 
>>> ConnectStorageServerVDSCommand, return: 
>>> {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 17ef6900 
>>> 2014-06-05 12:26:00,265 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-27) [70d58101] ConnectDomainToStorage. 
>>> After Connect all hosts to pool. Time:6/5/14 12:26 PM 
>>> 2014-06-05 12:28:15,399 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-17) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:28:18,443 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-8) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:28:55,121 WARN 
>>> [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] 
>>> (org.ovirt.thread.pool-6-thread-45) Executing a command: 
>>> java.util.concurrent.FutureTask , but note that there are 1 tasks in the 
>>> queue. 
>>> 2014-06-05 12:29:04,927 INFO 
>>> [org.ovirt.engine.core.bll.MaintenanceNumberOfVdssCommand] 
>>> (DefaultQuartzScheduler_Worker-51) [51b730a8] Running command: 
>>> MaintenanceNumberOfVdssCommand internal: true. Entities affected : ID: 
>>> e17a740c-47ba-4a81-99ab-6c386b572f26 Type: VDS 
>>> 2014-06-05 12:29:05,029 INFO 
>>> [org.ovirt.engine.core.bll.MaintenanceVdsCommand] 
>>> (DefaultQuartzScheduler_Worker-51) [51b730a8] Running command: 
>>> MaintenanceVdsCommand internal: true. Entities affected :  ID: 
>>> e17a740c-47ba-4a81-99ab-6c386b572f26 Type: VDS 
>>> 2014-06-05 12:29:05,073 INFO 
>>> [org.ovirt.engine.core.bll.InternalMigrateVmCommand] 
>>> (DefaultQuartzScheduler_Worker-51) [63736865] Lock Acquired to object 
>>> EngineLock [exclusiveLocks= key: 63b1fd02-1fb5-44d1-b1cc-54cb5c0fdb0e 
>>> value: VM 
>>> , sharedLocks= ] 
>>> 2014-06-05 12:29:05,164 INFO 
>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
>>> (DefaultQuartzScheduler_Worker-51) [63736865] Candidate host 
>>> node2.test.now (28fdcb5d-7acd-410e-8b65-0b4f483cb575) was filtered out 
>>> by VAR__FILTERTYPE__INTERNAL filter Memory 
>>> 2014-06-05 12:29:05,167 INFO 
>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
>>> (DefaultQuartzScheduler_Worker-51) [63736865] Candidate host 
>>> node3.test.now (7415506c-cda7-4018-804d-5f6d3beddbfb) was filtered out 
>>> by VAR__FILTERTYPE__INTERNAL filter Memory 
>>> 2014-06-05 12:29:05,170 INFO 
>>> [org.ovirt.engine.core.bll.scheduling.SchedulingManager] 
>>> (DefaultQuartzScheduler_Worker-51) [63736865] Candidate host 
>>> node4.test.now (bb13752e-85cb-4945-822b-48ab2a7b1329) was filtered out 
>>> by VAR__FILTERTYPE__INTERNAL filter Memory 
>>> 2014-06-05 12:29:05,173 WARN 
>>> [org.ovirt.engine.core.bll.InternalMigrateVmCommand] 
>>> (DefaultQuartzScheduler_Worker-51) [63736865] CanDoAction of action 
>>> InternalMigrateVm failed. 
>>> Reasons:VAR__ACTION__MIGRATE,VAR__TYPE__VM,ACTION_TYPE_FAILED_VDS_VM_MEMORY,ACTION_TYPE_FAILED_VDS_VM_MEMORY,ACTION_TYPE_FAILED_VDS_VM_MEMORY,SCHEDULING_ALL_HOSTS_FILTERED_OUT,VAR__FILTERTYPE__INTERNAL,$hostName 
>>> node2.test.now,$filterName 
>>> Memory,SCHEDULING_HOST_FILTERED_REASON,VAR__FILTERTYPE__INTERNAL,$hostName 
>>> node4.test.now,$filterName 
>>> Memory,SCHEDULING_HOST_FILTERED_REASON,VAR__FILTERTYPE__INTERNAL,$hostName 
>>> node3.test.now,$filterName Memory,SCHEDULING_HOST_FILTERED_REASON 
>>> 2014-06-05 12:29:05,181 INFO 
>>> [org.ovirt.engine.core.bll.InternalMigrateVmCommand] 
>>> (DefaultQuartzScheduler_Worker-51) [63736865] Lock freed to object 
>>> EngineLock [exclusiveLocks= key: 63b1fd02-1fb5-44d1-b1cc-54cb5c0fdb0e 
>>> value: VM 
>>> , sharedLocks= ] 
>>> 2014-06-05 12:29:05,207 ERROR 
>>> [org.ovirt.engine.core.bll.MaintenanceVdsCommand] 
>>> (DefaultQuartzScheduler_Worker-51) [63736865] 
>>> ResourceManager::vdsMaintenance - Failed migrating desktop Bench1 
>>> 2014-06-05 12:29:05,218 INFO 
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
>>> (DefaultQuartzScheduler_Worker-51) [63736865] Correlation ID: 51b730a8, 
>>> Job ID: b8441378-051c-4af0-8802-e2efb9e5d4a4, Call Stack: null, Custom 
>>> Event ID: -1, Message: Failed to switch Host node1.test.now to 
>>> Maintenance mode. 
>>> 2014-06-05 12:30:00,018 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-52) [2e3993e6] Autorecovering 1 storage 
>>> domains 
>>> 2014-06-05 12:30:00,018 INFO 
>>> [org.ovirt.engine.core.bll.AutoRecoveryManager] 
>>> (DefaultQuartzScheduler_Worker-52) [2e3993e6] Autorecovering storage 
>>> domains id: 9923a5a1-61e0-4edb-a04d-22c962190c53 
>>> 2014-06-05 12:30:00,021 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-52) [1585ee27] Running command: 
>>> ConnectDomainToStorageCommand internal: true. Entities affected : ID: 
>>> 9923a5a1-61e0-4edb-a04d-22c962190c53 Type: Storage 
>>> 2014-06-05 12:30:00,024 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] 
>>> (DefaultQuartzScheduler_Worker-52) [1585ee27] ConnectDomainToStorage. 
>>> Before Connect all hosts to pool. Time:6/5/14 12:30 PM 
>>> 2014-06-05 12:30:00,119 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-17) [6c0a8af9] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:30:00,121 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-26) [10056f73] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:30:00,134 INFO 
>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] 
>>> (org.ovirt.thread.pool-6-thread-24) [750ef0f4] Running command: 
>>> ConnectStorageToVdsCommand internal: true. Entities affected :  ID: 
>>> aaa00000-0000-0000-0000-123456789aaa Type: System 
>>> 2014-06-05 12:30:00,248 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-17) [6c0a8af9] START, 
>>> ConnectStorageServerVDSCommand(HostName = node4.test.now, HostId = 
>>> bb13752e-85cb-4945-822b-48ab2a7b1329, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 1c35901d 
>>> 2014-06-05 12:30:00,250 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-24) [750ef0f4] START, 
>>> ConnectStorageServerVDSCommand(HostName = node2.test.now, HostId = 
>>> 28fdcb5d-7acd-410e-8b65-0b4f483cb575, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 1a4d78a1 
>>> 2014-06-05 12:30:00,250 INFO 
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
>>> (org.ovirt.thread.pool-6-thread-26) [10056f73] START, 
>>> ConnectStorageServerVDSCommand(HostName = node3.test.now, HostId = 
>>> 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 
>>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList 
>>> = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: 
>>> 10.100.100.105:/var/lib/exports/iso, iqn: null, vfsType: null, 
>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null 
>>> };]), log id: 60464b56 
>>> 
>>> 
>>> 
>>> _______________________________________________ 
>>> Users mailing list 
>>> Users at ovirt.org 
>>> http://lists.ovirt.org/mailman/listinfo/users 
>>> 
>> 
>> Please share the vdsm log from the host so we can see why the shutdown failed (and later, please open a bug on the engine not showing that reason...) 

I don't see any failure with the shutdown.
If the guest fails to shut down for whatever reason, we revert the VM back to Up after 5 minutes of trying. Typically it's a guest OS misconfiguration.
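
For what it's worth, the engine-side behaviour is roughly this pattern (a minimal sketch, not the actual engine code; the callback names are made up for illustration):

```python
import time

def shutdown_with_timeout(send_guest_shutdown, vm_is_down,
                          timeout=300, poll=5):
    """Ask the guest to shut down, then poll its state.

    If the guest is still running once `timeout` seconds have
    passed, give up and report failure -- the engine then shows
    the VM as Up again and logs "Shutdown of VM ... failed".
    """
    send_guest_shutdown()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if vm_is_down():
            return True
        time.sleep(poll)
    return False
```

If the guest never reacts (no ACPI handler, no guest agent), this loop always times out, which matches the five-minute gap between the "shutdown initiated" and "Shutdown ... failed" events in your engine log; a power off from the admin portal bypasses the guest entirely.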

I don't quite follow the timeline of events about storage. Can you somehow isolate the problem?

Thanks,
michal


> 
> Hi,
> 
> The logs are filled with the same messages and errors over and over again, so I extracted the lines that appear relevant.
> 
> The problem appears to be that the iSCSI failover went wrong. I have to tweak the fencing a bit, because I caused a split-brain on the iSCSI cluster. 
> 
> I recovered the storage, but ovirt engine didn't notice that anything had gone wrong with the storage domain. The ISO domain also went offline, even though the NFS server is running on the ovirt-engine VM without any problems. 
> I cannot get it to come up again. When I tried to activate the ISO domain a couple of minutes ago, the iSCSI domain went offline instantly and only recovered after about 10 minutes.
> 
> So, strange things are happening...
> 
> /var/log/messages:
> 
> Jun 10 02:49:46 node1 vdsm scanDomains WARNING Metadata collection for domain path /rhev/data-center/mnt/xx.xxx.xxx.xxx:_var_lib_exports_iso timedout#012Traceback (most recent call last):#012  File "/usr/share/vdsm/storage/fileSD.py", line 662, in collectMetaFiles#012    sd.DOMAIN_META_DATA))#012  File       "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in callCrabRPCFunction#012    *args, **kwargs)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in callCrabRPCFunction#012    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 150, in _recvAll#012    raise Timeout()#012Timeout
> Jun 10 02:50:56 node1 vdsm scanDomains WARNING Metadata collection for domain path /rhev/data-center/mnt/xx.xxx.xxx.xxx:_var_lib_exports_iso timedout#012Traceback (most recent call last):#012  File "/usr/share/vdsm/storage/fileSD.py", line 662, in collectMetaFiles#012    sd.DOMAIN_META_DATA))#012  File       "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in callCrabRPCFunction#012    *args, **kwargs)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in callCrabRPCFunction#012    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 150, in _recvAll#012    raise Timeout()#012Timeout
> Jun 10 02:52:06 node1 vdsm scanDomains WARNING Metadata collection for domain path /rhev/data-center/mnt/xx.xxx.xxx.xxx:_var_lib_exports_iso timedout#012Traceback (most recent call last):#012  File "/usr/share/vdsm/storage/fileSD.py", line 662, in collectMetaFiles#012    sd.DOMAIN_META_DATA))#012  File       "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in callCrabRPCFunction#012    *args, **kwargs)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in callCrabRPCFunction#012    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 150, in _recvAll#012    raise Timeout()#012Timeout
> Jun 10 02:53:16 node1 vdsm scanDomains WARNING Metadata collection for domain path /rhev/data-center/mnt/xx.xxx.xxx.xxx:_var_lib_exports_iso timedout#012Traceback (most recent call last):#012  File "/usr/share/vdsm/storage/fileSD.py", line 662, in collectMetaFiles#012    sd.DOMAIN_META_DATA))#012  File       "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in callCrabRPCFunction#012    *args, **kwargs)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in callCrabRPCFunction#012    rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)#012  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 150, in _recvAll#012    raise Timeout()#012Timeout
> Jun 10 02:53:32 node1 dhclient[18102]: DHCPREQUEST on bond0 to 10.100.100.106 port 67 (xid=0x2844d170)
> Jun 10 02:53:32 node1 dhclient[18102]: DHCPACK from 10.100.100.106 (xid=0x2844d170)
> Jun 10 02:53:34 node1 dhclient[18102]: bound to 10.100.100.53 -- renewal in 269 seconds.
> 
> /var/log/vdsm/vdsm.log:
> 
> Thread-15::DEBUG::2014-06-10 02:50:52,084::blockSD::600::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct if=/dev/0751cdb1-0ee3-494c-88e2-c0e7df58aa52/metadata bs=4096 count=1' (cwd None)
> Thread-15::DEBUG::2014-06-10 02:50:52,094::blockSD::600::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB) copied, 0.000820898 s, 5.0 MB/s\n'; <rc> = 0
> Thread-5167::DEBUG::2014-06-10 02:50:52,191::task::595::TaskManager.Task::(_updateState) Task=`77271c2c-b1c7-4fed-a992-da7cd677c09a`::moving from state init -> state preparing
> Thread-5167::INFO::2014-06-10 02:50:52,192::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> Thread-5167::INFO::2014-06-10 02:50:52,192::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'9923a5a1-61e0-4edb-a04d-22c962190c53': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '65.2', 'valid': False}, '0751cdb1-0ee3-494c-88e2-c0e7df58aa52': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000820898', 'lastCheck': '0.1', 'valid': True}}
> Thread-5167::DEBUG::2014-06-10 02:50:52,192::task::1185::TaskManager.Task::(prepare) Task=`77271c2c-b1c7-4fed-a992-da7cd677c09a`::finished: {u'9923a5a1-61e0-4edb-a04d-22c962190c53': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '65.2', 'valid': False}, '0751cdb1-0ee3-494c-88e2-c0e7df58aa52': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000820898', 'lastCheck': '0.1', 'valid': True}}
> Thread-5167::DEBUG::2014-06-10 02:50:52,192::task::595::TaskManager.Task::(_updateState) Task=`77271c2c-b1c7-4fed-a992-da7cd677c09a`::moving from state preparing -> state finished
> Thread-5167::DEBUG::2014-06-10 02:50:52,193::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> Thread-5167::DEBUG::2014-06-10 02:50:52,193::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-5167::DEBUG::2014-06-10 02:50:52,193::task::990::TaskManager.Task::(_decref) Task=`77271c2c-b1c7-4fed-a992-da7cd677c09a`::ref 0 aborting False
> VM Channels Listener::DEBUG::2014-06-10 02:50:54,321::vmChannels::91::vds::(_handle_timeouts) Timeout on fileno 46.
> Thread-5167::DEBUG::2014-06-10 02:50:55,381::task::595::TaskManager.Task::(_updateState) Task=`458009da-0516-40d0-a02c-352377f4108f`::moving from state init -> state preparing
> Thread-5167::INFO::2014-06-10 02:50:55,381::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> Thread-5167::INFO::2014-06-10 02:50:55,381::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'9923a5a1-61e0-4edb-a04d-22c962190c53': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '68.4', 'valid': False}, '0751cdb1-0ee3-494c-88e2-c0e7df58aa52': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000820898', 'lastCheck': '3.3', 'valid': True}}
> Thread-5167::DEBUG::2014-06-10 02:50:55,381::task::1185::TaskManager.Task::(prepare) Task=`458009da-0516-40d0-a02c-352377f4108f`::finished: {u'9923a5a1-61e0-4edb-a04d-22c962190c53': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '68.4', 'valid': False}, '0751cdb1-0ee3-494c-88e2-c0e7df58aa52': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000820898', 'lastCheck': '3.3', 'valid': True}}
> Thread-5167::DEBUG::2014-06-10 02:50:55,382::task::595::TaskManager.Task::(_updateState) Task=`458009da-0516-40d0-a02c-352377f4108f`::moving from state preparing -> state finished
> Thread-5167::DEBUG::2014-06-10 02:50:55,382::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> Thread-5167::DEBUG::2014-06-10 02:50:55,382::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-5167::DEBUG::2014-06-10 02:50:55,382::task::990::TaskManager.Task::(_decref) Task=`458009da-0516-40d0-a02c-352377f4108f`::ref 0 aborting False
> Thread-12128::WARNING::2014-06-10 02:50:56,969::fileSD::673::scanDomains::(collectMetaFiles) Metadata collection for domain path /rhev/data-center/mnt/xx.xxx.xxx.xxx:_var_lib_exports_iso timedout
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/fileSD.py", line 662, in collectMetaFiles
>     sd.DOMAIN_META_DATA))
>   File "/usr/share/vdsm/storage/remoteFileHandler.py", line 297, in callCrabRPCFunction
>     *args, **kwargs)
>   File "/usr/share/vdsm/storage/remoteFileHandler.py", line 184, in callCrabRPCFunction
>     rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
>   File "/usr/share/vdsm/storage/remoteFileHandler.py", line 150, in _recvAll
>     raise Timeout()
> Timeout
> Thread-40::ERROR::2014-06-10 02:50:56,970::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 9923a5a1-61e0-4edb-a04d-22c962190c53 not found
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
>     dom = findMethod(sdUUID)
>   File "/usr/share/vdsm/storage/nfsSD.py", line 132, in findDomain
>     return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
>   File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomainPath
>     raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist: (u'9923a5a1-61e0-4edb-a04d-22c962190c53',)
> Thread-40::ERROR::2014-06-10 02:50:56,971::domainMonitor::239::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain 9923a5a1-61e0-4edb-a04d-22c962190c53 monitoring information
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/domainMonitor.py", line 215, in _monitorDomain
>     self.domain.selftest()
>   File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__
>     return getattr(self.getRealDomain(), attrName)
>   File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
>     return self._cache._realProduce(self._sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
>     domain = self._findDomain(sdUUID)
>   File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
>     dom = findMethod(sdUUID)
>   File "/usr/share/vdsm/storage/nfsSD.py", line 132, in findDomain
>     return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
>   File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomainPath
>     raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist: (u'9923a5a1-61e0-4edb-a04d-22c962190c53',)
> Thread-5167::DEBUG::2014-06-10 02:50:58,602::task::595::TaskManager.Task::(_updateState) Task=`6a746109-728c-4029-b4a2-7b47981f3dba`::moving from state       init -> state preparing
> Thread-5167::INFO::2014-06-10 02:50:58,602::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> Thread-5167::INFO::2014-06-10 02:50:58,603::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'9923a5a1-61e0-4edb-a04d-22c962190c53': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '1.6', 'valid': False}, '0751cdb1-0ee3-494c-88e2-c0e7df58aa52': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000820898', 'lastCheck': '6.5', 'valid': True}}
> Thread-5167::DEBUG::2014-06-10 02:50:58,603::task::1185::TaskManager.Task::(prepare) Task=`6a746109-728c-4029-b4a2-7b47981f3dba`::finished: {u'9923a5a1-61e0-4edb-a04d-22c962190c53': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '1.6', 'valid': False}, '0751cdb1-0ee3-494c-88e2-c0e7df58aa52': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000820898', 'lastCheck': '6.5', 'valid': True}}
> Thread-5167::DEBUG::2014-06-10 02:50:58,603::task::595::TaskManager.Task::(_updateState) Task=`6a746109-728c-4029-b4a2-7b47981f3dba`::moving from state       preparing -> state finished
> Thread-5167::DEBUG::2014-06-10 02:50:58,603::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> Thread-5167::DEBUG::2014-06-10 02:50:58,603::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-5167::DEBUG::2014-06-10 02:50:58,604::task::990::TaskManager.Task::(_decref) Task=`6a746109-728c-4029-b4a2-7b47981f3dba`::ref 0 aborting False
> Thread-5167::DEBUG::2014-06-10 02:51:01,969::task::595::TaskManager.Task::(_updateState) Task=`0f9bf6e2-ea86-4b4a-8fe2-d7634705429c`::moving from state init -> state preparing
> Thread-5167::INFO::2014-06-10 02:51:01,969::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> Thread-5167::INFO::2014-06-10 02:51:01,969::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {u'9923a5a1-61e0-4edb-a04d-22c962190c53': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '5.0', 'valid': False}, '0751cdb1-0ee3-494c-88e2-c0e7df58aa52': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000820898', 'lastCheck': '9.8', 'valid': True}}
> Thread-5167::DEBUG::2014-06-10 02:51:01,970::task::1185::TaskManager.Task::(prepare) Task=`0f9bf6e2-ea86-4b4a-8fe2-d7634705429c`::finished: {u'9923a5a1-61e0-4edb-a04d-22c962190c53': {'code': 358, 'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '5.0', 'valid': False}, '0751cdb1-0ee3-494c-88e2-c0e7df58aa52': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000820898', 'lastCheck': '9.8', 'valid': True}}
> Thread-5167::DEBUG::2014-06-10 02:51:01,970::task::595::TaskManager.Task::(_updateState) Task=`0f9bf6e2-ea86-4b4a-8fe2-d7634705429c`::moving from state preparing -> state finished
> Thread-5167::DEBUG::2014-06-10 02:51:01,970::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> Thread-5167::DEBUG::2014-06-10 02:51:01,970::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-5167::DEBUG::2014-06-10 02:51:01,970::task::990::TaskManager.Task::(_decref) Task=`0f9bf6e2-ea86-4b4a-8fe2-d7634705429c`::ref 0 aborting False
> 
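As an aside, the repoStats responses in the vdsm.log above already pinpoint the unhealthy domain (code 358, valid False for the ISO domain). A quick sketch for pulling the failing domains out of such a response (hypothetical helper, just built on the dict shape shown in the log):

```python
# repoStats response shape as it appears in the vdsm.log above
repo_stats = {
    '9923a5a1-61e0-4edb-a04d-22c962190c53': {
        'code': 358, 'version': -1, 'acquired': False,
        'delay': '0', 'lastCheck': '65.2', 'valid': False},
    '0751cdb1-0ee3-494c-88e2-c0e7df58aa52': {
        'code': 0, 'version': 3, 'acquired': True,
        'delay': '0.000820898', 'lastCheck': '0.1', 'valid': True},
}

def problem_domains(stats):
    # Return (domain uuid, error code) for each domain vdsm
    # reports as not valid.
    return [(sd, info['code'])
            for sd, info in stats.items() if not info['valid']]
```

Running this over the excerpts above flags only the ISO domain, which agrees with the StorageDomainDoesNotExist tracebacks.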
> 
> /var/log/sanlock.log:
> 
> 2014-05-19 09:49:00+0000 38 [10499]: sanlock daemon started 2.8 host d185aced-10ef-4d82-9b9f-74ccfa0c4f9c.localhost.
> 2014-05-19 09:54:54+0000 64 [13495]: sanlock daemon started 2.8 host cf089115-ac2c-4598-b6a2-e7db8c9368c5.node1.n
> 2014-05-19 09:56:49+0000 178 [14390]: sanlock daemon started 2.8 host 968a9da8-76ef-4440-aad7-d4cd48b9d1f9.node1.n
> 2014-05-19 10:02:36+0000 525 [14396]: s1 lockspace 0751cdb1-0ee3-494c-88e2-c0e7df58aa52:4:/dev/0751cdb1-0ee3-494c-88e2-c0e7df58aa52/ids:0
> 2014-05-22 11:53:26+0000 266375 [19131]: 0751cdb1 aio timeout 0 0x7f4fbc0008c0:0x7f4fbc0008d0:0x7f4fceff3000 ioto 10 to_count 1
> 2014-05-22 11:53:26+0000 266375 [19131]: s1 delta_renew read rv -202 offset 0 /dev/0751cdb1-0ee3-494c-88e2-c0e7df58aa52/ids
> 2014-05-22 11:53:26+0000 266375 [19131]: s1 renewal error -202 delta_length 10 last_success 266344
> 2014-05-22 11:53:37+0000 266386 [19131]: 0751cdb1 aio timeout 0 0x7f4fbc000910:0x7f4fbc000920:0x7f4fcedf0000 ioto 10 to_count 2
> 2014-05-22 11:53:37+0000 266386 [19131]: s1 delta_renew read rv -202 offset 0 /dev/0751cdb1-0ee3-494c-88e2-c0e7df58aa52/ids
> 2014-05-22 11:53:37+0000 266386 [19131]: s1 renewal error -202 delta_length 11 last_success 266344
> 2014-05-22 11:53:48+0000 266397 [19131]: 0751cdb1 aio timeout 0 0x7f4fbc000960:0x7f4fbc000970:0x7f4fcecee000 ioto 10 to_count 3
> 2014-05-22 11:53:48+0000 266397 [19131]: s1 delta_renew read rv -202 offset 0 /dev/0751cdb1-0ee3-494c-88e2-c0e7df58aa52/ids
> 2014-05-22 11:53:48+0000 266397 [19131]: s1 renewal error -202 delta_length 11 last_success 266344
> 2014-05-22 11:53:55+0000 266404 [14390]: s1 check_our_lease warning 60 last_success 266344
> 2014-05-22 11:53:56+0000 266405 [14390]: s1 check_our_lease warning 61 last_success 266344
> 2014-05-22 11:53:57+0000 266406 [14390]: s1 check_our_lease warning 62 last_success 266344
> 2014-05-22 11:53:58+0000 266407 [14390]: s1 check_our_lease warning 63 last_success 266344
> 2014-05-22 11:53:59+0000 266408 [14390]: s1 check_our_lease warning 64 last_success 266344
> 2014-05-22 11:53:59+0000 266408 [19131]: 0751cdb1 aio timeout 0 0x7f4fbc0009b0:0x7f4fbc0009c0:0x7f4fcebec000 ioto 10 to_count 4
> 2014-05-22 11:53:59+0000 266408 [19131]: s1 delta_renew read rv -202 offset 0 /dev/0751cdb1-0ee3-494c-88e2-c0e7df58aa52/ids
> 2014-05-22 11:53:59+0000 266408 [19131]: s1 renewal error -202 delta_length 11 last_success 266344
> 2014-05-22 11:54:00+0000 266409 [14390]: s1 check_our_lease warning 65 last_success 266344
> 2014-05-22 11:54:01+0000 266410 [14390]: s1 check_our_lease warning 66 last_success 266344
> 2014-05-22 11:54:02+0000 266411 [14390]: s1 check_our_lease warning 67 last_success 266344
> 2014-05-22 11:54:03+0000 266412 [14390]: s1 check_our_lease warning 68 last_success 266344
> 2014-05-22 11:54:04+0000 266413 [14390]: s1 check_our_lease warning 69 last_success 266344
> 2014-05-22 11:54:05+0000 266414 [14390]: s1 check_our_lease warning 70 last_success 266344
> 2014-05-22 11:54:06+0000 266415 [14390]: s1 check_our_lease warning 71 last_success 266344
> 2014-05-22 11:54:07+0000 266416 [14390]: s1 check_our_lease warning 72 last_success 266344
> 2014-05-22 11:54:08+0000 266417 [14390]: s1 check_our_lease warning 73 last_success 266344
> 2014-05-22 11:54:09+0000 266418 [14390]: s1 check_our_lease warning 74 last_success 266344
> 2014-05-22 11:54:10+0000 266419 [14390]: s1 check_our_lease warning 75 last_success 266344
> 2014-05-22 11:54:10+0000 266419 [19131]: s1 delta_renew read rv -2 offset 0 /dev/0751cdb1-0ee3-494c-88e2-c0e7df58aa52/ids
> 2014-05-22 11:54:10+0000 266419 [19131]: s1 renewal error -2 delta_length 10 last_success 266344
> 2014-05-22 11:54:11+0000 266420 [14390]: s1 check_our_lease warning 76 last_success 266344
> 2014-05-22 11:54:12+0000 266421 [14390]: s1 check_our_lease warning 77 last_success 266344
> 2014-05-22 11:54:13+0000 266422 [14390]: s1 check_our_lease warning 78 last_success 266344
> 2014-05-22 11:54:14+0000 266423 [14390]: s1 check_our_lease warning 79 last_success 266344
> 2014-05-22 11:54:15+0000 266424 [14390]: s1 check_our_lease failed 80
> 2014-05-22 11:54:15+0000 266424 [14390]: s1 all pids clear
> 2014-05-22 11:54:20+0000 266430 [19131]: s1 delta_renew read rv -2 offset 0 /dev/0751cdb1-0ee3-494c-88e2-c0e7df58aa52/ids
> 2014-05-22 11:54:20+0000 266430 [19131]: s1 renewal error -2 delta_length 10 last_success 266344
> 2014-05-22 11:54:30+0000 266440 [19131]: 0751cdb1 close_task_aio 0 0x7f4fbc0008c0 busy
> 2014-05-22 11:54:30+0000 266440 [19131]: 0751cdb1 close_task_aio 1 0x7f4fbc000910 busy
> 2014-05-22 11:54:30+0000 266440 [19131]: 0751cdb1 close_task_aio 2 0x7f4fbc000960 busy
> 2014-05-22 11:54:30+0000 266440 [19131]: 0751cdb1 close_task_aio 3 0x7f4fbc0009b0 busy
> 2014-05-22 11:54:40+0000 266450 [19131]: 0751cdb1 close_task_aio 0 0x7f4fbc0008c0 busy
> 2014-05-22 11:54:40+0000 266450 [19131]: 0751cdb1 close_task_aio 1 0x7f4fbc000910 busy
> 2014-05-22 11:54:40+0000 266450 [19131]: 0751cdb1 close_task_aio 2 0x7f4fbc000960 busy
> 2014-05-22 11:54:40+0000 266450 [19131]: 0751cdb1 close_task_aio 3 0x7f4fbc0009b0 busy
> 2014-05-22 11:54:50+0000 266460 [19131]: 0751cdb1 close_task_aio 0 0x7f4fbc0008c0 busy
> 2014-05-22 11:54:50+0000 266460 [19131]: 0751cdb1 close_task_aio 1 0x7f4fbc000910 busy
> 2014-05-22 11:54:50+0000 266460 [19131]: 0751cdb1 close_task_aio 2 0x7f4fbc000960 busy
> 2014-05-22 11:54:50+0000 266460 [19131]: 0751cdb1 close_task_aio 3 0x7f4fbc0009b0 busy
> 2014-05-22 11:55:00+0000 266470 [19131]: 0751cdb1 close_task_aio 0 0x7f4fbc0008c0 busy
> 2014-05-22 11:55:00+0000 266470 [19131]: 0751cdb1 close_task_aio 1 0x7f4fbc000910 busy
> 2014-05-22 11:55:00+0000 266470 [19131]: 0751cdb1 close_task_aio 2 0x7f4fbc000960 busy
> 2014-05-22 11:55:00+0000 266470 [19131]: 0751cdb1 close_task_aio 3 0x7f4fbc0009b0 busy
> 2014-05-22 11:55:10+0000 266480 [19131]: 0751cdb1 close_task_aio 0 0x7f4fbc0008c0 busy
> 2014-05-22 11:55:10+0000 266480 [19131]: 0751cdb1 close_task_aio 1 0x7f4fbc000910 busy
> 2014-05-22 11:55:10+0000 266480 [19131]: 0751cdb1 close_task_aio 2 0x7f4fbc000960 busy
> 2014-05-22 11:55:10+0000 266480 [19131]: 0751cdb1 close_task_aio 3 0x7f4fbc0009b0 busy
> 2014-05-22 11:55:16+0000 266485 [19131]: 0751cdb1 aio collect 0x7f4fbc0008c0:0x7f4fbc0008d0:0x7f4fceff3000 result -5:0 close free
> 2014-05-22 11:55:16+0000 266485 [19131]: 0751cdb1 aio collect 0x7f4fbc000910:0x7f4fbc000920:0x7f4fcedf0000 result -5:0 close free
> 2014-05-22 11:55:16+0000 266485 [19131]: 0751cdb1 aio collect 0x7f4fbc000960:0x7f4fbc000970:0x7f4fcecee000 result -5:0 close free
> 2014-05-22 11:55:16+0000 266485 [19131]: 0751cdb1 aio collect 0x7f4fbc0009b0:0x7f4fbc0009c0:0x7f4fcebec000 result -5:0 close free
> 2014-05-22 12:57:13+0000 270202 [14395]: s2 lockspace 0751cdb1-0ee3-494c-88e2-c0e7df58aa52:4:/dev/0751cdb1-0ee3-494c-88e2-c0e7df58aa52/ids:0
> 2014-06-10 08:11:20+0000 81 [15201]: sanlock daemon started 2.8 host 195b1c86-5d28-4b6a-a31c-924495d6a908.node1.n
> 2014-06-10 08:13:49+0000 230 [15207]: s1 lockspace 0751cdb1-0ee3-494c-88e2-c0e7df58aa52:4:/dev/0751cdb1-0ee3-494c-88e2-c0e7df58aa52/ids:0
> 
> /var/log/ovirt-engine.log:
> 
> 2014-06-10 10:49:27,492 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-42) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:49:27,514 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-19) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:50:00,265 INFO  [org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler_Worker-57) [da9c949] Autorecovering 1 storage domains
> 2014-06-10 10:50:00,265 INFO  [org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler_Worker-57) [da9c949] Autorecovering storage domains id: 9923a5a1-61e0-4edb-a04d-22c962190c53
> 2014-06-10 10:50:00,266 INFO  [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-57) [300d83fb] Running command: ConnectDomainToStorageCommand internal: true. Entities affected :  ID: 9923a5a1-61e0-4edb-a04d-22c962190c53 Type: Storage
> 2014-06-10 10:50:00,266 INFO  [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-57) [300d83fb] ConnectDomainToStorage. Before Connect all hosts to pool. Time:6/10/14 10:50 AM
> 2014-06-10 10:50:00,466 INFO  [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] (org.ovirt.thread.pool-6-thread-47) [358da518] Running command:       ConnectStorageToVdsCommand internal: true. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: System
> 2014-06-10 10:50:00,470 INFO  [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] (org.ovirt.thread.pool-6-thread-49) [2aa454] Running command: ConnectStorageToVdsCommand internal: true. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: System
> 2014-06-10 10:50:00,474 INFO  [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] (org.ovirt.thread.pool-6-thread-38) [6156f20d] Running command: ConnectStorageToVdsCommand internal: true. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: System
> 2014-06-10 10:50:00,570 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-38) [6156f20d] START, ConnectStorageServerVDSCommand(HostName = node2.test.me, HostId = 28fdcb5d-7acd-410e-8b65-0b4f483cb575, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: xx.xxx.xxx.xxx:/var/lib/exports/iso, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 5ac9e75d
> 2014-06-10 10:50:00,573 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [2aa454] START,       ConnectStorageServerVDSCommand(HostName = node3.test.me, HostId = 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: xx.xxx.xxx.xxx:/var/lib/exports/iso, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 11873507
> 2014-06-10 10:50:00,572 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-47) [358da518] START, ConnectStorageServerVDSCommand(HostName = node4.test.me, HostId = bb13752e-85cb-4945-822b-48ab2a7b1329, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: xx.xxx.xxx.xxx:/var/lib/exports/iso, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 487c35ad
> 2014-06-10 10:50:04,526 INFO  [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (ajp--127.0.0.1-8702-1) [52284e89] Lock Acquired to object EngineLock [exclusiveLocks= key: 9923a5a1-61e0-4edb-a04d-22c962190c53 value: STORAGE
> , sharedLocks= ]
> 2014-06-10 10:50:05,865 INFO  [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-20) [52284e89] Running command: ActivateStorageDomainCommand internal: false. Entities affected :  ID: 9923a5a1-61e0-4edb-a04d-22c962190c53 Type: Storage
> 2014-06-10 10:50:06,010 INFO  [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-20) [52284e89] Lock freed to object EngineLock [exclusiveLocks= key: 9923a5a1-61e0-4edb-a04d-22c962190c53 value: STORAGE
> , sharedLocks= ]
> 2014-06-10 10:50:06,010 INFO  [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-20) [52284e89] ActivateStorage Domain. Before Connect all hosts to pool. Time:6/10/14 10:50 AM
> 2014-06-10 10:50:06,062 INFO  [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] (org.ovirt.thread.pool-6-thread-13) [649a51c3] Running command:       ConnectStorageToVdsCommand internal: true. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: System
> 2014-06-10 10:50:06,066 INFO  [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] (org.ovirt.thread.pool-6-thread-14) [27b7ad5a] Running command:       ConnectStorageToVdsCommand internal: true. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: System
> 2014-06-10 10:50:06,070 INFO  [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] (org.ovirt.thread.pool-6-thread-11) [65c2858f] Running command: ConnectStorageToVdsCommand internal: true. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: System
> 2014-06-10 10:50:06,176 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-13) [649a51c3] START, ConnectStorageServerVDSCommand(HostName = node4.test.me, HostId = bb13752e-85cb-4945-822b-48ab2a7b1329, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: xx.xxx.xxx.xxx:/var/lib/exports/iso, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 1561dfc
> 2014-06-10 10:50:06,199 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-14) [27b7ad5a] START, ConnectStorageServerVDSCommand(HostName = node3.test.me, HostId = 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: xx.xxx.xxx.xxx:/var/lib/exports/iso, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: d51a4d5
> 2014-06-10 10:50:06,206 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-11) [65c2858f] START, ConnectStorageServerVDSCommand(HostName = node2.test.me, HostId = 28fdcb5d-7acd-410e-8b65-0b4f483cb575, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: xx.xxx.xxx.xxx:/var/lib/exports/iso, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 48278314
> 2014-06-10 10:51:00,615 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-47) [358da518] FINISH, ConnectStorageServerVDSCommand, return: {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 487c35ad
> 2014-06-10 10:51:00,620 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [2aa454] FINISH, ConnectStorageServerVDSCommand, return: {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 11873507
> 2014-06-10 10:51:00,623 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-38) [6156f20d] FINISH, ConnectStorageServerVDSCommand, return: {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 5ac9e75d
> 2014-06-10 10:51:00,625 INFO  [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-57) [300d83fb] ConnectDomainToStorage. After Connect all hosts to pool. Time:6/10/14 10:51 AM
> 2014-06-10 10:51:03,733 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-45) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:51:06,229 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-13) [649a51c3] FINISH, ConnectStorageServerVDSCommand, return: {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 1561dfc
> 2014-06-10 10:51:06,271 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-14) [27b7ad5a] FINISH, ConnectStorageServerVDSCommand, return: {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: d51a4d5
> 2014-06-10 10:51:06,273 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-11) [65c2858f] FINISH, ConnectStorageServerVDSCommand, return: {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 48278314
> 2014-06-10 10:51:06,275 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-20) [52284e89] START, ActivateStorageDomainVDSCommand( storagePoolId = 00000002-0002-0002-0002-000000000390, ignoreFailoverLimit = false, storageDomainId = 9923a5a1-61e0-4edb-a04d-22c962190c53), log id: 23981a1b
> 2014-06-10 10:51:13,278 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-50) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:51:22,474 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-6) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:51:35,071 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-35) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:52:34,068 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-20) [52284e89] Failed in ActivateStorageDomainVDS method
> 2014-06-10 10:52:34,069 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [52284e89] IrsBroker::Failed::ActivateStorageDomainVDS due to: IRSErrorException: IRSGenericException: IRSErrorException: Failed to ActivateStorageDomainVDS, error = Storage domain does not exist: ('9923a5a1-61e0-4edb-a04d-22c962190c53',), code = 358
> 2014-06-10 10:52:34,146 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-6-thread-20) [52284e89] START, SpmStopVDSCommand(HostName = node2.test.me, HostId = 28fdcb5d-7acd-410e-8b65-0b4f483cb575, storagePoolId = 00000002-0002-0002-0002-000000000390), log id: 2c173efc
> 2014-06-10 10:52:34,202 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-6-thread-20) [52284e89] SpmStopVDSCommand::Stopping SPM on vds node2.test.me, pool id 00000002-0002-0002-0002-000000000390
> 2014-06-10 10:52:38,260 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-6-thread-20) [52284e89] FINISH, SpmStopVDSCommand, log id: 2c173efc
> 2014-06-10 10:52:38,261 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [52284e89] Irs placed on server 28fdcb5d-7acd-410e-8b65-0b4f483cb575 failed. Proceed Failover
> 2014-06-10 10:52:38,327 INFO  [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] Running command: SetStoragePoolStatusCommand internal: true. Entities affected :  ID: 00000002-0002-0002-0002-000000000390 Type: StoragePool
> 2014-06-10 10:52:38,329 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] Storage Pool 00000002-0002-0002-0002-000000000390 - Updating Storage Domain 0751cdb1-0ee3-494c-88e2-c0e7df58aa52 status from Active to Unknown, reason : null
> 2014-06-10 10:52:38,402 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] Correlation ID: 6808e2ec, Call Stack: null, Custom Event ID: -1, Message: Data Center is being initialized, please wait for initialization to complete.
> 2014-06-10 10:52:38,409 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] hostFromVds::selectedVds - node3.test.me, spmStatus Free, storage pool Default
> 2014-06-10 10:52:38,412 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] SPM Init: could not find reported vds or not up - pool:Default vds_spm_id: 3
> 2014-06-10 10:52:38,452 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] SPM selection - vds seems as spm node2.test.me
> 2014-06-10 10:52:38,482 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] START, SpmStopVDSCommand(HostName = node2.test.me, HostId = 28fdcb5d-7acd-410e-8b65-0b4f483cb575, storagePoolId = 00000002-0002-0002-0002-000000000390), log id: 4c424d7c
> 2014-06-10 10:52:38,491 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] Command org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand return value 
>  
> TaskStatusListReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=654, mMessage=Not SPM]]
> 
> 2014-06-10 10:52:38,509 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] HostName = node2.test.me
> 2014-06-10 10:52:38,509 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] Command HSMGetAllTasksStatusesVDSCommand(HostName = node2.test.me, HostId = 28fdcb5d-7acd-410e-8b65-0b4f483cb575) execution failed. Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM
> 2014-06-10 10:52:38,510 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] FINISH, SpmStopVDSCommand, log id: 4c424d7c
> 2014-06-10 10:52:38,510 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] spm stop succeeded, continuing with spm selection
> 2014-06-10 10:52:38,524 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] starting spm on vds node3.test.me, storage pool Default, prevId 3, LVER 2
> 2014-06-10 10:52:38,542 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] START, SpmStartVDSCommand(HostName = node3.test.me, HostId = 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 00000002-0002-0002-0002-000000000390, prevId=3, prevLVER=2, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=true), log id: 3056d822
> 2014-06-10 10:52:38,555 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] spmStart polling started: taskId = 34c9a402-a886-41e3-bf64-52f554669aba
> 2014-06-10 10:52:39,301 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-22) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:52:39,324 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-42) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:52:52,175 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-10) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:53:41,630 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-13) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:54:00,852 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-36) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:54:06,936 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-43) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:54:09,979 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-22) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:54:43,966 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-6) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:54:53,204 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-43) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:54:56,248 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-22) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:55:26,994 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-47) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:55:30,037 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-38) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:55:30,058 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-10) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:56:50,452 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] spmStart polling ended: taskId = 34c9a402-a886-41e3-bf64-52f554669aba task status = finished
> 2014-06-10 10:56:50,498 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] spmStart polling ended, spm status: SPM
> 2014-06-10 10:56:50,517 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] START, HSMClearTaskVDSCommand(HostName = node3.test.me, HostId = 7415506c-cda7-4018-804d-5f6d3beddbfb, taskId=34c9a402-a886-41e3-bf64-52f554669aba), log id: 2e6d491a
> 2014-06-10 10:56:50,573 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] FINISH, HSMClearTaskVDSCommand, log id: 2e6d491a
> 2014-06-10 10:56:50,574 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult at 6e67a4c7, log id: 3056d822
> 2014-06-10 10:56:50,597 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] Initialize Irs proxy from vds: 10.100.100.48
> 2014-06-10 10:56:50,622 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host node3.test.me (Address: 10.100.100.48).
> 2014-06-10 10:56:50,644 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-31) [6808e2ec] START, SPMGetAllTasksInfoVDSCommand( storagePoolId = 00000002-0002-0002-0002-000000000390, ignoreFailoverLimit = false), log id: 7eba1136
> 2014-06-10 10:57:48,685 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-40) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:58:28,613 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-27) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:58:34,731 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-39) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 10:58:49,587 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] Failed in ActivateStorageDomainVDS method
> 2014-06-10 10:58:49,588 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] IrsBroker::Failed::ActivateStorageDomainVDS due to: IRSErrorException: IRSGenericException: IRSErrorException: Failed to ActivateStorageDomainVDS, error = Storage domain does not exist: ('9923a5a1-61e0-4edb-a04d-22c962190c53',), code = 358
> 2014-06-10 10:58:49,666 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] START, SpmStopVDSCommand(HostName = node3.test.me, HostId = 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 00000002-0002-0002-0002-000000000390), log id: 3a60caf5
> 2014-06-10 10:58:49,711 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] SpmStopVDSCommand::Stopping SPM on vds node3.test.me, pool id 00000002-0002-0002-0002-000000000390
> 2014-06-10 10:58:52,611 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] FINISH, SpmStopVDSCommand, log id: 3a60caf5
> 2014-06-10 10:58:52,611 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [6808e2ec] Irs placed on server 7415506c-cda7-4018-804d-5f6d3beddbfb failed. Proceed Failover
> 2014-06-10 10:58:52,668 INFO  [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] Running command: SetStoragePoolStatusCommand internal: true. Entities affected :  ID: 00000002-0002-0002-0002-000000000390 Type: StoragePool
> 2014-06-10 10:58:52,991 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] Correlation ID: 69ff8cdd, Call Stack: null, Custom Event ID: -1, Message: Data Center is being initialized, please wait for initialization to complete.
> 2014-06-10 10:58:52,997 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] hostFromVds::selectedVds - node4.test.me, spmStatus Free, storage pool Default
> 2014-06-10 10:58:53,000 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] SPM Init: could not find reported vds or not up - pool:Default vds_spm_id: 1
> 2014-06-10 10:58:53,017 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] SPM selection - vds seems as spm node3.test.me
> 2014-06-10 10:58:53,035 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] START, SpmStopVDSCommand(HostName = node3.test.me, HostId = 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 00000002-0002-0002-0002-000000000390), log id: 26ccb709
> 2014-06-10 10:58:53,043 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] Command org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand return value 
>  
> TaskStatusListReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=654, mMessage=Not SPM]]
> 
> 2014-06-10 10:58:53,060 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] HostName = node3.test.me
> 2014-06-10 10:58:53,061 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] Command HSMGetAllTasksStatusesVDSCommand(HostName = node3.test.me, HostId = 7415506c-cda7-4018-804d-5f6d3beddbfb) execution failed. Exception: IRSNonOperationalException: IRSGenericException: IRSErrorException: IRSNonOperationalException: Not SPM
> 2014-06-10 10:58:53,061 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] FINISH, SpmStopVDSCommand, log id: 26ccb709
> 2014-06-10 10:58:53,062 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] spm stop succeeded, continuing with spm selection
> 2014-06-10 10:58:53,091 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] starting spm on vds node4.test.me, storage pool Default, prevId 1, LVER 3
> 2014-06-10 10:58:53,108 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] START, SpmStartVDSCommand(HostName = node4.test.me, HostId = bb13752e-85cb-4945-822b-48ab2a7b1329, storagePoolId = 00000002-0002-0002-0002-000000000390, prevId=1, prevLVER=3, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=true), log id: cf0e42f
> 2014-06-10 10:58:53,117 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] spmStart polling started: taskId = 95b6b948-b7b8-46cb-8cc1-cf9bde431874
> 2014-06-10 11:01:46,832 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-8) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:01:55,979 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-25) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:02:51,420 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-42) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:02:54,477 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-23) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:03:06,978 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-15) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:03:10,033 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-41) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:03:50,450 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] spmStart polling ended: taskId = 95b6b948-b7b8-46cb-8cc1-cf9bde431874 task status = finished
> 2014-06-10 11:03:50,511 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] spmStart polling ended, spm status: SPM
> 2014-06-10 11:03:50,528 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] START, HSMClearTaskVDSCommand(HostName = node4.test.me, HostId = bb13752e-85cb-4945-822b-48ab2a7b1329, taskId=95b6b948-b7b8-46cb-8cc1-cf9bde431874), log id: 2d9cb3dc
> 2014-06-10 11:03:50,588 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] FINISH, HSMClearTaskVDSCommand, log id: 2d9cb3dc
> 2014-06-10 11:03:50,588 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult at 6b9b5783, log id: cf0e42f
> 2014-06-10 11:03:50,890 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] Initialize Irs proxy from vds: 10.100.100.59
> 2014-06-10 11:03:50,935 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host node4.test.me (Address: 10.100.100.59).
> 2014-06-10 11:03:50,980 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-11) [69ff8cdd] START, SPMGetAllTasksInfoVDSCommand( storagePoolId = 00000002-0002-0002-0002-000000000390, ignoreFailoverLimit = false), log id: 6ff3e3ea
> 2014-06-10 11:04:29,664 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-23) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:04:32,809 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-17) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:04:54,324 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-50) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:04:57,368 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-37) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:05:00,433 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-19) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:05:49,611 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] Failed in ActivateStorageDomainVDS method
> 2014-06-10 11:05:49,612 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] IrsBroker::Failed::ActivateStorageDomainVDS due to: IRSErrorException: IRSGenericException: IRSErrorException: Failed to ActivateStorageDomainVDS, error = Storage domain does not exist: ('9923a5a1-61e0-4edb-a04d-22c962190c53',), code = 358
> 2014-06-10 11:05:49,612 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-11) [69ff8cdd] -- executeIrsBrokerCommand: Attempting on storage pool 00000002-0002-0002-0002-000000000390
> 2014-06-10 11:05:49,616 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] FINISH, ActivateStorageDomainVDSCommand, log id: 23981a1b
> 2014-06-10 11:05:49,617 ERROR [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] Command org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException: IRSGenericException: IRSErrorException: Failed to ActivateStorageDomainVDS, error = Storage domain does not exist: ('9923a5a1-61e0-4edb-a04d-22c962190c53',), code = 358 (Failed with error StorageDomainDoesNotExist and code 358)
> 2014-06-10 11:05:49,659 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-11) [69ff8cdd] START, HSMGetAllTasksInfoVDSCommand(HostName = node4.test.me, HostId = bb13752e-85cb-4945-822b-48ab2a7b1329), log id: 6229449d
> 2014-06-10 11:05:49,665 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-11) [69ff8cdd] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 6229449d
> 2014-06-10 11:05:49,666 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-11) [69ff8cdd] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 6ff3e3ea
> 2014-06-10 11:05:49,666 INFO  [org.ovirt.engine.core.bll.AsyncTaskManager] (org.ovirt.thread.pool-6-thread-11) [69ff8cdd] Discovered no tasks on Storage Pool Default
> 2014-06-10 11:05:49,667 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-31) [6808e2ec] -- executeIrsBrokerCommand: Attempting on storage pool 00000002-0002-0002-0002-000000000390
> 2014-06-10 11:05:49,678 INFO  [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] Command [id=8d0ac1fe-ed12-48ac-b8a1-13b8a5f17377]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatusSnapshot [id=storagePoolId = 00000002-0002-0002-0002-000000000390, storageId = 9923a5a1-61e0-4edb-a04d-22c962190c53, status=InActive].
> 2014-06-10 11:05:49,692 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-31) [6808e2ec] START, HSMGetAllTasksInfoVDSCommand(HostName = node4.test.me, HostId = bb13752e-85cb-4945-822b-48ab2a7b1329), log id: 58f9a1ab
> 2014-06-10 11:05:49,733 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-20) [69ff8cdd] Correlation ID: 52284e89, Job ID: f4061f81-ea0d-479a-bd89-b67acae8ec1a, Call Stack: null, Custom Event ID: -1, Message: Failed to activate Storage Domain ISO (Data Center Default) by admin
> 2014-06-10 11:05:49,738 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-31) [6808e2ec] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 58f9a1ab
> 2014-06-10 11:05:49,744 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-31) [6808e2ec] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 7eba1136
> 2014-06-10 11:05:49,745 INFO  [org.ovirt.engine.core.bll.AsyncTaskManager] (org.ovirt.thread.pool-6-thread-31) [6808e2ec] Discovered no tasks on Storage Pool Default
> 2014-06-10 11:07:41,006 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-4) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:07:44,068 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-20) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:07:47,128 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-44) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:07:50,190 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-37) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:07:50,221 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-42) Executing a command: java.util.concurrent.FutureTask , but note that there are 2 tasks in the queue.
> 2014-06-10 11:07:50,222 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-19) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:09:38,306 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-41) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:09:38,328 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-20) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:09:50,816 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-17) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:09:53,859 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-47) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:09:53,881 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-16) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:10:00,118 INFO  [org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler_Worker-31) [17ab55ad] Autorecovering 1 storage domains
> 2014-06-10 11:10:00,119 INFO  [org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler_Worker-31) [17ab55ad] Autorecovering storage domains id: 9923a5a1-61e0-4edb-a04d-22c962190c53
> 2014-06-10 11:10:00,119 INFO  [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-31) [336d7eb4] Running command: ConnectDomainToStorageCommand internal: true. Entities affected :  ID: 9923a5a1-61e0-4edb-a04d-22c962190c53 Type: Storage
> 2014-06-10 11:10:00,120 INFO  [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-31) [336d7eb4] ConnectDomainToStorage. Before Connect all hosts to pool. Time:6/10/14 11:10 AM
> 2014-06-10 11:10:00,196 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-36) Executing a command: java.util.concurrent.FutureTask , but note that there are 2 tasks in the queue.
> 2014-06-10 11:10:00,196 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-32) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:10:00,198 INFO  [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] (org.ovirt.thread.pool-6-thread-15) [5f9a3237] Running command: ConnectStorageToVdsCommand internal: true. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: System
> 2014-06-10 11:10:00,199 INFO  [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] (org.ovirt.thread.pool-6-thread-32) [31a8ddb2] Running command: ConnectStorageToVdsCommand internal: true. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: System
> 2014-06-10 11:10:00,200 INFO  [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] (org.ovirt.thread.pool-6-thread-36) [3fc1d201] Running command: ConnectStorageToVdsCommand internal: true. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: System
> 2014-06-10 11:10:00,321 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-15) [5f9a3237] START, ConnectStorageServerVDSCommand(HostName = node2.test.me, HostId = 28fdcb5d-7acd-410e-8b65-0b4f483cb575, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: xx.xxx.xxx.xxx:/var/lib/exports/iso, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 7d344fac
> 2014-06-10 11:10:00,330 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [31a8ddb2] START, ConnectStorageServerVDSCommand(HostName = node3.test.me, HostId = 7415506c-cda7-4018-804d-5f6d3beddbfb, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: xx.xxx.xxx.xxx:/var/lib/exports/iso, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 7c77503a
> 2014-06-10 11:10:00,334 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-36) [3fc1d201] START, ConnectStorageServerVDSCommand(HostName = node4.test.me, HostId = bb13752e-85cb-4945-822b-48ab2a7b1329, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: f47e9f99-9989-4297-a84f-4f75338eeced, connection: xx.xxx.xxx.xxx:/var/lib/exports/iso, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 449484a0
> 2014-06-10 11:10:21,875 WARN  [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-48) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue.
> 2014-06-10 11:11:00,372 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [31a8ddb2] FINISH, ConnectStorageServerVDSCommand, return: {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 7c77503a
> 2014-06-10 11:11:00,382 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-15) [5f9a3237] FINISH, ConnectStorageServerVDSCommand, return: {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 7d344fac
> 2014-06-10 11:11:00,409 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-6-thread-36) [3fc1d201] FINISH, ConnectStorageServerVDSCommand, return: {f47e9f99-9989-4297-a84f-4f75338eeced=0}, log id: 449484a0
> 2014-06-10 11:11:00,410 INFO  [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-31) [336d7eb4] ConnectDomainToStorage. After Connect all hosts to pool. Time:6/10/14 11:11 AM
> <vdsm.log.tar.gz>
_______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
