[Users] oVirt 3.3 Fedora 19 add gluster storage permissions error

Hello,

New oVirt 3.3 install on Fedora 19. When I try to add a gluster storage domain I get the following:

UI error:
Error while executing action Add Storage Connection: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.

VDSM logs contain:

Thread-393::DEBUG::2013-09-19 11:59:42,399::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-393::DEBUG::2013-09-19 11:59:42,399::task::579::TaskManager.Task::(_updateState) Task=`12c38fec-0072-4974-a8e3-9125b3908246`::moving from state init -> state preparing
Thread-393::INFO::2013-09-19 11:59:42,400::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:/rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
Thread-393::DEBUG::2013-09-19 11:59:42,405::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n /usr/bin/mount -t glusterfs 192.168.1.1:/rep2-virt /rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt' (cwd None)
Thread-393::DEBUG::2013-09-19 11:59:42,490::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt' (cwd None)
Thread-393::ERROR::2013-09-19 11:59:42,505::hsm::2382::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2379, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 227, in connect
    raise e
StorageServerAccessPermissionError: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt'
Thread-393::DEBUG::2013-09-19 11:59:42,506::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {}
Thread-393::INFO::2013-09-19 11:59:42,506::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 469, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-393::DEBUG::2013-09-19 11:59:42,506::task::1168::TaskManager.Task::(prepare) Task=`12c38fec-0072-4974-a8e3-9125b3908246`::finished: {'statuslist': [{'status': 469, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-393::DEBUG::2013-09-19 11:59:42,506::task::579::TaskManager.Task::(_updateState) Task=`12c38fec-0072-4974-a8e3-9125b3908246`::moving from state preparing -> state finished
Thread-393::DEBUG::2013-09-19 11:59:42,506::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-393::DEBUG::2013-09-19 11:59:42,507::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-393::DEBUG::2013-09-19 11:59:42,507::task::974::TaskManager.Task::(_decref) Task=`12c38fec-0072-4974-a8e3-9125b3908246`::ref 0 aborting False

Other info:
- I have two nodes, ovirt001 and ovirt002; they are both Fedora 19.
- The gluster bricks are replicated and located on the nodes (ovirt001:rep2-virt, ovirt002:rep2-virt).
- Local directory for the mount: I changed permissions on glusterSD to 777 (it was 755), and there is nothing in that directory:

[root@ovirt001 mnt]# pwd
/rhev/data-center/mnt
[root@ovirt001 mnt]# ll
total 4
drwxrwxrwx. 2 vdsm kvm 4096 Sep 19 12:18 glusterSD

I find it odd that the UUIDs listed in the vdsm logs are all zeros.

Appreciate any help,

Steve
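For reference, the access check vdsm performs above can be retried by hand. A minimal sketch, reusing the volume from the log; /mnt/gtest and the probe file name are just placeholders for a scratch mount point, and vdsm is uid 36 on oVirt hosts:

mkdir -p /mnt/gtest
mount -t glusterfs 192.168.1.1:/rep2-virt /mnt/gtest
sudo -u vdsm ls -l /mnt/gtest                 # roughly the access check that is failing
sudo -u vdsm touch /mnt/gtest/probe && sudo -u vdsm rm /mnt/gtest/probe
umount /mnt/gtest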

Steve,

Having just installed gluster on my local hosts, I saw the exact same error in my setup. I am going to assume the following are true:

1. You made a partition just for gluster.
2. You followed the oVirt 3.3, Glusterized article from Jason Brooks.

I got the exact same error because, for some reason, the owner of the directory I put the gluster bricks in keeps changing back to root instead of kvm:kvm. Each time I reboot my host that happens, so I am assuming I didn't set something up correctly. But you can solve it by chowning the directory, and everything will work again.

If that doesn't help, well, I don't know; I just started using it myself, and happen to have seen the same error at some point.

Alexander
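Concretely, the chown Alexander describes would look something like this (a sketch: /data/rep2-virt is a stand-in for wherever your brick directory actually lives; on oVirt hosts vdsm:kvm maps to uid 36 / gid 36):

chown -R 36:36 /data/rep2-virt      # equivalently: chown -R vdsm:kvm /data/rep2-virt
ls -ln /data/rep2-virt              # numeric listing should now show owner/group 36 36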

On 19 Sep 2013 at 19:10, "Alexander Wels" <awels@redhat.com> wrote:
I had a similar problem with the gluster configuration for OpenStack and found the solution in this oVirt page:
http://www.ovirt.org/Features/GlusterFS_Storage_Domain#Setting_up_a_GlusterF...

In particular: if the GlusterFS volume was created manually, then ensure the below options are set on the volume, so that it is accessible from oVirt:

volume set <volname> storage.owner-uid=36
volume set <volname> storage.owner-gid=36

Check that; otherwise each time a node mounts the gluster volume it will reset the permissions to root.

HIH,
Gianluca
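Note the 'key=value' spelling above comes from the wiki page; the gluster CLI itself takes the key and the value as separate arguments. Using Steve's volume name, on one of the gluster nodes this would presumably be run as:

gluster volume set rep2-virt storage.owner-uid 36
gluster volume set rep2-virt storage.owner-gid 36
gluster volume info rep2-virt       # both options should now appear in the volume info output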

Thanks,
Alexander

You can either use the volume set option as mentioned in the wiki page, or, if the Gluster volume is added/managed in the oVirt UI, go to the "Volumes" tab, select the gluster volume, and click on "Optimize for Virt Store". That should also set the volume options, in addition to a few other things.

thanx,
deepak
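For reference, on the gluster side "Optimize for Virt Store" presumably boils down to something like the following sketch (the exact option set depends on the oVirt/gluster version; 'group virt' applies the profile shipped in /var/lib/glusterd/groups/virt):

gluster volume set rep2-virt group virt             # apply the virt tuning profile
gluster volume set rep2-virt storage.owner-uid 36   # same ownership options as the wiki page
gluster volume set rep2-virt storage.owner-gid 36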

Awesome, thanks guys. It's weird that the article tells you to set with 'key=value' rather than 'key value'; must be some legacy stuff.

Once those changes are in place I hit a different error. Deepak, maybe you've seen this one on new storage domain add:

[root@ovirt-manager2 ~]# tail -f /var/log/ovirt-engine/engine.log

2013-09-20 13:16:36,226 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp--127.0.0.1-8702-9) Command CreateStoragePoolVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
2013-09-20 13:16:36,229 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp--127.0.0.1-8702-9) FINISH, CreateStoragePoolVDSCommand, log id: 672635cc
2013-09-20 13:16:36,231 ERROR [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand] (ajp--127.0.0.1-8702-9) Command org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) (Failed with VDSM error AcquireHostIdFailure and code 661)
2013-09-20 13:16:36,296 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) Correlation ID: 11070337, Call Stack: null, Custom Event ID: -1, Message: Failed to attach Storage Domains to Data Center Default. (User: admin@internal)
2013-09-20 13:16:36,299 INFO [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand] (ajp--127.0.0.1-8702-9) Lock freed to object EngineLock [exclusiveLocks= key: 5849b030-626e-47cb-ad90-3ce782d831b3 value: POOL , sharedLocks= ]
2013-09-20 13:16:36,387 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating CHANGED_ENTITY of org.ovirt.engine.core.common.businessentities.StoragePool; snapshot: id=5849b030-626e-47cb-ad90-3ce782d831b3.
2013-09-20 13:16:36,398 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, storageId = 17d21ac7-5859-4f25-8de7-2a9433d50c11.
2013-09-20 13:16:36,425 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating CHANGED_ENTITY of org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: id=17d21ac7-5859-4f25-8de7-2a9433d50c11.
2013-09-20 13:16:36,464 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) Correlation ID: 302ae6eb, Job ID: 014ec59b-e6d7-4e5e-b588-4fb0dfa8f1c8, Call Stack: null, Custom Event ID: -1, Message: Failed to attach Storage Domain rep2-virt to Data Center Default. (User: admin@internal)

[root@ovirt001 ~]# tail -f /var/log/vdsm/vdsm.log

Thread-32374::DEBUG::2013-09-20 13:16:18,107::task::579::TaskManager.Task::(_updateState) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::moving from state init -> state preparing
Thread-32374::INFO::2013-09-20 13:16:18,107::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-32374::INFO::2013-09-20 13:16:18,107::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::1168::TaskManager.Task::(prepare) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::finished: {}
Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::579::TaskManager.Task::(_updateState) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::moving from state preparing -> state finished
Thread-32374::DEBUG::2013-09-20 13:16:18,108::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32374::DEBUG::2013-09-20 13:16:18,108::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::974::TaskManager.Task::(_decref) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::ref 0 aborting False
Thread-32379::DEBUG::2013-09-20 13:16:29,509::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32379::DEBUG::2013-09-20 13:16:29,510::task::579::TaskManager.Task::(_updateState) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::moving from state init -> state preparing
Thread-32379::INFO::2013-09-20 13:16:29,510::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
Thread-32379::DEBUG::2013-09-20 13:16:29,516::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-32379::DEBUG::2013-09-20 13:16:29,523::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-32379::DEBUG::2013-09-20 13:16:29,523::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain}
Thread-32379::INFO::2013-09-20 13:16:29,523::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-32379::DEBUG::2013-09-20 13:16:29,523::task::1168::TaskManager.Task::(prepare) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-32379::DEBUG::2013-09-20 13:16:29,524::task::579::TaskManager.Task::(_updateState) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::moving from state preparing -> state finished
Thread-32379::DEBUG::2013-09-20 13:16:29,524::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32379::DEBUG::2013-09-20 13:16:29,524::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32379::DEBUG::2013-09-20 13:16:29,524::task::974::TaskManager.Task::(_decref) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::ref 0 aborting False
Thread-32382::DEBUG::2013-09-20 13:16:29,888::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32382::DEBUG::2013-09-20 13:16:29,888::task::579::TaskManager.Task::(_updateState) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::moving from state init -> state preparing
Thread-32382::INFO::2013-09-20 13:16:29,889::logUtils::44::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=7, sdUUID='17d21ac7-5859-4f25-8de7-2a9433d50c11', domainName='rep2-virt', typeSpecificArg='192.168.1.1:rep2-virt', domClass=1, domVersion='3', options=None)
Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::807::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::809::SamplingMethod::(__call__) Got in to sampling method
Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::807::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::809::SamplingMethod::(__call__) Got in to sampling method
Thread-32382::DEBUG::2013-09-20 13:16:29,889::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
Thread-32382::DEBUG::2013-09-20 13:16:29,904::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
Thread-32382::DEBUG::2013-09-20 13:16:29,904::misc::817::SamplingMethod::(__call__) Returning last result
Thread-32382::DEBUG::2013-09-20 13:16:32,931::multipath::111::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
Thread-32382::DEBUG::2013-09-20 13:16:33,255::multipath::111::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::483::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::485::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::494::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::496::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,257::lvm::514::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,257::lvm::516::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,257::misc::817::SamplingMethod::(__call__) Returning last result
Thread-32382::ERROR::2013-09-20 13:16:33,257::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32382::ERROR::2013-09-20 13:16:33,257::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32382::DEBUG::2013-09-20 13:16:33,258::lvm::374::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,259::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free 17d21ac7-5859-4f25-8de7-2a9433d50c11' (cwd None)
Thread-32382::DEBUG::2013-09-20 13:16:33,285::lvm::311::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' Volume group "17d21ac7-5859-4f25-8de7-2a9433d50c11" not found\n'; <rc> = 5
Thread-32382::WARNING::2013-09-20 13:16:33,286::lvm::379::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group "17d21ac7-5859-4f25-8de7-2a9433d50c11" not found']
Thread-32382::DEBUG::2013-09-20 13:16:33,286::lvm::403::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-32382::ERROR::2013-09-20 13:16:33,295::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: ('17d21ac7-5859-4f25-8de7-2a9433d50c11',)
Thread-32382::INFO::2013-09-20 13:16:33,295::nfsSD::69::Storage.StorageDomain::(create) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11 domainName=rep2-virt remotePath=192.168.1.1:rep2-virt domClass=1
Thread-32382::DEBUG::2013-09-20 13:16:33,430::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-32382::DEBUG::2013-09-20 13:16:33,445::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=[]
Thread-32382::WARNING::2013-09-20 13:16:33,446::persistentDict::256::Storage.PersistentDict::(refresh) data has no embedded checksum - trust it as it is
Thread-32382::DEBUG::2013-09-20 13:16:33,447::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction
Thread-32382::DEBUG::2013-09-20 13:16:33,448::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes
Thread-32382::DEBUG::2013-09-20 13:16:33,449::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32382::DEBUG::2013-09-20 13:16:33,454::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction
Thread-32382::DEBUG::2013-09-20 13:16:33,457::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/192.168.1.1:rep2-virt/17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32382::DEBUG::2013-09-20 13:16:33,457::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-32382::DEBUG::2013-09-20 13:16:33,469::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32382::DEBUG::2013-09-20 13:16:33,472::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
Thread-32382::DEBUG::2013-09-20 13:16:33,472::resourceManager::420::ResourceManager::(registerNamespace) Registering namespace '17d21ac7-5859-4f25-8de7-2a9433d50c11_imageNS'
Thread-32382::DEBUG::2013-09-20 13:16:33,473::resourceManager::420::ResourceManager::(registerNamespace) Registering namespace '17d21ac7-5859-4f25-8de7-2a9433d50c11_volumeNS'
Thread-32382::DEBUG::2013-09-20 13:16:33,473::clusterlock::137::initSANLock::(initSANLock) Initializing SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32387::DEBUG::2013-09-20 13:16:33,717::task::579::TaskManager.Task::(_updateState) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::moving from state init -> state preparing
Thread-32387::INFO::2013-09-20 13:16:33,718::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-32387::INFO::2013-09-20 13:16:33,718::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-32387::DEBUG::2013-09-20 13:16:33,718::task::1168::TaskManager.Task::(prepare) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::finished: {}
Thread-32387::DEBUG::2013-09-20 13:16:33,718::task::579::TaskManager.Task::(_updateState) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::moving from state preparing -> state finished
Thread-32387::DEBUG::2013-09-20 13:16:33,718::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32387::DEBUG::2013-09-20 13:16:33,718::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32387::DEBUG::2013-09-20 13:16:33,719::task::974::TaskManager.Task::(_decref) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::ref 0 aborting False
Thread-32382::ERROR::2013-09-20 13:16:34,126::clusterlock::145::initSANLock::(initSANLock) Cannot initialize SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/clusterlock.py", line 140, in initSANLock
    sanlock.init_lockspace(sdUUID, idsPath)
SanlockException: (22, 'Sanlock lockspace init failure', 'Invalid argument')
Thread-32382::WARNING::2013-09-20 13:16:34,127::sd::428::Storage.StorageDomain::(initSPMlease) lease did not initialize successfully
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sd.py", line 423, in initSPMlease
    self._clusterLock.initLock()
  File "/usr/share/vdsm/storage/clusterlock.py", line 163, in initLock
    initSANLock(self._sdUUID, self._idsPath, self._leasesPath)
  File "/usr/share/vdsm/storage/clusterlock.py", line 146, in initSANLock
    raise se.ClusterLockInitError()
ClusterLockInitError: Could not initialize cluster lock: ()
Thread-32382::DEBUG::2013-09-20 13:16:34,127::hsm::2624::Storage.HSM::(createStorageDomain) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain, 17d21ac7-5859-4f25-8de7-2a9433d50c11: storage.glusterSD.findDomain}
Thread-32382::INFO::2013-09-20 13:16:34,128::logUtils::47::dispatcher::(wrapper) Run and protect: createStorageDomain, Return response: None
Thread-32382::DEBUG::2013-09-20 13:16:34,128::task::1168::TaskManager.Task::(prepare) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::finished: None
Thread-32382::DEBUG::2013-09-20 13:16:34,128::task::579::TaskManager.Task::(_updateState) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::moving from state preparing -> state finished
Thread-32382::DEBUG::2013-09-20 13:16:34,128::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32382::DEBUG::2013-09-20 13:16:34,128::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32382::DEBUG::2013-09-20 13:16:34,129::task::974::TaskManager.Task::(_decref) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::ref 0 aborting False
Thread-32389::DEBUG::2013-09-20 13:16:34,219::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32389::DEBUG::2013-09-20 13:16:34,219::task::579::TaskManager.Task::(_updateState) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::moving from state init -> state preparing
Thread-32389::INFO::2013-09-20 13:16:34,220::logUtils::44::dispatcher::(wrapper) Run and protect: getStorageDomainStats(sdUUID='17d21ac7-5859-4f25-8de7-2a9433d50c11', options=None)
Thread-32389::DEBUG::2013-09-20 13:16:34,220::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`9f37d808-9ad2-4c06-99ef-449b43049e80`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2784' at 'getStorageDomainStats'
Thread-32389::DEBUG::2013-09-20 13:16:34,220::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' for lock type 'shared'
Thread-32389::DEBUG::2013-09-20 13:16:34,221::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free. Now locking as 'shared' (1 active user)
Thread-32389::DEBUG::2013-09-20 13:16:34,221::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`9f37d808-9ad2-4c06-99ef-449b43049e80`::Granted request
Thread-32389::DEBUG::2013-09-20 13:16:34,221::task::811::TaskManager.Task::(resourceAcquired) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::_resourcesAcquired: Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11 (shared)
Thread-32389::DEBUG::2013-09-20 13:16:34,221::task::974::TaskManager.Task::(_decref) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::ref 1 aborting False
Thread-32389::INFO::2013-09-20 13:16:34,223::logUtils::47::dispatcher::(wrapper) Run and protect: getStorageDomainStats, Return response: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '14182986809344', 'disktotal': '14199600185344', 'mdafree': 0}}
Thread-32389::DEBUG::2013-09-20 13:16:34,223::task::1168::TaskManager.Task::(prepare) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::finished: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '14182986809344', 'disktotal': '14199600185344', 'mdafree': 0}}
Thread-32389::DEBUG::2013-09-20 13:16:34,223::task::579::TaskManager.Task::(_updateState) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::moving from state preparing -> state finished
Thread-32389::DEBUG::2013-09-20 13:16:34,223::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11': < ResourceRef 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', isValid: 'True' obj: 'None'>}
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11'
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' (0 active users)
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free, finding out if anyone is waiting for it.
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', Clearing records.
Thread-32389::DEBUG::2013-09-20 13:16:34,225::task::974::TaskManager.Task::(_decref) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::ref 0 aborting False
Thread-32390::DEBUG::2013-09-20 13:16:35,099::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32390::DEBUG::2013-09-20 13:16:35,099::task::579::TaskManager.Task::(_updateState) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::moving from state init -> state preparing
Thread-32390::INFO::2013-09-20 13:16:35,099::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}], options=None)
Thread-32390::DEBUG::2013-09-20 13:16:35,105::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-32390::DEBUG::2013-09-20 13:16:35,112::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', '17d21ac7-5859-4f25-8de7-2a9433d50c11')
Thread-32390::DEBUG::2013-09-20 13:16:35,113::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain, 17d21ac7-5859-4f25-8de7-2a9433d50c11: storage.glusterSD.findDomain}
Thread-32390::INFO::2013-09-20 13:16:35,113::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}]}
Thread-32390::DEBUG::2013-09-20 13:16:35,113::task::1168::TaskManager.Task::(prepare) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::finished: {'statuslist': [{'status': 0, 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}]}
Thread-32390::DEBUG::2013-09-20 13:16:35,113::task::579::TaskManager.Task::(_updateState) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::moving from state preparing -> state finished
Thread-32390::DEBUG::2013-09-20 13:16:35,113::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32390::DEBUG::2013-09-20 13:16:35,114::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32390::DEBUG::2013-09-20 13:16:35,114::task::974::TaskManager.Task::(_decref) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::ref 0 aborting False
Thread-32393::DEBUG::2013-09-20 13:16:35,148::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32393::DEBUG::2013-09-20 13:16:35,148::task::579::TaskManager.Task::(_updateState) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state init -> state preparing
Thread-32393::INFO::2013-09-20 13:16:35,148::logUtils::44::dispatcher::(wrapper) Run and protect: createStoragePool(poolType=None, spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', poolName='Default', masterDom='17d21ac7-5859-4f25-8de7-2a9433d50c11', domList=['17d21ac7-5859-4f25-8de7-2a9433d50c11'], masterVersion=9, lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60, ioOpTimeoutSec=10, leaseRetries=3, options=None)
Thread-32393::INFO::2013-09-20 13:16:35,149::fileSD::315::Storage.StorageDomain::(validate) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32393::DEBUG::2013-09-20 13:16:35,161::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`31060ad0-6633-4bbf-a859-b3f0c27af760`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '954' at 'createStoragePool'
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' for lock type 'exclusive'
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free. Now locking as 'exclusive' (1 active user)
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`31060ad0-6633-4bbf-a859-b3f0c27af760`::Granted request
Thread-32393::DEBUG::2013-09-20 13:16:35,163::task::811::TaskManager.Task::(resourceAcquired) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_resourcesAcquired: Storage.5849b030-626e-47cb-ad90-3ce782d831b3 (exclusive)
Thread-32393::DEBUG::2013-09-20 13:16:35,163::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting False
Thread-32393::DEBUG::2013-09-20 13:16:35,163::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`db5f52a0-d455-419c-b8a5-86fc6b695571`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '956' at 'createStoragePool'
Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' for lock type 'exclusive'
Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free. Now locking as 'exclusive' (1 active user)
Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`db5f52a0-d455-419c-b8a5-86fc6b695571`::Granted request
Thread-32393::DEBUG::2013-09-20 13:16:35,165::task::811::TaskManager.Task::(resourceAcquired) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_resourcesAcquired: Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11 (exclusive)
Thread-32393::DEBUG::2013-09-20 13:16:35,165::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting False
Thread-32393::INFO::2013-09-20 13:16:35,166::sp::592::Storage.StoragePool::(create) spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 poolName=Default master_sd=17d21ac7-5859-4f25-8de7-2a9433d50c11 domList=['17d21ac7-5859-4f25-8de7-2a9433d50c11'] masterVersion=9 {'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3, 'LOCKRENEWALINTERVALSEC': 5}
Thread-32393::INFO::2013-09-20 13:16:35,166::fileSD::315::Storage.StorageDomain::(validate) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32393::DEBUG::2013-09-20 13:16:35,177::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::DEBUG::2013-09-20 13:16:35,188::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::WARNING::2013-09-20 13:16:35,189::fileUtils::167::Storage.fileUtils::(createdir) Dir /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3 already exists
Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction
Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes
Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=61b814a65ea3ede1f0ae1d58e139adc06bf9eda5']
Thread-32393::DEBUG::2013-09-20 13:16:35,194::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction
Thread-32393::INFO::2013-09-20 13:16:35,194::clusterlock::174::SANLock::(acquireHostId) Acquiring host id for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 (id: 250)
Thread-32393::ERROR::2013-09-20 13:16:36,196::task::850::TaskManager.Task::(_setError) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 960, in createStoragePool
    masterVersion, leaseParams)
  File "/usr/share/vdsm/storage/sp.py", line 617, in create
    self._acquireTemporaryClusterLock(msdUUID, leaseParams)
  File "/usr/share/vdsm/storage/sp.py", line 559, in _acquireTemporaryClusterLock
    msd.acquireHostId(self.id)
  File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId
    self._clusterLock.acquireHostId(hostId, async)
  File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId
    raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
Thread-32393::DEBUG::2013-09-20 13:16:36,196::task::869::TaskManager.Task::(_run) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Task._run: 72348d40-8442-4dbf-bc66-1d354da5fc31 (None, '5849b030-626e-47cb-ad90-3ce782d831b3', 'Default', '17d21ac7-5859-4f25-8de7-2a9433d50c11', ['17d21ac7-5859-4f25-8de7-2a9433d50c11'], 9, None, 5, 60, 10, 3) {} failed - stopping task
Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::1194::TaskManager.Task::(stop) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::stopping in state preparing (force False)
Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting True
Thread-32393::INFO::2013-09-20 13:16:36,197::task::1151::TaskManager.Task::(prepare) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::aborting: Task is aborted: 'Cannot acquire host id' - code 661
Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::1156::TaskManager.Task::(prepare) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Prepare: aborted: Cannot acquire host id
Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 0 aborting True
Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::909::TaskManager.Task::(_doAbort) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Task._doAbort: force False
Thread-32393::DEBUG::2013-09-20 13:16:36,198::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32393::DEBUG::2013-09-20 13:16:36,198::task::579::TaskManager.Task::(_updateState) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state preparing -> state aborting
Thread-32393::DEBUG::2013-09-20 13:16:36,198::task::534::TaskManager.Task::(__state_aborting) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_aborting: recover policy none
Thread-32393::DEBUG::2013-09-20 13:16:36,198::task::579::TaskManager.Task::(_updateState) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state aborting -> state failed
Thread-32393::DEBUG::2013-09-20 13:16:36,198::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11': < ResourceRef 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', isValid: 'True' obj: 'None'>, 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3': < ResourceRef 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', isValid: 'True' obj: 'None'>}
Thread-32393::DEBUG::2013-09-20 13:16:36,198::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11'
Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' (0 active users)
Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free, finding out if anyone is waiting for it.
Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', Clearing records.
Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3'
Thread-32393::DEBUG::2013-09-20 13:16:36,200::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active users)
Thread-32393::DEBUG::2013-09-20 13:16:36,200::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding out if anyone is waiting for it.
Thread-32393::DEBUG::2013-09-20 13:16:36,200::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', Clearing records.
Thread-32393::ERROR::2013-09-20 13:16:36,200::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': "Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))", 'code': 661}}
Thread-32398::DEBUG::2013-09-20 13:16:48,921::task::579::TaskManager.Task::(_updateState) Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::moving from state init -> state preparing
Thread-32398::INFO::2013-09-20 13:16:48,922::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-32398::INFO::2013-09-20 13:16:48,922::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-32398::DEBUG::2013-09-20 13:16:48,922::task::1168::TaskManager.Task::(prepare) Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::finished: {}
Thread-32398::DEBUG::2013-09-20 13:16:48,922::task::579::TaskManager.Task::(_updateState) Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::moving from state preparing -> state finished

Steve Dainard
Infrastructure Manager
Miovision | Rethink Traffic
Either you can use the volume set .. option as mentioned in the wikipage --or -- If the Gluster volume is added / managed to the oVirt UI.. go to "Volumes" tab, select the gluster volume and click on "Optimize for virt. store". That should also set the volume options in addition to few other things
thanx, deepak
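To make that concrete, a minimal CLI sketch of what the above amounts to (assuming the rep2-virt volume from this thread, and that your gluster packages ship the stock virt option group; 36 is the vdsm uid / kvm gid that oVirt expects):

# apply the stock virt option group (eager-lock, remote-dio, cache settings, etc.)
gluster volume set rep2-virt group virt
# let vdsm (uid 36) and kvm (gid 36) own what lands on the bricks
gluster volume set rep2-virt storage.owner-uid 36
gluster volume set rep2-virt storage.owner-gid 36
# afterwards, 'gluster volume info rep2-virt' lists the reconfigured options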

On 09/20/2013 08:30 PM, Steve Dainard wrote:
Awesome, thanks guys. It's weird that the article tells you to set with 'key=value' rather than 'key value'; must be some legacy stuff.
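Concretely, the CLI takes the option name and its value as two separate arguments; a quick sketch, using storage.owner-uid purely as an example option:

gluster volume set rep2-virt storage.owner-uid 36    # 'key value' form: accepted
# gluster volume set rep2-virt storage.owner-uid=36  # the article's legacy 'key=value' form, which is what failed here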
Once those changes are in place, I hit a different error. Deepak, maybe you've seen this one on a new storage domain add:
*[root@ovirt-manager2 ~]# tail -f /var/log/ovirt-engine/engine.log*
2013-09-20 13:16:36,226 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp--127.0.0.1-8702-9) Command CreateStoragePoolVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
2013-09-20 13:16:36,229 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp--127.0.0.1-8702-9) FINISH, CreateStoragePoolVDSCommand, log id: 672635cc
2013-09-20 13:16:36,231 ERROR [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand] (ajp--127.0.0.1-8702-9) Command org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) (Failed with VDSM error AcquireHostIdFailure and code 661)
2013-09-20 13:16:36,296 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) Correlation ID: 11070337, Call Stack: null, Custom Event ID: -1, Message: Failed to attach Storage Domains to Data Center Default. (User: admin@internal)
2013-09-20 13:16:36,299 INFO [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand] (ajp--127.0.0.1-8702-9) Lock freed to object EngineLock [exclusiveLocks= key: 5849b030-626e-47cb-ad90-3ce782d831b3 value: POOL , sharedLocks= ]
2013-09-20 13:16:36,387 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating CHANGED_ENTITY of org.ovirt.engine.core.common.businessentities.StoragePool; snapshot: id=5849b030-626e-47cb-ad90-3ce782d831b3.
2013-09-20 13:16:36,398 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, storageId = 17d21ac7-5859-4f25-8de7-2a9433d50c11.
2013-09-20 13:16:36,425 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating CHANGED_ENTITY of org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: id=17d21ac7-5859-4f25-8de7-2a9433d50c11.
2013-09-20 13:16:36,464 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) Correlation ID: 302ae6eb, Job ID: 014ec59b-e6d7-4e5e-b588-4fb0dfa8f1c8, Call Stack: null, Custom Event ID: -1, Message: Failed to attach Storage Domain rep2-virt to Data Center Default. (User: admin@internal)
*[root@ovirt001 ~]# tail -f /var/log/vdsm/vdsm.log*
Thread-32374::DEBUG::2013-09-20 13:16:18,107::task::579::TaskManager.Task::(_updateState) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::moving from state init -> state preparing
Thread-32374::INFO::2013-09-20 13:16:18,107::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-32374::INFO::2013-09-20 13:16:18,107::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::1168::TaskManager.Task::(prepare) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::finished: {}
Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::579::TaskManager.Task::(_updateState) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::moving from state preparing -> state finished
Thread-32374::DEBUG::2013-09-20 13:16:18,108::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32374::DEBUG::2013-09-20 13:16:18,108::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::974::TaskManager.Task::(_decref) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::ref 0 aborting False
Thread-32379::DEBUG::2013-09-20 13:16:29,509::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32379::DEBUG::2013-09-20 13:16:29,510::task::579::TaskManager.Task::(_updateState) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::moving from state init -> state preparing
Thread-32379::INFO::2013-09-20 13:16:29,510::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
Thread-32379::DEBUG::2013-09-20 13:16:29,516::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-32379::DEBUG::2013-09-20 13:16:29,523::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids: ()
Thread-32379::DEBUG::2013-09-20 13:16:29,523::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain}
Thread-32379::INFO::2013-09-20 13:16:29,523::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-32379::DEBUG::2013-09-20 13:16:29,523::task::1168::TaskManager.Task::(prepare) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-32379::DEBUG::2013-09-20 13:16:29,524::task::579::TaskManager.Task::(_updateState) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::moving from state preparing -> state finished
Thread-32379::DEBUG::2013-09-20 13:16:29,524::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32379::DEBUG::2013-09-20 13:16:29,524::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32379::DEBUG::2013-09-20 13:16:29,524::task::974::TaskManager.Task::(_decref) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::ref 0 aborting False
Thread-32382::DEBUG::2013-09-20 13:16:29,888::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32382::DEBUG::2013-09-20 13:16:29,888::task::579::TaskManager.Task::(_updateState) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::moving from state init -> state preparing
Thread-32382::INFO::2013-09-20 13:16:29,889::logUtils::44::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=7, sdUUID='17d21ac7-5859-4f25-8de7-2a9433d50c11', domainName='rep2-virt', typeSpecificArg='192.168.1.1:rep2-virt', domClass=1, domVersion='3', options=None)
Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::807::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::809::SamplingMethod::(__call__) Got in to sampling method
Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::807::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::809::SamplingMethod::(__call__) Got in to sampling method
Thread-32382::DEBUG::2013-09-20 13:16:29,889::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
Thread-32382::DEBUG::2013-09-20 13:16:29,904::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
Thread-32382::DEBUG::2013-09-20 13:16:29,904::misc::817::SamplingMethod::(__call__) Returning last result
Thread-32382::DEBUG::2013-09-20 13:16:32,931::multipath::111::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
Thread-32382::DEBUG::2013-09-20 13:16:33,255::multipath::111::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::483::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::485::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::494::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::496::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,257::lvm::514::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,257::lvm::516::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,257::misc::817::SamplingMethod::(__call__) Returning last result
Thread-32382::ERROR::2013-09-20 13:16:33,257::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32382::ERROR::2013-09-20 13:16:33,257::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32382::DEBUG::2013-09-20 13:16:33,258::lvm::374::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-32382::DEBUG::2013-09-20 13:16:33,259::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free 17d21ac7-5859-4f25-8de7-2a9433d50c11' (cwd None)
Thread-32382::DEBUG::2013-09-20 13:16:33,285::lvm::311::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' Volume group "17d21ac7-5859-4f25-8de7-2a9433d50c11" not found\n'; <rc> = 5
Thread-32382::WARNING::2013-09-20 13:16:33,286::lvm::379::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group "17d21ac7-5859-4f25-8de7-2a9433d50c11" not found']
Thread-32382::DEBUG::2013-09-20 13:16:33,286::lvm::403::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-32382::ERROR::2013-09-20 13:16:33,295::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: ('17d21ac7-5859-4f25-8de7-2a9433d50c11',)
Thread-32382::INFO::2013-09-20 13:16:33,295::nfsSD::69::Storage.StorageDomain::(create) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11 domainName=rep2-virt remotePath=192.168.1.1:rep2-virt domClass=1
Thread-32382::DEBUG::2013-09-20 13:16:33,430::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-32382::DEBUG::2013-09-20 13:16:33,445::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=[]
Thread-32382::WARNING::2013-09-20 13:16:33,446::persistentDict::256::Storage.PersistentDict::(refresh) data has no embedded checksum - trust it as it is
Thread-32382::DEBUG::2013-09-20 13:16:33,447::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction
Thread-32382::DEBUG::2013-09-20 13:16:33,448::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes
Thread-32382::DEBUG::2013-09-20 13:16:33,449::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32382::DEBUG::2013-09-20 13:16:33,454::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction
Thread-32382::DEBUG::2013-09-20 13:16:33,457::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/192.168.1.1:rep2-virt/17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32382::DEBUG::2013-09-20 13:16:33,457::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-32382::DEBUG::2013-09-20 13:16:33,469::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32382::DEBUG::2013-09-20 13:16:33,472::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
Thread-32382::DEBUG::2013-09-20 13:16:33,472::resourceManager::420::ResourceManager::(registerNamespace) Registering namespace '17d21ac7-5859-4f25-8de7-2a9433d50c11_imageNS'
Thread-32382::DEBUG::2013-09-20 13:16:33,473::resourceManager::420::ResourceManager::(registerNamespace) Registering namespace '17d21ac7-5859-4f25-8de7-2a9433d50c11_volumeNS'
Thread-32382::DEBUG::2013-09-20 13:16:33,473::clusterlock::137::initSANLock::(initSANLock) Initializing SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32387::DEBUG::2013-09-20 13:16:33,717::task::579::TaskManager.Task::(_updateState) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::moving from state init -> state preparing
Thread-32387::INFO::2013-09-20 13:16:33,718::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-32387::INFO::2013-09-20 13:16:33,718::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-32387::DEBUG::2013-09-20 13:16:33,718::task::1168::TaskManager.Task::(prepare) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::finished: {}
Thread-32387::DEBUG::2013-09-20 13:16:33,718::task::579::TaskManager.Task::(_updateState) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::moving from state preparing -> state finished
Thread-32387::DEBUG::2013-09-20 13:16:33,718::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32387::DEBUG::2013-09-20 13:16:33,718::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32387::DEBUG::2013-09-20 13:16:33,719::task::974::TaskManager.Task::(_decref) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::ref 0 aborting False
Thread-32382::ERROR::2013-09-20 13:16:34,126::clusterlock::145::initSANLock::(initSANLock) Cannot initialize SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/clusterlock.py", line 140, in initSANLock
    sanlock.init_lockspace(sdUUID, idsPath)
SanlockException: (22, 'Sanlock lockspace init failure', 'Invalid argument')
Thread-32382::WARNING::2013-09-20 13:16:34,127::sd::428::Storage.StorageDomain::(initSPMlease) lease did not initialize successfully
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sd.py", line 423, in initSPMlease
    self._clusterLock.initLock()
  File "/usr/share/vdsm/storage/clusterlock.py", line 163, in initLock
    initSANLock(self._sdUUID, self._idsPath, self._leasesPath)
  File "/usr/share/vdsm/storage/clusterlock.py", line 146, in initSANLock
    raise se.ClusterLockInitError()
ClusterLockInitError: Could not initialize cluster lock: ()
Thread-32382::DEBUG::2013-09-20 13:16:34,127::hsm::2624::Storage.HSM::(createStorageDomain) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain, 17d21ac7-5859-4f25-8de7-2a9433d50c11: storage.glusterSD.findDomain}
Thread-32382::INFO::2013-09-20 13:16:34,128::logUtils::47::dispatcher::(wrapper) Run and protect: createStorageDomain, Return response: None
Thread-32382::DEBUG::2013-09-20 13:16:34,128::task::1168::TaskManager.Task::(prepare) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::finished: None
Thread-32382::DEBUG::2013-09-20 13:16:34,128::task::579::TaskManager.Task::(_updateState) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::moving from state preparing -> state finished
Thread-32382::DEBUG::2013-09-20 13:16:34,128::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32382::DEBUG::2013-09-20 13:16:34,128::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32382::DEBUG::2013-09-20 13:16:34,129::task::974::TaskManager.Task::(_decref) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::ref 0 aborting False
Thread-32389::DEBUG::2013-09-20 13:16:34,219::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32389::DEBUG::2013-09-20 13:16:34,219::task::579::TaskManager.Task::(_updateState) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::moving from state init -> state preparing
Thread-32389::INFO::2013-09-20 13:16:34,220::logUtils::44::dispatcher::(wrapper) Run and protect: getStorageDomainStats(sdUUID='17d21ac7-5859-4f25-8de7-2a9433d50c11', options=None)
Thread-32389::DEBUG::2013-09-20 13:16:34,220::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`9f37d808-9ad2-4c06-99ef-449b43049e80`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2784' at 'getStorageDomainStats'
Thread-32389::DEBUG::2013-09-20 13:16:34,220::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' for lock type 'shared'
Thread-32389::DEBUG::2013-09-20 13:16:34,221::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free. Now locking as 'shared' (1 active user)
Thread-32389::DEBUG::2013-09-20 13:16:34,221::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`9f37d808-9ad2-4c06-99ef-449b43049e80`::Granted request
Thread-32389::DEBUG::2013-09-20 13:16:34,221::task::811::TaskManager.Task::(resourceAcquired) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::_resourcesAcquired: Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11 (shared)
Thread-32389::DEBUG::2013-09-20 13:16:34,221::task::974::TaskManager.Task::(_decref) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::ref 1 aborting False
Thread-32389::INFO::2013-09-20 13:16:34,223::logUtils::47::dispatcher::(wrapper) Run and protect: getStorageDomainStats, Return response: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '14182986809344', 'disktotal': '14199600185344', 'mdafree': 0}}
Thread-32389::DEBUG::2013-09-20 13:16:34,223::task::1168::TaskManager.Task::(prepare) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::finished: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '14182986809344', 'disktotal': '14199600185344', 'mdafree': 0}}
Thread-32389::DEBUG::2013-09-20 13:16:34,223::task::579::TaskManager.Task::(_updateState) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::moving from state preparing -> state finished
Thread-32389::DEBUG::2013-09-20 13:16:34,223::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11': < ResourceRef 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', isValid: 'True' obj: 'None'>}
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11'
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' (0 active users)
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free, finding out if anyone is waiting for it.
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', Clearing records.
Thread-32389::DEBUG::2013-09-20 13:16:34,225::task::974::TaskManager.Task::(_decref) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::ref 0 aborting False
Thread-32390::DEBUG::2013-09-20 13:16:35,099::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32390::DEBUG::2013-09-20 13:16:35,099::task::579::TaskManager.Task::(_updateState) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::moving from state init -> state preparing
Thread-32390::INFO::2013-09-20 13:16:35,099::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}], options=None)
Thread-32390::DEBUG::2013-09-20 13:16:35,105::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-32390::DEBUG::2013-09-20 13:16:35,112::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', '17d21ac7-5859-4f25-8de7-2a9433d50c11')
Thread-32390::DEBUG::2013-09-20 13:16:35,113::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain, 17d21ac7-5859-4f25-8de7-2a9433d50c11: storage.glusterSD.findDomain}
Thread-32390::INFO::2013-09-20 13:16:35,113::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}]}
Thread-32390::DEBUG::2013-09-20 13:16:35,113::task::1168::TaskManager.Task::(prepare) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::finished: {'statuslist': [{'status': 0, 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}]}
Thread-32390::DEBUG::2013-09-20 13:16:35,113::task::579::TaskManager.Task::(_updateState) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::moving from state preparing -> state finished
Thread-32390::DEBUG::2013-09-20 13:16:35,113::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32390::DEBUG::2013-09-20 13:16:35,114::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32390::DEBUG::2013-09-20 13:16:35,114::task::974::TaskManager.Task::(_decref) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::ref 0 aborting False
Thread-32393::DEBUG::2013-09-20 13:16:35,148::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32393::DEBUG::2013-09-20 13:16:35,148::task::579::TaskManager.Task::(_updateState) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state init -> state preparing
Thread-32393::INFO::2013-09-20 13:16:35,148::logUtils::44::dispatcher::(wrapper) Run and protect: createStoragePool(poolType=None, spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', poolName='Default', masterDom='17d21ac7-5859-4f25-8de7-2a9433d50c11', domList=['17d21ac7-5859-4f25-8de7-2a9433d50c11'], masterVersion=9, lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60, ioOpTimeoutSec=10, leaseRetries=3, options=None)
Thread-32393::INFO::2013-09-20 13:16:35,149::fileSD::315::Storage.StorageDomain::(validate) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32393::DEBUG::2013-09-20 13:16:35,161::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`31060ad0-6633-4bbf-a859-b3f0c27af760`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '954' at 'createStoragePool'
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' for lock type 'exclusive'
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free. Now locking as 'exclusive' (1 active user)
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`31060ad0-6633-4bbf-a859-b3f0c27af760`::Granted request
Thread-32393::DEBUG::2013-09-20 13:16:35,163::task::811::TaskManager.Task::(resourceAcquired) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_resourcesAcquired: Storage.5849b030-626e-47cb-ad90-3ce782d831b3 (exclusive)
Thread-32393::DEBUG::2013-09-20 13:16:35,163::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting False
Thread-32393::DEBUG::2013-09-20 13:16:35,163::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`db5f52a0-d455-419c-b8a5-86fc6b695571`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '956' at 'createStoragePool'
Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' for lock type 'exclusive'
Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free. Now locking as 'exclusive' (1 active user)
Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`db5f52a0-d455-419c-b8a5-86fc6b695571`::Granted request
Thread-32393::DEBUG::2013-09-20 13:16:35,165::task::811::TaskManager.Task::(resourceAcquired) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_resourcesAcquired: Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11 (exclusive)
Thread-32393::DEBUG::2013-09-20 13:16:35,165::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting False
Thread-32393::INFO::2013-09-20 13:16:35,166::sp::592::Storage.StoragePool::(create) spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 poolName=Default master_sd=17d21ac7-5859-4f25-8de7-2a9433d50c11 domList=['17d21ac7-5859-4f25-8de7-2a9433d50c11'] masterVersion=9 {'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3, 'LOCKRENEWALINTERVALSEC': 5}
Thread-32393::INFO::2013-09-20 13:16:35,166::fileSD::315::Storage.StorageDomain::(validate) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32393::DEBUG::2013-09-20 13:16:35,177::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::DEBUG::2013-09-20 13:16:35,188::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::WARNING::2013-09-20 13:16:35,189::fileUtils::167::Storage.fileUtils::(createdir) Dir /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3 already exists
Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction
Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes
Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=61b814a65ea3ede1f0ae1d58e139adc06bf9eda5']
Thread-32393::DEBUG::2013-09-20 13:16:35,194::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction
Thread-32393::INFO::2013-09-20 13:16:35,194::clusterlock::174::SANLock::(acquireHostId) Acquiring host id for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 (id: 250)
Thread-32393::ERROR::2013-09-20 13:16:36,196::task::850::TaskManager.Task::(_setError) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 960, in createStoragePool
    masterVersion, leaseParams)
  File "/usr/share/vdsm/storage/sp.py", line 617, in create
    self._acquireTemporaryClusterLock(msdUUID, leaseParams)
  File "/usr/share/vdsm/storage/sp.py", line 559, in _acquireTemporaryClusterLock
    msd.acquireHostId(self.id)
  File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId
    self._clusterLock.acquireHostId(hostId, async)
  File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId
    raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
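For anyone who lands on the same SanlockException(22, ...) with a Gluster domain, a few stock checks that narrow it down; the paths come from the logs above, 36:36 is oVirt's vdsm:kvm ownership, and the O_DIRECT/remote-dio angle is an assumption worth verifying against your gluster version:

# sanlock's own log usually names the syscall that returned EINVAL (errno 22)
tail -n 50 /var/log/sanlock.log
sanlock client status
# the lease files vdsm just created; both sanlock and vdsm (36:36) must be able to open them
ls -l /rhev/data-center/mnt/glusterSD/192.168.1.1:rep2-virt/17d21ac7-5859-4f25-8de7-2a9433d50c11/dom_md/
# EINVAL on a fuse mount is often an O_DIRECT write being refused rather than permissions;
# the virt option group / "Optimize for virt. store" enables network.remote-dio for exactly this
gluster volume info rep2-virt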
was this resolved?

Hi Itamar, still an issue for me.

*Steve Dainard* | IT Infrastructure Manager | Miovision <http://miovision.com/>

On Wed, Oct 2, 2013 at 3:23 PM, Itamar Heim <iheim@redhat.com> wrote:
was this resolved?
Thread-32389::DEBUG::2013-09-**20 13:16:34,225::task::974::**TaskManager.Task::(_decref) Task=`a0d5d4d6-dcb7-4293-bf8a-**cf1e2204f586`::ref 0 aborting False Thread-32390::DEBUG::2013-09-**20 13:16:35,099::BindingXMLRPC::**177::vds::(wrapper) client [10.0.0.34] Thread-32390::DEBUG::2013-09-**20 13:16:35,099::task::579::**TaskManager.Task::(_**updateState) Task=`089c5f71-9cbb-4626-8f2d-**3ed3547a98cd`::moving from state init -> state preparing Thread-32390::INFO::2013-09-20 13:16:35,099::logUtils::44::**dispatcher::(wrapper) Run and protect: connectStorageServer(domType=**7, spUUID='00000000-0000-0000-**0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': 'cecee482-87e1-4ecc-8bda-**0e0ec84d7792'}], options=None) Thread-32390::DEBUG::2013-09-**20 13:16:35,105::hsm::2333::**Storage.HSM::(__**prefetchDomains) glusterDomPath: glusterSD/* Thread-32390::DEBUG::2013-09-**20 13:16:35,112::hsm::2345::**Storage.HSM::(__**prefetchDomains) Found SD uuids: ('17d21ac7-5859-4f25-8de7-**2a9433d50c11', '17d21ac7-5859-4f25-8de7-**2a9433d50c11') Thread-32390::DEBUG::2013-09-**20 13:16:35,113::hsm::2396::**Storage.HSM::(**connectStorageServer) knownSDs: {b72b61d1-e11c-496d-ad3a-**6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-**1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-**9f08fa16869c: storage.glusterSD.findDomain, 17d21ac7-5859-4f25-8de7-**2a9433d50c11: storage.glusterSD.findDomain} Thread-32390::INFO::2013-09-20 13:16:35,113::logUtils::47::**dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': 'cecee482-87e1-4ecc-8bda-**0e0ec84d7792'}]} Thread-32390::DEBUG::2013-09-**20 13:16:35,113::task::1168::**TaskManager.Task::(prepare) Task=`089c5f71-9cbb-4626-8f2d-**3ed3547a98cd`::finished: {'statuslist': [{'status': 0, 'id': 'cecee482-87e1-4ecc-8bda-**0e0ec84d7792'}]} Thread-32390::DEBUG::2013-09-**20 13:16:35,113::task::579::**TaskManager.Task::(_**updateState) Task=`089c5f71-9cbb-4626-8f2d-**3ed3547a98cd`::moving from state preparing -> state finished Thread-32390::DEBUG::2013-09-**20 13:16:35,113::resourceManager:**:939::ResourceManager.Owner::(** releaseAll) Owner.releaseAll requests {} resources {} Thread-32390::DEBUG::2013-09-**20 13:16:35,114::resourceManager:**:976::ResourceManager.Owner::(** cancelAll) Owner.cancelAll requests {} Thread-32390::DEBUG::2013-09-**20 13:16:35,114::task::974::**TaskManager.Task::(_decref) Task=`089c5f71-9cbb-4626-8f2d-**3ed3547a98cd`::ref 0 aborting False Thread-32393::DEBUG::2013-09-**20 13:16:35,148::BindingXMLRPC::**177::vds::(wrapper) client [10.0.0.34] Thread-32393::DEBUG::2013-09-**20 13:16:35,148::task::579::**TaskManager.Task::(_**updateState) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::moving from state init -> state preparing Thread-32393::INFO::2013-09-20 13:16:35,148::logUtils::44::**dispatcher::(wrapper) Run and protect: createStoragePool(poolType=**None, spUUID='5849b030-626e-47cb-**ad90-3ce782d831b3', poolName='Default', masterDom='17d21ac7-5859-4f25-**8de7-2a9433d50c11', domList=['17d21ac7-5859-4f25-**8de7-2a9433d50c11'], masterVersion=9, lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60, ioOpTimeoutSec=10, leaseRetries=3, options=None) Thread-32393::INFO::2013-09-20 13:16:35,149::fileSD::315::**Storage.StorageDomain::(**validate) sdUUID=17d21ac7-5859-4f25-**8de7-2a9433d50c11 Thread-32393::DEBUG::2013-09-**20 
13:16:35,161::persistentDict::**234::Storage.PersistentDict::(**refresh) read lines (FileMetadataRW)=['CLASS=Data'**, 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-**virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-**8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=**2b07fbc8c65e20eef5180ab785016b**de543c6746'] Thread-32393::DEBUG::2013-09-**20 13:16:35,162::resourceManager:**:197::ResourceManager.Request:** :(__init__) ResName=`Storage.5849b030-**626e-47cb-ad90-3ce782d831b3`** ReqID=`31060ad0-6633-4bbf-**a859-b3f0c27af760`::Request was made in '/usr/share/vdsm/storage/hsm.**py' line '954' at 'createStoragePool' Thread-32393::DEBUG::2013-09-**20 13:16:35,162::resourceManager:**:541::ResourceManager::(** registerResource) Trying to register resource 'Storage.5849b030-626e-47cb-**ad90-3ce782d831b3' for lock type 'exclusive' Thread-32393::DEBUG::2013-09-**20 13:16:35,162::resourceManager:**:600::ResourceManager::(** registerResource) Resource 'Storage.5849b030-626e-47cb-**ad90-3ce782d831b3' is free. Now locking as 'exclusive' (1 active user) Thread-32393::DEBUG::2013-09-**20 13:16:35,162::resourceManager:**:237::ResourceManager.Request:**:(grant) ResName=`Storage.5849b030-**626e-47cb-ad90-3ce782d831b3`** ReqID=`31060ad0-6633-4bbf-**a859-b3f0c27af760`::Granted request Thread-32393::DEBUG::2013-09-**20 13:16:35,163::task::811::**TaskManager.Task::(**resourceAcquired) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::_**resourcesAcquired: Storage.5849b030-626e-47cb-**ad90-3ce782d831b3 (exclusive) Thread-32393::DEBUG::2013-09-**20 13:16:35,163::task::974::**TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::ref 1 aborting False Thread-32393::DEBUG::2013-09-**20 13:16:35,163::resourceManager:**:197::ResourceManager.Request:** :(__init__) ResName=`Storage.17d21ac7-**5859-4f25-8de7-2a9433d50c11`** ReqID=`db5f52a0-d455-419c-**b8a5-86fc6b695571`::Request was made in '/usr/share/vdsm/storage/hsm.**py' line '956' at 'createStoragePool' Thread-32393::DEBUG::2013-09-**20 13:16:35,164::resourceManager:**:541::ResourceManager::(** registerResource) Trying to register resource 'Storage.17d21ac7-5859-4f25-**8de7-2a9433d50c11' for lock type 'exclusive' Thread-32393::DEBUG::2013-09-**20 13:16:35,164::resourceManager:**:600::ResourceManager::(** registerResource) Resource 'Storage.17d21ac7-5859-4f25-**8de7-2a9433d50c11' is free. 
Now locking as 'exclusive' (1 active user) Thread-32393::DEBUG::2013-09-**20 13:16:35,164::resourceManager:**:237::ResourceManager.Request:**:(grant) ResName=`Storage.17d21ac7-**5859-4f25-8de7-2a9433d50c11`** ReqID=`db5f52a0-d455-419c-**b8a5-86fc6b695571`::Granted request Thread-32393::DEBUG::2013-09-**20 13:16:35,165::task::811::**TaskManager.Task::(**resourceAcquired) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::_**resourcesAcquired: Storage.17d21ac7-5859-4f25-**8de7-2a9433d50c11 (exclusive) Thread-32393::DEBUG::2013-09-**20 13:16:35,165::task::974::**TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::ref 1 aborting False Thread-32393::INFO::2013-09-20 13:16:35,166::sp::592::**Storage.StoragePool::(create) spUUID=5849b030-626e-47cb-**ad90-3ce782d831b3 poolName=Default master_sd=17d21ac7-5859-4f25-**8de7-2a9433d50c11 domList=['17d21ac7-5859-4f25-**8de7-2a9433d50c11'] masterVersion=9 {'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3, 'LOCKRENEWALINTERVALSEC': 5} Thread-32393::INFO::2013-09-20 13:16:35,166::fileSD::315::**Storage.StorageDomain::(**validate) sdUUID=17d21ac7-5859-4f25-**8de7-2a9433d50c11 Thread-32393::DEBUG::2013-09-**20 13:16:35,177::persistentDict::**234::Storage.PersistentDict::(**refresh) read lines (FileMetadataRW)=['CLASS=Data'**, 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-**virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-**8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=**2b07fbc8c65e20eef5180ab785016b**de543c6746'] Thread-32393::DEBUG::2013-09-**20 13:16:35,188::persistentDict::**234::Storage.PersistentDict::(**refresh) read lines (FileMetadataRW)=['CLASS=Data'**, 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-**virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-**8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=**2b07fbc8c65e20eef5180ab785016b**de543c6746'] Thread-32393::WARNING::2013-**09-20 13:16:35,189::fileUtils::167::**Storage.fileUtils::(createdir) Dir /rhev/data-center/5849b030-**626e-47cb-ad90-3ce782d831b3 already exists Thread-32393::DEBUG::2013-09-**20 13:16:35,189::persistentDict::**167::Storage.PersistentDict::(** transaction) Starting transaction Thread-32393::DEBUG::2013-09-**20 13:16:35,189::persistentDict::**173::Storage.PersistentDict::(** transaction) Flushing changes Thread-32393::DEBUG::2013-09-**20 13:16:35,189::persistentDict::**299::Storage.PersistentDict::(**flush) about to write lines (FileMetadataRW)=['CLASS=Data'**, 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-**virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-**8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=**61b814a65ea3ede1f0ae1d58e139ad**c06bf9eda5'] Thread-32393::DEBUG::2013-09-**20 13:16:35,194::persistentDict::**175::Storage.PersistentDict::(** transaction) Finished transaction Thread-32393::INFO::2013-09-20 13:16:35,194::clusterlock::**174::SANLock::(acquireHostId) Acquiring host id for domain 17d21ac7-5859-4f25-8de7-**2a9433d50c11 (id: 250) Thread-32393::ERROR::2013-09-**20 13:16:36,196::task::850::**TaskManager.Task::(_setError) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::Unexpected error Traceback (most recent call last): File 
"/usr/share/vdsm/storage/task.**py", line 857, in _run return fn(*args, **kargs) File "/usr/share/vdsm/logUtils.py", line 45, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/storage/hsm.**py", line 960, in createStoragePool masterVersion, leaseParams) File "/usr/share/vdsm/storage/sp.**py", line 617, in create self._**acquireTemporaryClusterLock(**msdUUID, leaseParams) File "/usr/share/vdsm/storage/sp.**py", line 559, in _acquireTemporaryClusterLock msd.acquireHostId(self.id <http://self.id>)
File "/usr/share/vdsm/storage/sd.**py", line 458, in acquireHostId self._clusterLock.**acquireHostId(hostId, async) File "/usr/share/vdsm/storage/**clusterlock.py", line 189, in acquireHostId raise se.AcquireHostIdFailure(self._**sdUUID, e) AcquireHostIdFailure: Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-**2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) Thread-32393::DEBUG::2013-09-**20 13:16:36,196::task::869::**TaskManager.Task::(_run) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::Task._run: 72348d40-8442-4dbf-bc66-**1d354da5fc31 (None, '5849b030-626e-47cb-ad90-**3ce782d831b3', 'Default', '17d21ac7-5859-4f25-8de7-**2a9433d50c11', ['17d21ac7-5859-4f25-8de7-**2a9433d50c11'], 9, None, 5, 60, 10, 3) {} failed - stopping task Thread-32393::DEBUG::2013-09-**20 13:16:36,197::task::1194::**TaskManager.Task::(stop) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::stopping in state preparing (force False) Thread-32393::DEBUG::2013-09-**20 13:16:36,197::task::974::**TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::ref 1 aborting True Thread-32393::INFO::2013-09-20 13:16:36,197::task::1151::**TaskManager.Task::(prepare) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::aborting: Task is aborted: 'Cannot acquire host id' - code 661 Thread-32393::DEBUG::2013-09-**20 13:16:36,197::task::1156::**TaskManager.Task::(prepare) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::Prepare: aborted: Cannot acquire host id Thread-32393::DEBUG::2013-09-**20 13:16:36,197::task::974::**TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::ref 0 aborting True Thread-32393::DEBUG::2013-09-**20 13:16:36,197::task::909::**TaskManager.Task::(_doAbort) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::Task._doAbort: force False Thread-32393::DEBUG::2013-09-**20 13:16:36,198::resourceManager:**:976::ResourceManager.Owner::(** cancelAll) Owner.cancelAll requests {} Thread-32393::DEBUG::2013-09-**20 13:16:36,198::task::579::**TaskManager.Task::(_**updateState) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::moving from state preparing -> state aborting Thread-32393::DEBUG::2013-09-**20 13:16:36,198::task::534::**TaskManager.Task::(__state_**aborting) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::_aborting: recover policy none Thread-32393::DEBUG::2013-09-**20 13:16:36,198::task::579::**TaskManager.Task::(_**updateState) Task=`72348d40-8442-4dbf-bc66-**1d354da5fc31`::moving from state aborting -> state failed Thread-32393::DEBUG::2013-09-**20 13:16:36,198::resourceManager:**:939::ResourceManager.Owner::(** releaseAll) Owner.releaseAll requests {} resources {'Storage.17d21ac7-5859-4f25-**8de7-2a9433d50c11': < ResourceRef 'Storage.17d21ac7-5859-4f25-**8de7-2a9433d50c11', isValid: 'True' obj: 'None'>, 'Storage.5849b030-626e-47cb-**ad90-3ce782d831b3': < ResourceRef 'Storage.5849b030-626e-47cb-**ad90-3ce782d831b3', isValid: 'True' obj: 'None'>} Thread-32393::DEBUG::2013-09-**20 13:16:36,198::resourceManager:**:976::ResourceManager.Owner::(** cancelAll) Owner.cancelAll requests {} Thread-32393::DEBUG::2013-09-**20 13:16:36,199::resourceManager:**:615::ResourceManager::(** releaseResource) Trying to release resource 'Storage.17d21ac7-5859-4f25-** 8de7-2a9433d50c11' Thread-32393::DEBUG::2013-09-**20 13:16:36,199::resourceManager:**:634::ResourceManager::(** releaseResource) Released resource 'Storage.17d21ac7-5859-4f25-**8de7-2a9433d50c11' (0 active users) Thread-32393::DEBUG::2013-09-**20 13:16:36,199::resourceManager:**:640::ResourceManager::(** 
releaseResource) Resource 'Storage.17d21ac7-5859-4f25-**8de7-2a9433d50c11' is free, finding out if anyone is waiting for it. Thread-32393::DEBUG::2013-09-**20 13:16:36,199::resourceManager:**:648::ResourceManager::(** releaseResource) No one is waiting for resource 'Storage.17d21ac7-5859-4f25-**8de7-2a9433d50c11', Clearing records. Thread-32393::DEBUG::2013-09-**20 13:16:36,199::resourceManager:**:615::ResourceManager::(** releaseResource) Trying to release resource 'Storage.5849b030-626e-47cb-** ad90-3ce782d831b3' Thread-32393::DEBUG::2013-09-**20 13:16:36,200::resourceManager:**:634::ResourceManager::(** releaseResource) Released resource 'Storage.5849b030-626e-47cb-**ad90-3ce782d831b3' (0 active users) Thread-32393::DEBUG::2013-09-**20 13:16:36,200::resourceManager:**:640::ResourceManager::(** releaseResource) Resource 'Storage.5849b030-626e-47cb-**ad90-3ce782d831b3' is free, finding out if anyone is waiting for it. Thread-32393::DEBUG::2013-09-**20 13:16:36,200::resourceManager:**:648::ResourceManager::(** releaseResource) No one is waiting for resource 'Storage.5849b030-626e-47cb-**ad90-3ce782d831b3', Clearing records. Thread-32393::ERROR::2013-09-**20 13:16:36,200::dispatcher::67::**Storage.Dispatcher.Protect::(**run) {'status': {'message': "Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-**2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))", 'code': 661}} Thread-32398::DEBUG::2013-09-**20 13:16:48,921::task::579::**TaskManager.Task::(_**updateState) Task=`a5bce432-622b-499b-a216-**d9a1f876e3ca`::moving from state init -> state preparing Thread-32398::INFO::2013-09-20 13:16:48,922::logUtils::44::**dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-32398::INFO::2013-09-20 13:16:48,922::logUtils::47::**dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-32398::DEBUG::2013-09-**20 13:16:48,922::task::1168::**TaskManager.Task::(prepare) Task=`a5bce432-622b-499b-a216-**d9a1f876e3ca`::finished: {} Thread-32398::DEBUG::2013-09-**20 13:16:48,922::task::579::**TaskManager.Task::(_**updateState) Task=`a5bce432-622b-499b-a216-**d9a1f876e3ca`::moving from state preparing -> state finished
Steve Dainard
Infrastructure Manager
Miovision <http://miovision.com/> | Rethink Traffic
519-513-2407 ex.250 | 877-646-8476 (toll-free)
Blog <http://miovision.com/blog> | LinkedIn <https://www.linkedin.com/company/miovision-technologies> | Twitter <https://twitter.com/miovision> | Facebook <https://www.facebook.com/miovision>
Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON, Canada | N2C 1L3 This e-mail may contain information that is privileged or confidential. If you are not the intended recipient, please delete the e-mail and any attachments and notify us immediately.
On Fri, Sep 20, 2013 at 12:23 PM, Deepak C Shetty <deepakcs@linux.vnet.ibm.com> wrote:
Either you can use the 'volume set ...' option as mentioned in the wiki page, or, if the Gluster volume is added/managed in the oVirt UI, go to the "Volumes" tab, select the gluster volume and click on "Optimize for virt. store". That should also set the volume options, in addition to a few other things.
thanx, deepak
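For reference, a minimal sketch of the 'volume set' route Deepak mentions, run on a gluster peer. The exact option list lives in the wiki page referenced above; the options shown here are the ones commonly recommended for oVirt virt stores at the time, not quoted from this thread, and rep2-virt is the volume name used throughout this thread:

    gluster volume set rep2-virt storage.owner-uid 36     # vdsm user owns the bricks' files
    gluster volume set rep2-virt storage.owner-gid 36     # kvm group
    gluster volume set rep2-virt server.allow-insecure on

The "Optimize for virt. store" button applies an equivalent set of options in one click.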
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users
Was this resolved?

The root cause seems to be a SANlock issue:

Thread-32382::ERROR::2013-09-20 13:16:34,126::clusterlock::145::initSANLock::(initSANLock) Cannot initialize SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 Traceback (most recent call last): File "/usr/share/vdsm/storage/clusterlock.py", line 140, in initSANLock sanlock.init_lockspace(sdUUID, idsPath) SanlockException: (22, 'Sanlock lockspace init failure', 'Invalid argument') Thread-32382::WARNING::2013-09-20 13:16:34,127::sd::428::Storage.StorageDomain::(initSPMlease) lease did not initialize successfully Traceback (most recent call last): File "/usr/share/vdsm/storage/sd.py", line 423, in initSPMlease self._clusterLock.initLock() File "/usr/share/vdsm/storage/clusterlock.py", line 163, in initLock initSANLock(self._sdUUID, self._idsPath, self._leasesPath) File "/usr/share/vdsm/storage/clusterlock.py", line 146, in initSANLock raise se.ClusterLockInitError() ClusterLockInitError: Could not initialize cluster lock: ()

Can you include /var/log/sanlock.log and /var/log/messages please?

----- Original Message -----
From: "Steve Dainard" <sdainard@miovision.com>
To: "Deepak C Shetty" <deepakcs@linux.vnet.ibm.com>
Cc: "users" <users@ovirt.org>
Sent: Friday, September 20, 2013 8:30:59 PM
Subject: Re: [Users] Ovirt 3.3 Fedora 19 add gluster storage permissions error
Awesome, thanks guys. It's weird that the article tells you to set options with 'key=value' rather than 'key value'; must be some legacy stuff.
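For reference, both forms gluster actually accepts are space-separated; a sketch (the rpc-auth option shown is the one commonly paired with these instructions, assumed rather than quoted from this thread):

    # CLI: positional "key value", no '=' sign
    gluster volume set rep2-virt server.allow-insecure on
    # /etc/glusterfs/glusterd.vol: "option key value" lines inside the
    # management volume stanza, followed by a glusterd restart:
    #   option rpc-auth-allow-insecure on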
Once those changes were in place I hit a different error. Deepak, maybe you've seen this one on a new storage domain add:
[root@ovirt-manager2 ~]# tail -f /var/log/ovirt-engine/engine.log 2013-09-20 13:16:36,226 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp--127.0.0.1-8702-9) Command CreateStoragePoolVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) 2013-09-20 13:16:36,229 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp--127.0.0.1-8702-9) FINISH, CreateStoragePoolVDSCommand, log id: 672635cc 2013-09-20 13:16:36,231 ERROR [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand] (ajp--127.0.0.1-8702-9) Command org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) (Failed with VDSM error AcquireHostIdFailure and code 661) 2013-09-20 13:16:36,296 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) Correlation ID: 11070337, Call Stack: null, Custom Event ID: -1, Message: Failed to attach Storage Domains to Data Center Default. (User: admin@internal) 2013-09-20 13:16:36,299 INFO [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand] (ajp--127.0.0.1-8702-9) Lock freed to object EngineLock [exclusiveLocks= key: 5849b030-626e-47cb-ad90-3ce782d831b3 value: POOL , sharedLocks= ] 2013-09-20 13:16:36,387 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating CHANGED_ENTITY of org.ovirt.engine.core.common.businessentities.StoragePool; snapshot: id=5849b030-626e-47cb-ad90-3ce782d831b3. 2013-09-20 13:16:36,398 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, storageId = 17d21ac7-5859-4f25-8de7-2a9433d50c11. 2013-09-20 13:16:36,425 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating CHANGED_ENTITY of org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: id=17d21ac7-5859-4f25-8de7-2a9433d50c11. 2013-09-20 13:16:36,464 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) Correlation ID: 302ae6eb, Job ID: 014ec59b-e6d7-4e5e-b588-4fb0dfa8f1c8, Call Stack: null, Custom Event ID: -1, Message: Failed to attach Storage Domain rep2-virt to Data Center Default. (User: admin@internal)
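When CreateStoragePoolVDS dies with AcquireHostIdFailure like this, sanlock's own view is often more telling than the engine log. A hedged sketch of ways to inspect it, using subcommands from the stock sanlock CLI; the ids path below is assumed from the standard <domain>/dom_md/ids layout and the mount point in the vdsm log that follows:

    sanlock client status      # lockspaces and resources sanlock currently holds
    sanlock client log_dump    # sanlock's recent internal debug log
    # dump the on-disk lockspace file of the suspect domain directly:
    sanlock direct dump /rhev/data-center/mnt/glusterSD/192.168.1.1:rep2-virt/17d21ac7-5859-4f25-8de7-2a9433d50c11/dom_md/ids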
[root@ovirt001 ~]# tail -f /var/log/vdsm/vdsm.log Thread-32374::DEBUG::2013-09-20 13:16:18,107::task::579::TaskManager.Task::(_updateState) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::moving from state init -> state preparing Thread-32374::INFO::2013-09-20 13:16:18,107::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-32374::INFO::2013-09-20 13:16:18,107::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::1168::TaskManager.Task::(prepare) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::finished: {} Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::579::TaskManager.Task::(_updateState) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::moving from state preparing -> state finished Thread-32374::DEBUG::2013-09-20 13:16:18,108::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-32374::DEBUG::2013-09-20 13:16:18,108::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::974::TaskManager.Task::(_decref) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::ref 0 aborting False Thread-32379::DEBUG::2013-09-20 13:16:29,509::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34] Thread-32379::DEBUG::2013-09-20 13:16:29,510::task::579::TaskManager.Task::(_updateState) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::moving from state init -> state preparing Thread-32379::INFO::2013-09-20 13:16:29,510::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None) Thread-32379::DEBUG::2013-09-20 13:16:29,516::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/* Thread-32379::DEBUG::2013-09-20 13:16:29,523::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids: () Thread-32379::DEBUG::2013-09-20 13:16:29,523::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain} Thread-32379::INFO::2013-09-20 13:16:29,523::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-32379::DEBUG::2013-09-20 13:16:29,523::task::1168::TaskManager.Task::(prepare) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-32379::DEBUG::2013-09-20 13:16:29,524::task::579::TaskManager.Task::(_updateState) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::moving from state preparing -> state finished Thread-32379::DEBUG::2013-09-20 13:16:29,524::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-32379::DEBUG::2013-09-20 13:16:29,524::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-32379::DEBUG::2013-09-20 13:16:29,524::task::974::TaskManager.Task::(_decref) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::ref 0 aborting False Thread-32382::DEBUG::2013-09-20 13:16:29,888::BindingXMLRPC::177::vds::(wrapper) 
client [10.0.0.34] Thread-32382::DEBUG::2013-09-20 13:16:29,888::task::579::TaskManager.Task::(_updateState) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::moving from state init -> state preparing Thread-32382::INFO::2013-09-20 13:16:29,889::logUtils::44::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=7, sdUUID='17d21ac7-5859-4f25-8de7-2a9433d50c11', domainName='rep2-virt', typeSpecificArg='192.168.1.1:rep2-virt', domClass=1, domVersion='3', options=None) Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::807::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::809::SamplingMethod::(__call__) Got in to sampling method Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::807::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::809::SamplingMethod::(__call__) Got in to sampling method Thread-32382::DEBUG::2013-09-20 13:16:29,889::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None) Thread-32382::DEBUG::2013-09-20 13:16:29,904::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21 Thread-32382::DEBUG::2013-09-20 13:16:29,904::misc::817::SamplingMethod::(__call__) Returning last result Thread-32382::DEBUG::2013-09-20 13:16:32,931::multipath::111::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None) Thread-32382::DEBUG::2013-09-20 13:16:33,255::multipath::111::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::483::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::485::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::494::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::496::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,257::lvm::514::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,257::lvm::516::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,257::misc::817::SamplingMethod::(__call__) Returning last result Thread-32382::ERROR::2013-09-20 13:16:33,257::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 Thread-32382::ERROR::2013-09-20 13:16:33,257::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 Thread-32382::DEBUG::2013-09-20 13:16:33,258::lvm::374::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,259::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " 
--noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free 17d21ac7-5859-4f25-8de7-2a9433d50c11' (cwd None) Thread-32382::DEBUG::2013-09-20 13:16:33,285::lvm::311::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' Volume group "17d21ac7-5859-4f25-8de7-2a9433d50c11" not found\n'; <rc> = 5 Thread-32382::WARNING::2013-09-20 13:16:33,286::lvm::379::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group "17d21ac7-5859-4f25-8de7-2a9433d50c11" not found'] Thread-32382::DEBUG::2013-09-20 13:16:33,286::lvm::403::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-32382::ERROR::2013-09-20 13:16:33,295::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 not found Traceback (most recent call last): File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: ('17d21ac7-5859-4f25-8de7-2a9433d50c11',) Thread-32382::INFO::2013-09-20 13:16:33,295::nfsSD::69::Storage.StorageDomain::(create) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11 domainName=rep2-virt remotePath=192.168.1.1:rep2-virt domClass=1 Thread-32382::DEBUG::2013-09-20 13:16:33,430::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend Thread-32382::DEBUG::2013-09-20 13:16:33,445::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=[] Thread-32382::WARNING::2013-09-20 13:16:33,446::persistentDict::256::Storage.PersistentDict::(refresh) data has no embedded checksum - trust it as it is Thread-32382::DEBUG::2013-09-20 13:16:33,447::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction Thread-32382::DEBUG::2013-09-20 13:16:33,448::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes Thread-32382::DEBUG::2013-09-20 13:16:33,449::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746'] Thread-32382::DEBUG::2013-09-20 13:16:33,454::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction Thread-32382::DEBUG::2013-09-20 13:16:33,457::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/192.168.1.1:rep2-virt/17d21ac7-5859-4f25-8de7-2a9433d50c11 Thread-32382::DEBUG::2013-09-20 13:16:33,457::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend Thread-32382::DEBUG::2013-09-20 13:16:33,469::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746'] Thread-32382::DEBUG::2013-09-20 
13:16:33,472::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images [] Thread-32382::DEBUG::2013-09-20 13:16:33,472::resourceManager::420::ResourceManager::(registerNamespace) Registering namespace '17d21ac7-5859-4f25-8de7-2a9433d50c11_imageNS' Thread-32382::DEBUG::2013-09-20 13:16:33,473::resourceManager::420::ResourceManager::(registerNamespace) Registering namespace '17d21ac7-5859-4f25-8de7-2a9433d50c11_volumeNS' Thread-32382::DEBUG::2013-09-20 13:16:33,473::clusterlock::137::initSANLock::(initSANLock) Initializing SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 Thread-32387::DEBUG::2013-09-20 13:16:33,717::task::579::TaskManager.Task::(_updateState) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::moving from state init -> state preparing Thread-32387::INFO::2013-09-20 13:16:33,718::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-32387::INFO::2013-09-20 13:16:33,718::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-32387::DEBUG::2013-09-20 13:16:33,718::task::1168::TaskManager.Task::(prepare) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::finished: {} Thread-32387::DEBUG::2013-09-20 13:16:33,718::task::579::TaskManager.Task::(_updateState) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::moving from state preparing -> state finished Thread-32387::DEBUG::2013-09-20 13:16:33,718::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-32387::DEBUG::2013-09-20 13:16:33,718::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-32387::DEBUG::2013-09-20 13:16:33,719::task::974::TaskManager.Task::(_decref) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::ref 0 aborting False Thread-32382::ERROR::2013-09-20 13:16:34,126::clusterlock::145::initSANLock::(initSANLock) Cannot initialize SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 Traceback (most recent call last): File "/usr/share/vdsm/storage/clusterlock.py", line 140, in initSANLock sanlock.init_lockspace(sdUUID, idsPath) SanlockException: (22, 'Sanlock lockspace init failure', 'Invalid argument') Thread-32382::WARNING::2013-09-20 13:16:34,127::sd::428::Storage.StorageDomain::(initSPMlease) lease did not initialize successfully Traceback (most recent call last): File "/usr/share/vdsm/storage/sd.py", line 423, in initSPMlease self._clusterLock.initLock() File "/usr/share/vdsm/storage/clusterlock.py", line 163, in initLock initSANLock(self._sdUUID, self._idsPath, self._leasesPath) File "/usr/share/vdsm/storage/clusterlock.py", line 146, in initSANLock raise se.ClusterLockInitError() ClusterLockInitError: Could not initialize cluster lock: () Thread-32382::DEBUG::2013-09-20 13:16:34,127::hsm::2624::Storage.HSM::(createStorageDomain) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain, 17d21ac7-5859-4f25-8de7-2a9433d50c11: storage.glusterSD.findDomain} Thread-32382::INFO::2013-09-20 13:16:34,128::logUtils::47::dispatcher::(wrapper) Run and protect: createStorageDomain, Return response: None Thread-32382::DEBUG::2013-09-20 13:16:34,128::task::1168::TaskManager.Task::(prepare) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::finished: None Thread-32382::DEBUG::2013-09-20 13:16:34,128::task::579::TaskManager.Task::(_updateState) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::moving from 
state preparing -> state finished Thread-32382::DEBUG::2013-09-20 13:16:34,128::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-32382::DEBUG::2013-09-20 13:16:34,128::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-32382::DEBUG::2013-09-20 13:16:34,129::task::974::TaskManager.Task::(_decref) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::ref 0 aborting False Thread-32389::DEBUG::2013-09-20 13:16:34,219::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34] Thread-32389::DEBUG::2013-09-20 13:16:34,219::task::579::TaskManager.Task::(_updateState) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::moving from state init -> state preparing Thread-32389::INFO::2013-09-20 13:16:34,220::logUtils::44::dispatcher::(wrapper) Run and protect: getStorageDomainStats(sdUUID='17d21ac7-5859-4f25-8de7-2a9433d50c11', options=None) Thread-32389::DEBUG::2013-09-20 13:16:34,220::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`9f37d808-9ad2-4c06-99ef-449b43049e80`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2784' at 'getStorageDomainStats' Thread-32389::DEBUG::2013-09-20 13:16:34,220::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' for lock type 'shared' Thread-32389::DEBUG::2013-09-20 13:16:34,221::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free. Now locking as 'shared' (1 active user) Thread-32389::DEBUG::2013-09-20 13:16:34,221::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`9f37d808-9ad2-4c06-99ef-449b43049e80`::Granted request Thread-32389::DEBUG::2013-09-20 13:16:34,221::task::811::TaskManager.Task::(resourceAcquired) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::_resourcesAcquired: Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11 (shared) Thread-32389::DEBUG::2013-09-20 13:16:34,221::task::974::TaskManager.Task::(_decref) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::ref 1 aborting False Thread-32389::INFO::2013-09-20 13:16:34,223::logUtils::47::dispatcher::(wrapper) Run and protect: getStorageDomainStats, Return response: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '14182986809344', 'disktotal': '14199600185344', 'mdafree': 0}} Thread-32389::DEBUG::2013-09-20 13:16:34,223::task::1168::TaskManager.Task::(prepare) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::finished: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '14182986809344', 'disktotal': '14199600185344', 'mdafree': 0}} Thread-32389::DEBUG::2013-09-20 13:16:34,223::task::579::TaskManager.Task::(_updateState) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::moving from state preparing -> state finished Thread-32389::DEBUG::2013-09-20 13:16:34,223::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11': < ResourceRef 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', isValid: 'True' obj: 'None'>} Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' 
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' (0 active users) Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free, finding out if anyone is waiting for it. Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', Clearing records. Thread-32389::DEBUG::2013-09-20 13:16:34,225::task::974::TaskManager.Task::(_decref) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::ref 0 aborting False Thread-32390::DEBUG::2013-09-20 13:16:35,099::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34] Thread-32390::DEBUG::2013-09-20 13:16:35,099::task::579::TaskManager.Task::(_updateState) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::moving from state init -> state preparing Thread-32390::INFO::2013-09-20 13:16:35,099::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}], options=None) Thread-32390::DEBUG::2013-09-20 13:16:35,105::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/* Thread-32390::DEBUG::2013-09-20 13:16:35,112::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', '17d21ac7-5859-4f25-8de7-2a9433d50c11') Thread-32390::DEBUG::2013-09-20 13:16:35,113::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain, 17d21ac7-5859-4f25-8de7-2a9433d50c11: storage.glusterSD.findDomain} Thread-32390::INFO::2013-09-20 13:16:35,113::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}]} Thread-32390::DEBUG::2013-09-20 13:16:35,113::task::1168::TaskManager.Task::(prepare) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::finished: {'statuslist': [{'status': 0, 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}]} Thread-32390::DEBUG::2013-09-20 13:16:35,113::task::579::TaskManager.Task::(_updateState) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::moving from state preparing -> state finished Thread-32390::DEBUG::2013-09-20 13:16:35,113::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-32390::DEBUG::2013-09-20 13:16:35,114::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-32390::DEBUG::2013-09-20 13:16:35,114::task::974::TaskManager.Task::(_decref) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::ref 0 aborting False Thread-32393::DEBUG::2013-09-20 13:16:35,148::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34] Thread-32393::DEBUG::2013-09-20 13:16:35,148::task::579::TaskManager.Task::(_updateState) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state init -> state preparing Thread-32393::INFO::2013-09-20 13:16:35,148::logUtils::44::dispatcher::(wrapper) Run and protect: createStoragePool(poolType=None, 
spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', poolName='Default', masterDom='17d21ac7-5859-4f25-8de7-2a9433d50c11', domList=['17d21ac7-5859-4f25-8de7-2a9433d50c11'], masterVersion=9, lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60, ioOpTimeoutSec=10, leaseRetries=3, options=None) Thread-32393::INFO::2013-09-20 13:16:35,149::fileSD::315::Storage.StorageDomain::(validate) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11 Thread-32393::DEBUG::2013-09-20 13:16:35,161::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746'] Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`31060ad0-6633-4bbf-a859-b3f0c27af760`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '954' at 'createStoragePool' Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' for lock type 'exclusive' Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free. Now locking as 'exclusive' (1 active user) Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`31060ad0-6633-4bbf-a859-b3f0c27af760`::Granted request Thread-32393::DEBUG::2013-09-20 13:16:35,163::task::811::TaskManager.Task::(resourceAcquired) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_resourcesAcquired: Storage.5849b030-626e-47cb-ad90-3ce782d831b3 (exclusive) Thread-32393::DEBUG::2013-09-20 13:16:35,163::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting False Thread-32393::DEBUG::2013-09-20 13:16:35,163::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`db5f52a0-d455-419c-b8a5-86fc6b695571`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '956' at 'createStoragePool' Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' for lock type 'exclusive' Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free. 
Now locking as 'exclusive' (1 active user) Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`db5f52a0-d455-419c-b8a5-86fc6b695571`::Granted request Thread-32393::DEBUG::2013-09-20 13:16:35,165::task::811::TaskManager.Task::(resourceAcquired) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_resourcesAcquired: Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11 (exclusive) Thread-32393::DEBUG::2013-09-20 13:16:35,165::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting False Thread-32393::INFO::2013-09-20 13:16:35,166::sp::592::Storage.StoragePool::(create) spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 poolName=Default master_sd=17d21ac7-5859-4f25-8de7-2a9433d50c11 domList=['17d21ac7-5859-4f25-8de7-2a9433d50c11'] masterVersion=9 {'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3, 'LOCKRENEWALINTERVALSEC': 5} Thread-32393::INFO::2013-09-20 13:16:35,166::fileSD::315::Storage.StorageDomain::(validate) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11 Thread-32393::DEBUG::2013-09-20 13:16:35,177::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746'] Thread-32393::DEBUG::2013-09-20 13:16:35,188::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746'] Thread-32393::WARNING::2013-09-20 13:16:35,189::fileUtils::167::Storage.fileUtils::(createdir) Dir /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3 already exists Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=61b814a65ea3ede1f0ae1d58e139adc06bf9eda5'] Thread-32393::DEBUG::2013-09-20 13:16:35,194::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction Thread-32393::INFO::2013-09-20 13:16:35,194::clusterlock::174::SANLock::(acquireHostId) Acquiring host id for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 (id: 250) Thread-32393::ERROR::2013-09-20 13:16:36,196::task::850::TaskManager.Task::(_setError) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 857, in _run return fn(*args, **kargs) File "/usr/share/vdsm/logUtils.py", line 45, in 
wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/storage/hsm.py", line 960, in createStoragePool masterVersion, leaseParams) File "/usr/share/vdsm/storage/sp.py", line 617, in create self._acquireTemporaryClusterLock(msdUUID, leaseParams) File "/usr/share/vdsm/storage/sp.py", line 559, in _acquireTemporaryClusterLock msd.acquireHostId( self.id ) File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId self._clusterLock.acquireHostId(hostId, async) File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId raise se.AcquireHostIdFailure(self._sdUUID, e) AcquireHostIdFailure: Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) Thread-32393::DEBUG::2013-09-20 13:16:36,196::task::869::TaskManager.Task::(_run) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Task._run: 72348d40-8442-4dbf-bc66-1d354da5fc31 (None, '5849b030-626e-47cb-ad90-3ce782d831b3', 'Default', '17d21ac7-5859-4f25-8de7-2a9433d50c11', ['17d21ac7-5859-4f25-8de7-2a9433d50c11'], 9, None, 5, 60, 10, 3) {} failed - stopping task Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::1194::TaskManager.Task::(stop) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::stopping in state preparing (force False) Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting True Thread-32393::INFO::2013-09-20 13:16:36,197::task::1151::TaskManager.Task::(prepare) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::aborting: Task is aborted: 'Cannot acquire host id' - code 661 Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::1156::TaskManager.Task::(prepare) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Prepare: aborted: Cannot acquire host id Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 0 aborting True Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::909::TaskManager.Task::(_doAbort) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Task._doAbort: force False Thread-32393::DEBUG::2013-09-20 13:16:36,198::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-32393::DEBUG::2013-09-20 13:16:36,198::task::579::TaskManager.Task::(_updateState) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state preparing -> state aborting Thread-32393::DEBUG::2013-09-20 13:16:36,198::task::534::TaskManager.Task::(__state_aborting) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_aborting: recover policy none Thread-32393::DEBUG::2013-09-20 13:16:36,198::task::579::TaskManager.Task::(_updateState) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state aborting -> state failed Thread-32393::DEBUG::2013-09-20 13:16:36,198::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11': < ResourceRef 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', isValid: 'True' obj: 'None'>, 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3': < ResourceRef 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', isValid: 'True' obj: 'None'>} Thread-32393::DEBUG::2013-09-20 13:16:36,198::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' Thread-32393::DEBUG::2013-09-20 
13:16:36,199::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' (0 active users) Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free, finding out if anyone is waiting for it. Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', Clearing records. Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' Thread-32393::DEBUG::2013-09-20 13:16:36,200::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active users) Thread-32393::DEBUG::2013-09-20 13:16:36,200::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding out if anyone is waiting for it. Thread-32393::DEBUG::2013-09-20 13:16:36,200::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', Clearing records. Thread-32393::ERROR::2013-09-20 13:16:36,200::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': "Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))", 'code': 661}} Thread-32398::DEBUG::2013-09-20 13:16:48,921::task::579::TaskManager.Task::(_updateState) Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::moving from state init -> state preparing Thread-32398::INFO::2013-09-20 13:16:48,922::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-32398::INFO::2013-09-20 13:16:48,922::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-32398::DEBUG::2013-09-20 13:16:48,922::task::1168::TaskManager.Task::(prepare) Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::finished: {} Thread-32398::DEBUG::2013-09-20 13:16:48,922::task::579::TaskManager.Task::(_updateState) Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::moving from state preparing -> state finished
Steve Dainard
On Fri, Sep 20, 2013 at 12:23 PM, Deepak C Shetty < deepakcs@linux.vnet.ibm.com > wrote:
Either you can use the volume set option, as mentioned in the wiki page
-- or --
If the Gluster volume is added to / managed by the oVirt UI, go to the "Volumes" tab, select the Gluster volume, and click on "Optimize for virt. store". That will also set the volume options, in addition to a few other things.
thanx,
deepak

Found this in /var/log/vdsm/vdsm.log:

Thread-1080::INFO::2013-10-15 16:39:43,656::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': 'abfea73c-31d5-43be-97f1-998fd184d98f'}], options=None)

I'm assuming that spUUID shouldn't be all zeros...

messages:

Oct 15 17:17:10 ovirt001 vdsm initSANLock ERROR Cannot initialize SANLock for domain 636abf41-5133-4252-9aef-8a32ec10668e
Oct 15 17:17:11 ovirt001 sanlock[1284]: 2013-10-15 17:17:11-0400 1228 [4075]: write_sectors delta_leader offset 127488 rv -22 /rhev/data-center/mnt/glusterSD/ovirt001:rep2-virt/636abf41-5133-4252-9aef-8a32ec10668e/dom_md/ids
Oct 15 17:17:12 ovirt001 sanlock[1284]: 2013-10-15 17:17:12-0400 1229 [1297]: s3 add_lockspace fail result -22
Oct 15 17:17:12 ovirt001 vdsm TaskManager.Task ERROR Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::Unexpected error

sanlock.log:

2013-10-15 17:17:11-0400 1228 [1297]: s3 lockspace 636abf41-5133-4252-9aef-8a32ec10668e:250:/rhev/data-center/mnt/glusterSD/ovirt001:rep2-virt/636abf41-5133-4252-9aef-8a32ec10668e/dom_md/ids:0
2013-10-15 17:17:11-0400 1228 [4075]: 636abf41 aio collect 1 0x7f41f80008c0:0x7f41f80008d0:0x7f41f8101000 result -22:0 match res
2013-10-15 17:17:11-0400 1228 [4075]: write_sectors delta_leader offset 127488 rv -22 /rhev/data-center/mnt/glusterSD/ovirt001:rep2-virt/636abf41-5133-4252-9aef-8a32ec10668e/dom_md/ids
2013-10-15 17:17:12-0400 1229 [1297]: s3 add_lockspace fail result -22

vdsm.log:

Thread-488::DEBUG::2013-10-15 17:17:07,316::BindingXMLRPC::177::vds::(wrapper) client [10.0.6.22] Thread-488::DEBUG::2013-10-15 17:17:07,317::task::579::TaskManager.Task::(_updateState) Task=`6a1db668-e38b-4698-8a34-554f55182850`::moving from state init -> state preparing Thread-488::INFO::2013-10-15 17:17:07,318::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': 'ovirt001:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None) Thread-488::DEBUG::2013-10-15 17:17:07,323::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n /usr/bin/mount -t glusterfs ovirt001:rep2-virt /rhev/data-center/mnt/glusterSD/ovirt001:rep2-virt' (cwd None) Thread-488::DEBUG::2013-10-15 17:17:07,423::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/* Thread-488::DEBUG::2013-10-15 17:17:07,506::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids: () Thread-488::DEBUG::2013-10-15 17:17:07,507::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {4a81a0e0-a95a-4e02-853f-68df420a7ce4: storage.glusterSD.findDomain} Thread-488::INFO::2013-10-15 17:17:07,507::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-488::DEBUG::2013-10-15 17:17:07,507::task::1168::TaskManager.Task::(prepare) Task=`6a1db668-e38b-4698-8a34-554f55182850`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-488::DEBUG::2013-10-15 17:17:07,507::task::579::TaskManager.Task::(_updateState) Task=`6a1db668-e38b-4698-8a34-554f55182850`::moving from state preparing -> state finished Thread-488::DEBUG::2013-10-15 
17:17:07,507::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-488::DEBUG::2013-10-15 17:17:07,508::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-488::DEBUG::2013-10-15 17:17:07,508::task::974::TaskManager.Task::(_decref) Task=`6a1db668-e38b-4698-8a34-554f55182850`::ref 0 aborting False Thread-490::DEBUG::2013-10-15 17:17:07,709::BindingXMLRPC::177::vds::(wrapper) client [10.0.6.22] Thread-490::DEBUG::2013-10-15 17:17:07,710::task::579::TaskManager.Task::(_updateState) Task=`95b7da65-b45f-4836-9103-b64c6b5fc15a`::moving from state init -> state preparing Thread-490::INFO::2013-10-15 17:17:07,710::logUtils::44::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=7, sdUUID='636abf41-5133-4252-9aef-8a32ec10668e', domainName='rep2-virt', typeSpecificArg='ovirt001:rep2-virt', domClass=1, domVersion='3', options=None) Thread-490::DEBUG::2013-10-15 17:17:07,710::misc::807::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-490::DEBUG::2013-10-15 17:17:07,710::misc::809::SamplingMethod::(__call__) Got in to sampling method Thread-490::DEBUG::2013-10-15 17:17:07,711::misc::807::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-490::DEBUG::2013-10-15 17:17:07,711::misc::809::SamplingMethod::(__call__) Got in to sampling method Thread-490::DEBUG::2013-10-15 17:17:07,711::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None) Thread-490::DEBUG::2013-10-15 17:17:07,727::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21 Thread-490::DEBUG::2013-10-15 17:17:07,727::misc::817::SamplingMethod::(__call__) Returning last result Thread-490::DEBUG::2013-10-15 17:17:09,750::multipath::111::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None) Thread-490::DEBUG::2013-10-15 17:17:09,943::multipath::111::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-490::DEBUG::2013-10-15 17:17:09,944::lvm::483::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-490::DEBUG::2013-10-15 17:17:09,944::lvm::485::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-490::DEBUG::2013-10-15 17:17:09,944::lvm::494::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-490::DEBUG::2013-10-15 17:17:09,945::lvm::496::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-490::DEBUG::2013-10-15 17:17:09,945::lvm::514::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-490::DEBUG::2013-10-15 17:17:09,945::lvm::516::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-490::DEBUG::2013-10-15 17:17:09,945::misc::817::SamplingMethod::(__call__) Returning last result Thread-490::ERROR::2013-10-15 17:17:09,945::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 636abf41-5133-4252-9aef-8a32ec10668e Thread-490::ERROR::2013-10-15 17:17:09,945::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 636abf41-5133-4252-9aef-8a32ec10668e Thread-490::DEBUG::2013-10-15 17:17:09,946::lvm::374::OperationMutex::(_reloadvgs) Operation 'lvm reload 
operation' got the operation mutex Thread-490::DEBUG::2013-10-15 17:17:09,947::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free 636abf41-5133-4252-9aef-8a32ec10668e' (cwd None) Thread-490::DEBUG::2013-10-15 17:17:09,974::lvm::311::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' Volume group "636abf41-5133-4252-9aef-8a32ec10668e" not found\n'; <rc> = 5 Thread-490::WARNING::2013-10-15 17:17:09,975::lvm::379::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group "636abf41-5133-4252-9aef-8a32ec10668e" not found'] Thread-490::DEBUG::2013-10-15 17:17:09,975::lvm::403::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex Thread-490::ERROR::2013-10-15 17:17:09,981::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 636abf41-5133-4252-9aef-8a32ec10668e not found Traceback (most recent call last): File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain dom = findMethod(sdUUID) File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain raise se.StorageDomainDoesNotExist(sdUUID) StorageDomainDoesNotExist: Storage domain does not exist: ('636abf41-5133-4252-9aef-8a32ec10668e',) Thread-490::INFO::2013-10-15 17:17:09,982::nfsSD::69::Storage.StorageDomain::(create) sdUUID=636abf41-5133-4252-9aef-8a32ec10668e domainName=rep2-virt remotePath=ovirt001:rep2-virt domClass=1 Thread-490::DEBUG::2013-10-15 17:17:10,104::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend Thread-490::DEBUG::2013-10-15 17:17:10,115::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=[] Thread-490::WARNING::2013-10-15 17:17:10,115::persistentDict::256::Storage.PersistentDict::(refresh) data has no embedded checksum - trust it as it is Thread-490::DEBUG::2013-10-15 17:17:10,115::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction Thread-490::DEBUG::2013-10-15 17:17:10,115::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes Thread-490::DEBUG::2013-10-15 17:17:10,115::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=ovirt001:rep2-virt', 'ROLE=Regular', 'SDUUID=636abf41-5133-4252-9aef-8a32ec10668e', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=a38d3de67a9f2b5bc3979eb5f4dfbce7d48986d7'] Thread-490::DEBUG::2013-10-15 17:17:10,119::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction Thread-490::DEBUG::2013-10-15 17:17:10,122::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/ovirt001:rep2-virt/636abf41-5133-4252-9aef-8a32ec10668e Thread-490::DEBUG::2013-10-15 17:17:10,122::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend Thread-490::DEBUG::2013-10-15 17:17:10,133::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 
'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=ovirt001:rep2-virt', 'ROLE=Regular', 'SDUUID=636abf41-5133-4252-9aef-8a32ec10668e', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=a38d3de67a9f2b5bc3979eb5f4dfbce7d48986d7'] Thread-490::DEBUG::2013-10-15 17:17:10,135::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images [] Thread-490::DEBUG::2013-10-15 17:17:10,136::resourceManager::420::ResourceManager::(registerNamespace) Registering namespace '636abf41-5133-4252-9aef-8a32ec10668e_imageNS' Thread-490::DEBUG::2013-10-15 17:17:10,136::resourceManager::420::ResourceManager::(registerNamespace) Registering namespace '636abf41-5133-4252-9aef-8a32ec10668e_volumeNS' Thread-490::DEBUG::2013-10-15 17:17:10,136::clusterlock::137::initSANLock::(initSANLock) Initializing SANLock for domain 636abf41-5133-4252-9aef-8a32ec10668e Thread-490::ERROR::2013-10-15 17:17:10,731::clusterlock::145::initSANLock::(initSANLock) Cannot initialize SANLock for domain 636abf41-5133-4252-9aef-8a32ec10668e Traceback (most recent call last): File "/usr/share/vdsm/storage/clusterlock.py", line 140, in initSANLock sanlock.init_lockspace(sdUUID, idsPath) SanlockException: (22, 'Sanlock lockspace init failure', 'Invalid argument') Thread-490::WARNING::2013-10-15 17:17:10,732::sd::428::Storage.StorageDomain::(initSPMlease) lease did not initialize successfully Traceback (most recent call last): File "/usr/share/vdsm/storage/sd.py", line 423, in initSPMlease self._clusterLock.initLock() File "/usr/share/vdsm/storage/clusterlock.py", line 163, in initLock initSANLock(self._sdUUID, self._idsPath, self._leasesPath) File "/usr/share/vdsm/storage/clusterlock.py", line 146, in initSANLock raise se.ClusterLockInitError() ClusterLockInitError: Could not initialize cluster lock: () Thread-490::DEBUG::2013-10-15 17:17:10,732::hsm::2624::Storage.HSM::(createStorageDomain) knownSDs: {636abf41-5133-4252-9aef-8a32ec10668e: storage.glusterSD.findDomain, 4a81a0e0-a95a-4e02-853f-68df420a7ce4: storage.glusterSD.findDomain} Thread-490::INFO::2013-10-15 17:17:10,732::logUtils::47::dispatcher::(wrapper) Run and protect: createStorageDomain, Return response: None Thread-490::DEBUG::2013-10-15 17:17:10,732::task::1168::TaskManager.Task::(prepare) Task=`95b7da65-b45f-4836-9103-b64c6b5fc15a`::finished: None Thread-490::DEBUG::2013-10-15 17:17:10,733::task::579::TaskManager.Task::(_updateState) Task=`95b7da65-b45f-4836-9103-b64c6b5fc15a`::moving from state preparing -> state finished Thread-490::DEBUG::2013-10-15 17:17:10,733::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-490::DEBUG::2013-10-15 17:17:10,733::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-490::DEBUG::2013-10-15 17:17:10,733::task::974::TaskManager.Task::(_decref) Task=`95b7da65-b45f-4836-9103-b64c6b5fc15a`::ref 0 aborting False Thread-494::DEBUG::2013-10-15 17:17:10,739::BindingXMLRPC::177::vds::(wrapper) client [10.0.6.22] Thread-494::DEBUG::2013-10-15 17:17:10,740::task::579::TaskManager.Task::(_updateState) Task=`7b5b467e-a1e5-413c-934d-18e337cf6dd8`::moving from state init -> state preparing Thread-494::INFO::2013-10-15 17:17:10,740::logUtils::44::dispatcher::(wrapper) Run and protect: getStorageDomainStats(sdUUID='636abf41-5133-4252-9aef-8a32ec10668e', options=None) Thread-494::DEBUG::2013-10-15 
17:17:10,741::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.636abf41-5133-4252-9aef-8a32ec10668e`ReqID=`0602d0b9-a7ff-4f3f-80bd-c886fe4e3270`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2784' at 'getStorageDomainStats' Thread-494::DEBUG::2013-10-15 17:17:10,741::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e' for lock type 'shared' Thread-494::DEBUG::2013-10-15 17:17:10,741::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e' is free. Now locking as 'shared' (1 active user) Thread-494::DEBUG::2013-10-15 17:17:10,741::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.636abf41-5133-4252-9aef-8a32ec10668e`ReqID=`0602d0b9-a7ff-4f3f-80bd-c886fe4e3270`::Granted request Thread-494::DEBUG::2013-10-15 17:17:10,742::task::811::TaskManager.Task::(resourceAcquired) Task=`7b5b467e-a1e5-413c-934d-18e337cf6dd8`::_resourcesAcquired: Storage.636abf41-5133-4252-9aef-8a32ec10668e (shared) Thread-494::DEBUG::2013-10-15 17:17:10,742::task::974::TaskManager.Task::(_decref) Task=`7b5b467e-a1e5-413c-934d-18e337cf6dd8`::ref 1 aborting False Thread-494::INFO::2013-10-15 17:17:10,743::logUtils::47::dispatcher::(wrapper) Run and protect: getStorageDomainStats, Return response: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '14144560824320', 'disktotal': '14199600185344', 'mdafree': 0}} Thread-494::DEBUG::2013-10-15 17:17:10,744::task::1168::TaskManager.Task::(prepare) Task=`7b5b467e-a1e5-413c-934d-18e337cf6dd8`::finished: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '14144560824320', 'disktotal': '14199600185344', 'mdafree': 0}} Thread-494::DEBUG::2013-10-15 17:17:10,744::task::579::TaskManager.Task::(_updateState) Task=`7b5b467e-a1e5-413c-934d-18e337cf6dd8`::moving from state preparing -> state finished Thread-494::DEBUG::2013-10-15 17:17:10,744::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.636abf41-5133-4252-9aef-8a32ec10668e': < ResourceRef 'Storage.636abf41-5133-4252-9aef-8a32ec10668e', isValid: 'True' obj: 'None'>} Thread-494::DEBUG::2013-10-15 17:17:10,744::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-494::DEBUG::2013-10-15 17:17:10,744::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e' Thread-494::DEBUG::2013-10-15 17:17:10,745::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e' (0 active users) Thread-494::DEBUG::2013-10-15 17:17:10,745::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e' is free, finding out if anyone is waiting for it. Thread-494::DEBUG::2013-10-15 17:17:10,745::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e', Clearing records. 
Thread-494::DEBUG::2013-10-15 17:17:10,745::task::974::TaskManager.Task::(_decref) Task=`7b5b467e-a1e5-413c-934d-18e337cf6dd8`::ref 0 aborting False Thread-495::DEBUG::2013-10-15 17:17:11,113::BindingXMLRPC::177::vds::(wrapper) client [10.0.6.22] Thread-495::DEBUG::2013-10-15 17:17:11,113::task::579::TaskManager.Task::(_updateState) Task=`d217edaf-c57b-4c28-aceb-5d68ec306531`::moving from state init -> state preparing Thread-495::INFO::2013-10-15 17:17:11,114::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': 'ovirt001:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '648e518f-11eb-48d1-b7cd-64c7b7cf6204'}], options=None) Thread-495::DEBUG::2013-10-15 17:17:11,119::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/* Thread-495::DEBUG::2013-10-15 17:17:11,125::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids: ('636abf41-5133-4252-9aef-8a32ec10668e',) Thread-495::DEBUG::2013-10-15 17:17:11,126::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {636abf41-5133-4252-9aef-8a32ec10668e: storage.glusterSD.findDomain, 4a81a0e0-a95a-4e02-853f-68df420a7ce4: storage.glusterSD.findDomain} Thread-495::INFO::2013-10-15 17:17:11,126::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '648e518f-11eb-48d1-b7cd-64c7b7cf6204'}]} Thread-495::DEBUG::2013-10-15 17:17:11,126::task::1168::TaskManager.Task::(prepare) Task=`d217edaf-c57b-4c28-aceb-5d68ec306531`::finished: {'statuslist': [{'status': 0, 'id': '648e518f-11eb-48d1-b7cd-64c7b7cf6204'}]} Thread-495::DEBUG::2013-10-15 17:17:11,126::task::579::TaskManager.Task::(_updateState) Task=`d217edaf-c57b-4c28-aceb-5d68ec306531`::moving from state preparing -> state finished Thread-495::DEBUG::2013-10-15 17:17:11,126::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-495::DEBUG::2013-10-15 17:17:11,127::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-495::DEBUG::2013-10-15 17:17:11,127::task::974::TaskManager.Task::(_decref) Task=`d217edaf-c57b-4c28-aceb-5d68ec306531`::ref 0 aborting False Thread-497::DEBUG::2013-10-15 17:17:11,133::BindingXMLRPC::177::vds::(wrapper) client [10.0.6.22] Thread-497::DEBUG::2013-10-15 17:17:11,133::task::579::TaskManager.Task::(_updateState) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::moving from state init -> state preparing Thread-497::INFO::2013-10-15 17:17:11,134::logUtils::44::dispatcher::(wrapper) Run and protect: createStoragePool(poolType=None, spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', poolName='Default', masterDom='636abf41-5133-4252-9aef-8a32ec10668e', domList=['636abf41-5133-4252-9aef-8a32ec10668e'], masterVersion=12, lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60, ioOpTimeoutSec=10, leaseRetries=3, options=None) Thread-497::INFO::2013-10-15 17:17:11,134::fileSD::315::Storage.StorageDomain::(validate) sdUUID=636abf41-5133-4252-9aef-8a32ec10668e Thread-497::DEBUG::2013-10-15 17:17:11,144::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=ovirt001:rep2-virt', 'ROLE=Regular', 'SDUUID=636abf41-5133-4252-9aef-8a32ec10668e', 
'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=a38d3de67a9f2b5bc3979eb5f4dfbce7d48986d7'] Thread-497::DEBUG::2013-10-15 17:17:11,145::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`938eb5db-1766-43df-8a91-ef34759cfc06`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '954' at 'createStoragePool' Thread-497::DEBUG::2013-10-15 17:17:11,145::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' for lock type 'exclusive' Thread-497::DEBUG::2013-10-15 17:17:11,146::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free. Now locking as 'exclusive' (1 active user) Thread-497::DEBUG::2013-10-15 17:17:11,146::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`938eb5db-1766-43df-8a91-ef34759cfc06`::Granted request Thread-497::DEBUG::2013-10-15 17:17:11,146::task::811::TaskManager.Task::(resourceAcquired) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::_resourcesAcquired: Storage.5849b030-626e-47cb-ad90-3ce782d831b3 (exclusive) Thread-497::DEBUG::2013-10-15 17:17:11,146::task::974::TaskManager.Task::(_decref) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::ref 1 aborting False Thread-497::DEBUG::2013-10-15 17:17:11,147::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.636abf41-5133-4252-9aef-8a32ec10668e`ReqID=`79bba45b-37b7-417f-bfbf-fc96603ed997`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '956' at 'createStoragePool' Thread-497::DEBUG::2013-10-15 17:17:11,147::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e' for lock type 'exclusive' Thread-497::DEBUG::2013-10-15 17:17:11,147::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e' is free. 
Now locking as 'exclusive' (1 active user) Thread-497::DEBUG::2013-10-15 17:17:11,147::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.636abf41-5133-4252-9aef-8a32ec10668e`ReqID=`79bba45b-37b7-417f-bfbf-fc96603ed997`::Granted request Thread-497::DEBUG::2013-10-15 17:17:11,147::task::811::TaskManager.Task::(resourceAcquired) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::_resourcesAcquired: Storage.636abf41-5133-4252-9aef-8a32ec10668e (exclusive) Thread-497::DEBUG::2013-10-15 17:17:11,148::task::974::TaskManager.Task::(_decref) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::ref 1 aborting False Thread-497::INFO::2013-10-15 17:17:11,148::sp::592::Storage.StoragePool::(create) spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 poolName=Default master_sd=636abf41-5133-4252-9aef-8a32ec10668e domList=['636abf41-5133-4252-9aef-8a32ec10668e'] masterVersion=12 {'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3, 'LOCKRENEWALINTERVALSEC': 5} Thread-497::INFO::2013-10-15 17:17:11,148::fileSD::315::Storage.StorageDomain::(validate) sdUUID=636abf41-5133-4252-9aef-8a32ec10668e Thread-497::DEBUG::2013-10-15 17:17:11,158::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=ovirt001:rep2-virt', 'ROLE=Regular', 'SDUUID=636abf41-5133-4252-9aef-8a32ec10668e', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=a38d3de67a9f2b5bc3979eb5f4dfbce7d48986d7'] Thread-497::DEBUG::2013-10-15 17:17:11,168::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=ovirt001:rep2-virt', 'ROLE=Regular', 'SDUUID=636abf41-5133-4252-9aef-8a32ec10668e', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=a38d3de67a9f2b5bc3979eb5f4dfbce7d48986d7'] Thread-497::WARNING::2013-10-15 17:17:11,168::fileUtils::167::Storage.fileUtils::(createdir) Dir /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3 already exists Thread-497::DEBUG::2013-10-15 17:17:11,168::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction Thread-497::DEBUG::2013-10-15 17:17:11,169::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes Thread-497::DEBUG::2013-10-15 17:17:11,169::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=ovirt001:rep2-virt', 'ROLE=Regular', 'SDUUID=636abf41-5133-4252-9aef-8a32ec10668e', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=acf2b6bfff3dc3de8e33a5b08af5fc959a01bddc'] Thread-497::DEBUG::2013-10-15 17:17:11,184::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction Thread-497::INFO::2013-10-15 17:17:11,184::clusterlock::174::SANLock::(acquireHostId) Acquiring host id for domain 636abf41-5133-4252-9aef-8a32ec10668e (id: 250) Thread-498::DEBUG::2013-10-15 17:17:11,697::task::579::TaskManager.Task::(_updateState) Task=`7f1a4313-262f-4458-96af-8e5a0d3626d1`::moving from state init -> state preparing Thread-498::INFO::2013-10-15 17:17:11,698::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-498::INFO::2013-10-15 
17:17:11,698::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-498::DEBUG::2013-10-15 17:17:11,698::task::1168::TaskManager.Task::(prepare) Task=`7f1a4313-262f-4458-96af-8e5a0d3626d1`::finished: {} Thread-498::DEBUG::2013-10-15 17:17:11,698::task::579::TaskManager.Task::(_updateState) Task=`7f1a4313-262f-4458-96af-8e5a0d3626d1`::moving from state preparing -> state finished Thread-498::DEBUG::2013-10-15 17:17:11,698::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-498::DEBUG::2013-10-15 17:17:11,698::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-498::DEBUG::2013-10-15 17:17:11,699::task::974::TaskManager.Task::(_decref) Task=`7f1a4313-262f-4458-96af-8e5a0d3626d1`::ref 0 aborting False Thread-497::ERROR::2013-10-15 17:17:12,186::task::850::TaskManager.Task::(_setError) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::Unexpected error Traceback (most recent call last): File "/usr/share/vdsm/storage/task.py", line 857, in _run return fn(*args, **kargs) File "/usr/share/vdsm/logUtils.py", line 45, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/storage/hsm.py", line 960, in createStoragePool masterVersion, leaseParams) File "/usr/share/vdsm/storage/sp.py", line 617, in create self._acquireTemporaryClusterLock(msdUUID, leaseParams) File "/usr/share/vdsm/storage/sp.py", line 559, in _acquireTemporaryClusterLock msd.acquireHostId(self.id) File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId self._clusterLock.acquireHostId(hostId, async) File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId raise se.AcquireHostIdFailure(self._sdUUID, e) AcquireHostIdFailure: Cannot acquire host id: ('636abf41-5133-4252-9aef-8a32ec10668e', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) Thread-497::DEBUG::2013-10-15 17:17:12,186::task::869::TaskManager.Task::(_run) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::Task._run: e53f26ec-66b9-4f01-aa10-053a738780b8 (None, '5849b030-626e-47cb-ad90-3ce782d831b3', 'Default', '636abf41-5133-4252-9aef-8a32ec10668e', ['636abf41-5133-4252-9aef-8a32ec10668e'], 12, None, 5, 60, 10, 3) {} failed - stopping task Thread-497::DEBUG::2013-10-15 17:17:12,187::task::1194::TaskManager.Task::(stop) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::stopping in state preparing (force False) Thread-497::DEBUG::2013-10-15 17:17:12,187::task::974::TaskManager.Task::(_decref) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::ref 1 aborting True Thread-497::INFO::2013-10-15 17:17:12,187::task::1151::TaskManager.Task::(prepare) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::aborting: Task is aborted: 'Cannot acquire host id' - code 661 Thread-497::DEBUG::2013-10-15 17:17:12,187::task::1156::TaskManager.Task::(prepare) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::Prepare: aborted: Cannot acquire host id Thread-497::DEBUG::2013-10-15 17:17:12,187::task::974::TaskManager.Task::(_decref) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::ref 0 aborting True Thread-497::DEBUG::2013-10-15 17:17:12,187::task::909::TaskManager.Task::(_doAbort) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::Task._doAbort: force False Thread-497::DEBUG::2013-10-15 17:17:12,188::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-497::DEBUG::2013-10-15 17:17:12,188::task::579::TaskManager.Task::(_updateState) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::moving from state preparing -> state aborting 
Thread-497::DEBUG::2013-10-15 17:17:12,188::task::534::TaskManager.Task::(__state_aborting) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::_aborting: recover policy none Thread-497::DEBUG::2013-10-15 17:17:12,188::task::579::TaskManager.Task::(_updateState) Task=`e53f26ec-66b9-4f01-aa10-053a738780b8`::moving from state aborting -> state failed Thread-497::DEBUG::2013-10-15 17:17:12,188::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.5849b030-626e-47cb-ad90-3ce782d831b3': < ResourceRef 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', isValid: 'True' obj: 'None'>, 'Storage.636abf41-5133-4252-9aef-8a32ec10668e': < ResourceRef 'Storage.636abf41-5133-4252-9aef-8a32ec10668e', isValid: 'True' obj: 'None'>} Thread-497::DEBUG::2013-10-15 17:17:12,188::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-497::DEBUG::2013-10-15 17:17:12,189::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' Thread-497::DEBUG::2013-10-15 17:17:12,189::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active users) Thread-497::DEBUG::2013-10-15 17:17:12,189::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding out if anyone is waiting for it. Thread-497::DEBUG::2013-10-15 17:17:12,189::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', Clearing records. Thread-497::DEBUG::2013-10-15 17:17:12,189::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e' Thread-497::DEBUG::2013-10-15 17:17:12,190::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e' (0 active users) Thread-497::DEBUG::2013-10-15 17:17:12,190::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e' is free, finding out if anyone is waiting for it. Thread-497::DEBUG::2013-10-15 17:17:12,190::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.636abf41-5133-4252-9aef-8a32ec10668e', Clearing records. 
Thread-497::ERROR::2013-10-15 17:17:12,190::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': "Cannot acquire host id: ('636abf41-5133-4252-9aef-8a32ec10668e', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))", 'code': 661}} Thread-504::DEBUG::2013-10-15 17:17:27,018::task::579::TaskManager.Task::(_updateState) Task=`4df542ea-0b2a-489a-b3d7-9299841ca2d6`::moving from state init -> state preparing Thread-504::INFO::2013-10-15 17:17:27,019::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-504::INFO::2013-10-15 17:17:27,019::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-504::DEBUG::2013-10-15 17:17:27,019::task::1168::TaskManager.Task::(prepare) Task=`4df542ea-0b2a-489a-b3d7-9299841ca2d6`::finished: {} Thread-504::DEBUG::2013-10-15 17:17:27,019::task::579::TaskManager.Task::(_updateState) Task=`4df542ea-0b2a-489a-b3d7-9299841ca2d6`::moving from state preparing -> state finished Thread-504::DEBUG::2013-10-15 17:17:27,019::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-504::DEBUG::2013-10-15 17:17:27,019::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-504::DEBUG::2013-10-15 17:17:27,019::task::974::TaskManager.Task::(_decref) Task=`4df542ea-0b2a-489a-b3d7-9299841ca2d6`::ref 0 aborting False Thread-510::DEBUG::2013-10-15 17:17:42,194::task::579::TaskManager.Task::(_updateState) Task=`03b081b7-430b-4207-9272-ba5785f63cb1`::moving from state init -> state preparing Thread-510::INFO::2013-10-15 17:17:42,194::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-510::INFO::2013-10-15 17:17:42,194::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-510::DEBUG::2013-10-15 17:17:42,195::task::1168::TaskManager.Task::(prepare) Task=`03b081b7-430b-4207-9272-ba5785f63cb1`::finished: {} Thread-510::DEBUG::2013-10-15 17:17:42,195::task::579::TaskManager.Task::(_updateState) Task=`03b081b7-430b-4207-9272-ba5785f63cb1`::moving from state preparing -> state finished Thread-510::DEBUG::2013-10-15 17:17:42,195::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-510::DEBUG::2013-10-15 17:17:42,195::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-510::DEBUG::2013-10-15 17:17:42,195::task::974::TaskManager.Task::(_decref) Task=`03b081b7-430b-4207-9272-ba5785f63cb1`::ref 0 aborting False Thread-516::DEBUG::2013-10-15 17:17:57,328::task::579::TaskManager.Task::(_updateState) Task=`535b1567-e5de-4a2d-9a34-080f3cea5de6`::moving from state init -> state preparing Thread-516::INFO::2013-10-15 17:17:57,328::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-516::INFO::2013-10-15 17:17:57,328::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-516::DEBUG::2013-10-15 17:17:57,328::task::1168::TaskManager.Task::(prepare) Task=`535b1567-e5de-4a2d-9a34-080f3cea5de6`::finished: {} Thread-516::DEBUG::2013-10-15 17:17:57,329::task::579::TaskManager.Task::(_updateState) Task=`535b1567-e5de-4a2d-9a34-080f3cea5de6`::moving from state preparing -> state finished Thread-516::DEBUG::2013-10-15 17:17:57,329::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-516::DEBUG::2013-10-15 
17:17:57,329::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-516::DEBUG::2013-10-15 17:17:57,329::task::974::TaskManager.Task::(_decref) Task=`535b1567-e5de-4a2d-9a34-080f3cea5de6`::ref 0 aborting False

Steve Dainard

On Thu, Oct 3, 2013 at 8:02 AM, Allon Mureinik <amureini@redhat.com> wrote:
The root cause seems to be a SANlock issue:

Thread-32382::ERROR::2013-09-20 13:16:34,126::clusterlock::145::initSANLock::(initSANLock) Cannot initialize SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/clusterlock.py", line 140, in initSANLock
    sanlock.init_lockspace(sdUUID, idsPath)
SanlockException: (22, 'Sanlock lockspace init failure', 'Invalid argument')
Thread-32382::WARNING::2013-09-20 13:16:34,127::sd::428::Storage.StorageDomain::(initSPMlease) lease did not initialize successfully
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sd.py", line 423, in initSPMlease
    self._clusterLock.initLock()
  File "/usr/share/vdsm/storage/clusterlock.py", line 163, in initLock
    initSANLock(self._sdUUID, self._idsPath, self._leasesPath)
  File "/usr/share/vdsm/storage/clusterlock.py", line 146, in initSANLock
    raise se.ClusterLockInitError()
ClusterLockInitError: Could not initialize cluster lock: ()
Can you include /var/log/sanlock.log and /var/log/messages please?
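For what it's worth, sanlock failing with error 22 ('Invalid argument') on a Gluster mount usually points at direct I/O: sanlock writes its lockspace with O_DIRECT, and that write returns EINVAL when the volume doesn't support 512-byte aligned direct writes (the virt profile enables network.remote-dio for this reason). A quick check from the host, assuming this setup's mount point and a throwaway file name:

dd if=/dev/zero of='/rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt/__direct_io_test__' bs=512 count=1 oflag=direct
rm -f '/rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt/__direct_io_test__'

If the dd fails with 'Invalid argument', sanlock will fail the same way on the dom_md/ids file.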
----- Original Message -----
From: "Steve Dainard" <sdainard@miovision.com> To: "Deepak C Shetty" <deepakcs@linux.vnet.ibm.com> Cc: "users" <users@ovirt.org> Sent: Friday, September 20, 2013 8:30:59 PM Subject: Re: [Users] Ovirt 3.3 Fedora 19 add gluster storage permissions error
Awesome, thanks guys. It's weird that the article tells you to set options with 'key=value' rather than 'key value'; must be some legacy stuff.
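To illustrate with one of the wiki's options (volume name from this thread):

# current syntax, space-separated 'key value':
gluster volume set rep2-virt storage.owner-uid 36
# the article's legacy 'key=value' form, which the current CLI apparently no longer accepts:
gluster volume set rep2-virt storage.owner-uid=36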
Once those changes are in place, I hit a different error. Deepak, maybe you've seen this one when adding a new storage domain:
[root@ovirt-manager2 ~]# tail -f /var/log/ovirt-engine/engine.log 2013-09-20 13:16:36,226 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp--127.0.0.1-8702-9) Command CreateStoragePoolVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) 2013-09-20 13:16:36,229 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp--127.0.0.1-8702-9) FINISH, CreateStoragePoolVDSCommand, log id: 672635cc 2013-09-20 13:16:36,231 ERROR [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand] (ajp--127.0.0.1-8702-9) Command org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStoragePoolVDS, error = Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument')) (Failed with VDSM error AcquireHostIdFailure and code 661) 2013-09-20 13:16:36,296 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) Correlation ID: 11070337, Call Stack: null, Custom Event ID: -1, Message: Failed to attach Storage Domains to Data Center Default. (User: admin@internal) 2013-09-20 13:16:36,299 INFO [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand] (ajp--127.0.0.1-8702-9) Lock freed to object EngineLock [exclusiveLocks= key: 5849b030-626e-47cb-ad90-3ce782d831b3 value: POOL , sharedLocks= ] 2013-09-20 13:16:36,387 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating CHANGED_ENTITY of org.ovirt.engine.core.common.businessentities.StoragePool; snapshot: id=5849b030-626e-47cb-ad90-3ce782d831b3. 2013-09-20 13:16:36,398 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, storageId = 17d21ac7-5859-4f25-8de7-2a9433d50c11. 2013-09-20 13:16:36,425 INFO [org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand] (ajp--127.0.0.1-8702-9) Command [id=293a1e97-e949-4c17-92c6-c01f2221204e]: Compensating CHANGED_ENTITY of org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: id=17d21ac7-5859-4f25-8de7-2a9433d50c11. 2013-09-20 13:16:36,464 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) Correlation ID: 302ae6eb, Job ID: 014ec59b-e6d7-4e5e-b588-4fb0dfa8f1c8, Call Stack: null, Custom Event ID: -1, Message: Failed to attach Storage Domain rep2-virt to Data Center Default. (User: admin@internal)
[root@ovirt001 ~]# tail -f /var/log/vdsm/vdsm.log Thread-32374::DEBUG::2013-09-20 13:16:18,107::task::579::TaskManager.Task::(_updateState) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::moving from state init -> state preparing Thread-32374::INFO::2013-09-20 13:16:18,107::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-32374::INFO::2013-09-20 13:16:18,107::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::1168::TaskManager.Task::(prepare) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::finished: {} Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::579::TaskManager.Task::(_updateState) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::moving from state preparing -> state finished Thread-32374::DEBUG::2013-09-20 13:16:18,108::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-32374::DEBUG::2013-09-20 13:16:18,108::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-32374::DEBUG::2013-09-20 13:16:18,108::task::974::TaskManager.Task::(_decref) Task=`f4cab975-d5fa-463a-990e-ab32686c6806`::ref 0 aborting False Thread-32379::DEBUG::2013-09-20 13:16:29,509::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34] Thread-32379::DEBUG::2013-09-20 13:16:29,510::task::579::TaskManager.Task::(_updateState) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::moving from state init -> state preparing Thread-32379::INFO::2013-09-20 13:16:29,510::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None) Thread-32379::DEBUG::2013-09-20 13:16:29,516::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/* Thread-32379::DEBUG::2013-09-20 13:16:29,523::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids: () Thread-32379::DEBUG::2013-09-20 13:16:29,523::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain} Thread-32379::INFO::2013-09-20 13:16:29,523::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-32379::DEBUG::2013-09-20 13:16:29,523::task::1168::TaskManager.Task::(prepare) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::finished: {'statuslist': [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]} Thread-32379::DEBUG::2013-09-20 13:16:29,524::task::579::TaskManager.Task::(_updateState) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::moving from state preparing -> state finished Thread-32379::DEBUG::2013-09-20 13:16:29,524::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-32379::DEBUG::2013-09-20 13:16:29,524::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-32379::DEBUG::2013-09-20 13:16:29,524::task::974::TaskManager.Task::(_decref) Task=`1ad55ba1-afaa-4524-b3fa-3d55a421e8bc`::ref 0 aborting False Thread-32382::DEBUG::2013-09-20 13:16:29,888::BindingXMLRPC::177::vds::(wrapper) 
client [10.0.0.34] Thread-32382::DEBUG::2013-09-20 13:16:29,888::task::579::TaskManager.Task::(_updateState) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::moving from state init -> state preparing Thread-32382::INFO::2013-09-20 13:16:29,889::logUtils::44::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=7, sdUUID='17d21ac7-5859-4f25-8de7-2a9433d50c11', domainName='rep2-virt', typeSpecificArg='192.168.1.1:rep2-virt', domClass=1, domVersion='3', options=None) Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::807::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage) Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::809::SamplingMethod::(__call__) Got in to sampling method Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::807::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan) Thread-32382::DEBUG::2013-09-20 13:16:29,889::misc::809::SamplingMethod::(__call__) Got in to sampling method Thread-32382::DEBUG::2013-09-20 13:16:29,889::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None) Thread-32382::DEBUG::2013-09-20 13:16:29,904::iscsiadm::91::Storage.Misc.excCmd::(_runCmd) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21 Thread-32382::DEBUG::2013-09-20 13:16:29,904::misc::817::SamplingMethod::(__call__) Returning last result Thread-32382::DEBUG::2013-09-20 13:16:32,931::multipath::111::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None) Thread-32382::DEBUG::2013-09-20 13:16:33,255::multipath::111::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0 Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::483::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::485::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::494::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,256::lvm::496::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,257::lvm::514::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,257::lvm::516::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,257::misc::817::SamplingMethod::(__call__) Returning last result Thread-32382::ERROR::2013-09-20 13:16:33,257::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 Thread-32382::ERROR::2013-09-20
13:16:33,257::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 Thread-32382::DEBUG::2013-09-20 13:16:33,258::lvm::374::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex Thread-32382::DEBUG::2013-09-20 13:16:33,259::lvm::311::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \'r|.*|\' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
17d21ac7-5859-4f25-8de7-2a9433d50c11' (cwd None)
Thread-32382::DEBUG::2013-09-20 13:16:33,285::lvm::311::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' Volume group "17d21ac7-5859-4f25-8de7-2a9433d50c11" not found\n'; <rc> = 5
Thread-32382::WARNING::2013-09-20 13:16:33,286::lvm::379::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group "17d21ac7-5859-4f25-8de7-2a9433d50c11" not found']
Thread-32382::DEBUG::2013-09-20 13:16:33,286::lvm::403::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-32382::ERROR::2013-09-20 13:16:33,295::sdc::143::Storage.StorageDomainCache::(_findDomain) domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: ('17d21ac7-5859-4f25-8de7-2a9433d50c11',)
Thread-32382::INFO::2013-09-20 13:16:33,295::nfsSD::69::Storage.StorageDomain::(create) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11 domainName=rep2-virt remotePath=192.168.1.1:rep2-virt domClass=1
Thread-32382::DEBUG::2013-09-20 13:16:33,430::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-32382::DEBUG::2013-09-20 13:16:33,445::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=[]
Thread-32382::WARNING::2013-09-20 13:16:33,446::persistentDict::256::Storage.PersistentDict::(refresh) data has no embedded checksum - trust it as it is
Thread-32382::DEBUG::2013-09-20 13:16:33,447::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction
Thread-32382::DEBUG::2013-09-20 13:16:33,448::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes
Thread-32382::DEBUG::2013-09-20 13:16:33,449::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32382::DEBUG::2013-09-20 13:16:33,454::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction
Thread-32382::DEBUG::2013-09-20 13:16:33,457::fileSD::153::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/192.168.1.1:rep2-virt/17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32382::DEBUG::2013-09-20 13:16:33,457::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-32382::DEBUG::2013-09-20 13:16:33,469::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32382::DEBUG::2013-09-20 13:16:33,472::fileSD::535::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
Thread-32382::DEBUG::2013-09-20 13:16:33,472::resourceManager::420::ResourceManager::(registerNamespace) Registering namespace '17d21ac7-5859-4f25-8de7-2a9433d50c11_imageNS'
Thread-32382::DEBUG::2013-09-20 13:16:33,473::resourceManager::420::ResourceManager::(registerNamespace) Registering namespace '17d21ac7-5859-4f25-8de7-2a9433d50c11_volumeNS'
Thread-32382::DEBUG::2013-09-20 13:16:33,473::clusterlock::137::initSANLock::(initSANLock) Initializing SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32387::DEBUG::2013-09-20 13:16:33,717::task::579::TaskManager.Task::(_updateState) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::moving from state init -> state preparing
Thread-32387::INFO::2013-09-20 13:16:33,718::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-32387::INFO::2013-09-20 13:16:33,718::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-32387::DEBUG::2013-09-20 13:16:33,718::task::1168::TaskManager.Task::(prepare) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::finished: {}
Thread-32387::DEBUG::2013-09-20 13:16:33,718::task::579::TaskManager.Task::(_updateState) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::moving from state preparing -> state finished
Thread-32387::DEBUG::2013-09-20 13:16:33,718::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32387::DEBUG::2013-09-20 13:16:33,718::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32387::DEBUG::2013-09-20 13:16:33,719::task::974::TaskManager.Task::(_decref) Task=`0e11c8e5-4e40-4d28-9eaf-129db67b2f4d`::ref 0 aborting False
Thread-32382::ERROR::2013-09-20 13:16:34,126::clusterlock::145::initSANLock::(initSANLock) Cannot initialize SANLock for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/clusterlock.py", line 140, in initSANLock
    sanlock.init_lockspace(sdUUID, idsPath)
SanlockException: (22, 'Sanlock lockspace init failure', 'Invalid argument')
Thread-32382::WARNING::2013-09-20 13:16:34,127::sd::428::Storage.StorageDomain::(initSPMlease) lease did not initialize successfully
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sd.py", line 423, in initSPMlease
    self._clusterLock.initLock()
  File "/usr/share/vdsm/storage/clusterlock.py", line 163, in initLock
    initSANLock(self._sdUUID, self._idsPath, self._leasesPath)
  File "/usr/share/vdsm/storage/clusterlock.py", line 146, in initSANLock
    raise se.ClusterLockInitError()
ClusterLockInitError: Could not initialize cluster lock: ()
Thread-32382::DEBUG::2013-09-20 13:16:34,127::hsm::2624::Storage.HSM::(createStorageDomain) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain, 17d21ac7-5859-4f25-8de7-2a9433d50c11: storage.glusterSD.findDomain}
Thread-32382::INFO::2013-09-20 13:16:34,128::logUtils::47::dispatcher::(wrapper) Run and protect: createStorageDomain, Return response: None
Thread-32382::DEBUG::2013-09-20 13:16:34,128::task::1168::TaskManager.Task::(prepare) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::finished: None
Thread-32382::DEBUG::2013-09-20 13:16:34,128::task::579::TaskManager.Task::(_updateState) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::moving from state preparing -> state finished
Thread-32382::DEBUG::2013-09-20 13:16:34,128::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32382::DEBUG::2013-09-20 13:16:34,128::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32382::DEBUG::2013-09-20 13:16:34,129::task::974::TaskManager.Task::(_decref) Task=`a3ba925b-65ee-42a6-8506-927a06f63995`::ref 0 aborting False
Thread-32389::DEBUG::2013-09-20 13:16:34,219::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32389::DEBUG::2013-09-20 13:16:34,219::task::579::TaskManager.Task::(_updateState) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::moving from state init -> state preparing
Thread-32389::INFO::2013-09-20 13:16:34,220::logUtils::44::dispatcher::(wrapper) Run and protect: getStorageDomainStats(sdUUID='17d21ac7-5859-4f25-8de7-2a9433d50c11', options=None)
Thread-32389::DEBUG::2013-09-20 13:16:34,220::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`9f37d808-9ad2-4c06-99ef-449b43049e80`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '2784' at 'getStorageDomainStats'
Thread-32389::DEBUG::2013-09-20 13:16:34,220::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' for lock type 'shared'
Thread-32389::DEBUG::2013-09-20 13:16:34,221::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free. Now locking as 'shared' (1 active user)
Thread-32389::DEBUG::2013-09-20 13:16:34,221::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`9f37d808-9ad2-4c06-99ef-449b43049e80`::Granted request
Thread-32389::DEBUG::2013-09-20 13:16:34,221::task::811::TaskManager.Task::(resourceAcquired) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::_resourcesAcquired: Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11 (shared)
Thread-32389::DEBUG::2013-09-20 13:16:34,221::task::974::TaskManager.Task::(_decref) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::ref 1 aborting False
Thread-32389::INFO::2013-09-20 13:16:34,223::logUtils::47::dispatcher::(wrapper) Run and protect: getStorageDomainStats, Return response: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '14182986809344', 'disktotal': '14199600185344', 'mdafree': 0}}
Thread-32389::DEBUG::2013-09-20 13:16:34,223::task::1168::TaskManager.Task::(prepare) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::finished: {'stats': {'mdasize': 0, 'mdathreshold': True, 'mdavalid': True, 'diskfree': '14182986809344', 'disktotal': '14199600185344', 'mdafree': 0}}
Thread-32389::DEBUG::2013-09-20 13:16:34,223::task::579::TaskManager.Task::(_updateState) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::moving from state preparing -> state finished
Thread-32389::DEBUG::2013-09-20 13:16:34,223::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11': <ResourceRef 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', isValid: 'True' obj: 'None'>}
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11'
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' (0 active users)
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free, finding out if anyone is waiting for it.
Thread-32389::DEBUG::2013-09-20 13:16:34,224::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', Clearing records.
Thread-32389::DEBUG::2013-09-20 13:16:34,225::task::974::TaskManager.Task::(_decref) Task=`a0d5d4d6-dcb7-4293-bf8a-cf1e2204f586`::ref 0 aborting False
Thread-32390::DEBUG::2013-09-20 13:16:35,099::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32390::DEBUG::2013-09-20 13:16:35,099::task::579::TaskManager.Task::(_updateState) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::moving from state init -> state preparing
Thread-32390::INFO::2013-09-20 13:16:35,099::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}], options=None)
Thread-32390::DEBUG::2013-09-20 13:16:35,105::hsm::2333::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-32390::DEBUG::2013-09-20 13:16:35,112::hsm::2345::Storage.HSM::(__prefetchDomains) Found SD uuids: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', '17d21ac7-5859-4f25-8de7-2a9433d50c11')
Thread-32390::DEBUG::2013-09-20 13:16:35,113::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {b72b61d1-e11c-496d-ad3a-6f566a1f0ad1: storage.glusterSD.findDomain, 983c4aa1-7b00-4d3b-b6ad-1fd2cf9297ce: storage.glusterSD.findDomain, b91afb39-f96e-4eb3-bc6c-9f08fa16869c: storage.glusterSD.findDomain, 17d21ac7-5859-4f25-8de7-2a9433d50c11: storage.glusterSD.findDomain}
Thread-32390::INFO::2013-09-20 13:16:35,113::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}]}
Thread-32390::DEBUG::2013-09-20 13:16:35,113::task::1168::TaskManager.Task::(prepare) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::finished: {'statuslist': [{'status': 0, 'id': 'cecee482-87e1-4ecc-8bda-0e0ec84d7792'}]}
Thread-32390::DEBUG::2013-09-20 13:16:35,113::task::579::TaskManager.Task::(_updateState) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::moving from state preparing -> state finished
Thread-32390::DEBUG::2013-09-20 13:16:35,113::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-32390::DEBUG::2013-09-20 13:16:35,114::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32390::DEBUG::2013-09-20 13:16:35,114::task::974::TaskManager.Task::(_decref) Task=`089c5f71-9cbb-4626-8f2d-3ed3547a98cd`::ref 0 aborting False
Thread-32393::DEBUG::2013-09-20 13:16:35,148::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-32393::DEBUG::2013-09-20 13:16:35,148::task::579::TaskManager.Task::(_updateState) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state init -> state preparing
Thread-32393::INFO::2013-09-20 13:16:35,148::logUtils::44::dispatcher::(wrapper) Run and protect: createStoragePool(poolType=None, spUUID='5849b030-626e-47cb-ad90-3ce782d831b3', poolName='Default', masterDom='17d21ac7-5859-4f25-8de7-2a9433d50c11', domList=['17d21ac7-5859-4f25-8de7-2a9433d50c11'], masterVersion=9, lockPolicy=None, lockRenewalIntervalSec=5, leaseTimeSec=60, ioOpTimeoutSec=10, leaseRetries=3, options=None)
Thread-32393::INFO::2013-09-20 13:16:35,149::fileSD::315::Storage.StorageDomain::(validate) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32393::DEBUG::2013-09-20 13:16:35,161::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`31060ad0-6633-4bbf-a859-b3f0c27af760`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '954' at 'createStoragePool'
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' for lock type 'exclusive'
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free. Now locking as 'exclusive' (1 active user)
Thread-32393::DEBUG::2013-09-20 13:16:35,162::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.5849b030-626e-47cb-ad90-3ce782d831b3`ReqID=`31060ad0-6633-4bbf-a859-b3f0c27af760`::Granted request
Thread-32393::DEBUG::2013-09-20 13:16:35,163::task::811::TaskManager.Task::(resourceAcquired) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_resourcesAcquired: Storage.5849b030-626e-47cb-ad90-3ce782d831b3 (exclusive)
Thread-32393::DEBUG::2013-09-20 13:16:35,163::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting False
Thread-32393::DEBUG::2013-09-20 13:16:35,163::resourceManager::197::ResourceManager.Request::(__init__) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`db5f52a0-d455-419c-b8a5-86fc6b695571`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '956' at 'createStoragePool'
Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::541::ResourceManager::(registerResource) Trying to register resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' for lock type 'exclusive'
Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::600::ResourceManager::(registerResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free. Now locking as 'exclusive' (1 active user)
Thread-32393::DEBUG::2013-09-20 13:16:35,164::resourceManager::237::ResourceManager.Request::(grant) ResName=`Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11`ReqID=`db5f52a0-d455-419c-b8a5-86fc6b695571`::Granted request
Thread-32393::DEBUG::2013-09-20 13:16:35,165::task::811::TaskManager.Task::(resourceAcquired) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_resourcesAcquired: Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11 (exclusive)
Thread-32393::DEBUG::2013-09-20 13:16:35,165::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting False
Thread-32393::INFO::2013-09-20 13:16:35,166::sp::592::Storage.StoragePool::(create) spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 poolName=Default master_sd=17d21ac7-5859-4f25-8de7-2a9433d50c11 domList=['17d21ac7-5859-4f25-8de7-2a9433d50c11'] masterVersion=9 {'LEASETIMESEC': 60, 'IOOPTIMEOUTSEC': 10, 'LEASERETRIES': 3, 'LOCKRENEWALINTERVALSEC': 5}
Thread-32393::INFO::2013-09-20 13:16:35,166::fileSD::315::Storage.StorageDomain::(validate) sdUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11
Thread-32393::DEBUG::2013-09-20 13:16:35,177::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::DEBUG::2013-09-20 13:16:35,188::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=2b07fbc8c65e20eef5180ab785016bde543c6746']
Thread-32393::WARNING::2013-09-20 13:16:35,189::fileUtils::167::Storage.fileUtils::(createdir) Dir /rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3 already exists
Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::167::Storage.PersistentDict::(transaction) Starting transaction
Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::173::Storage.PersistentDict::(transaction) Flushing changes
Thread-32393::DEBUG::2013-09-20 13:16:35,189::persistentDict::299::Storage.PersistentDict::(flush) about to write lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=rep2-virt', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'POOL_UUID=', 'REMOTE_PATH=192.168.1.1:rep2-virt', 'ROLE=Regular', 'SDUUID=17d21ac7-5859-4f25-8de7-2a9433d50c11', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=61b814a65ea3ede1f0ae1d58e139adc06bf9eda5']
Thread-32393::DEBUG::2013-09-20 13:16:35,194::persistentDict::175::Storage.PersistentDict::(transaction) Finished transaction
Thread-32393::INFO::2013-09-20 13:16:35,194::clusterlock::174::SANLock::(acquireHostId) Acquiring host id for domain 17d21ac7-5859-4f25-8de7-2a9433d50c11 (id: 250)
Thread-32393::ERROR::2013-09-20 13:16:36,196::task::850::TaskManager.Task::(_setError) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 857, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 960, in createStoragePool
    masterVersion, leaseParams)
  File "/usr/share/vdsm/storage/sp.py", line 617, in create
    self._acquireTemporaryClusterLock(msdUUID, leaseParams)
  File "/usr/share/vdsm/storage/sp.py", line 559, in _acquireTemporaryClusterLock
    msd.acquireHostId(self.id)
  File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId
    self._clusterLock.acquireHostId(hostId, async)
  File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId
    raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))
Thread-32393::DEBUG::2013-09-20 13:16:36,196::task::869::TaskManager.Task::(_run) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Task._run: 72348d40-8442-4dbf-bc66-1d354da5fc31 (None, '5849b030-626e-47cb-ad90-3ce782d831b3', 'Default', '17d21ac7-5859-4f25-8de7-2a9433d50c11', ['17d21ac7-5859-4f25-8de7-2a9433d50c11'], 9, None, 5, 60, 10, 3) {} failed - stopping task
Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::1194::TaskManager.Task::(stop) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::stopping in state preparing (force False)
Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 1 aborting True
Thread-32393::INFO::2013-09-20 13:16:36,197::task::1151::TaskManager.Task::(prepare) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::aborting: Task is aborted: 'Cannot acquire host id' - code 661
Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::1156::TaskManager.Task::(prepare) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Prepare: aborted: Cannot acquire host id
Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::974::TaskManager.Task::(_decref) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::ref 0 aborting True
Thread-32393::DEBUG::2013-09-20 13:16:36,197::task::909::TaskManager.Task::(_doAbort) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::Task._doAbort: force False
Thread-32393::DEBUG::2013-09-20 13:16:36,198::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32393::DEBUG::2013-09-20 13:16:36,198::task::579::TaskManager.Task::(_updateState) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state preparing -> state aborting
Thread-32393::DEBUG::2013-09-20 13:16:36,198::task::534::TaskManager.Task::(__state_aborting) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::_aborting: recover policy none
Thread-32393::DEBUG::2013-09-20 13:16:36,198::task::579::TaskManager.Task::(_updateState) Task=`72348d40-8442-4dbf-bc66-1d354da5fc31`::moving from state aborting -> state failed
Thread-32393::DEBUG::2013-09-20 13:16:36,198::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11': <ResourceRef 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', isValid: 'True' obj: 'None'>, 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3': <ResourceRef 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', isValid: 'True' obj: 'None'>}
Thread-32393::DEBUG::2013-09-20 13:16:36,198::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11'
Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' (0 active users)
Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11' is free, finding out if anyone is waiting for it.
Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.17d21ac7-5859-4f25-8de7-2a9433d50c11', Clearing records.
Thread-32393::DEBUG::2013-09-20 13:16:36,199::resourceManager::615::ResourceManager::(releaseResource) Trying to release resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3'
Thread-32393::DEBUG::2013-09-20 13:16:36,200::resourceManager::634::ResourceManager::(releaseResource) Released resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' (0 active users)
Thread-32393::DEBUG::2013-09-20 13:16:36,200::resourceManager::640::ResourceManager::(releaseResource) Resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3' is free, finding out if anyone is waiting for it.
Thread-32393::DEBUG::2013-09-20 13:16:36,200::resourceManager::648::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.5849b030-626e-47cb-ad90-3ce782d831b3', Clearing records.
Thread-32393::ERROR::2013-09-20 13:16:36,200::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': "Cannot acquire host id: ('17d21ac7-5859-4f25-8de7-2a9433d50c11', SanlockException(22, 'Sanlock lockspace add failure', 'Invalid argument'))", 'code': 661}}
Thread-32398::DEBUG::2013-09-20 13:16:48,921::task::579::TaskManager.Task::(_updateState) Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::moving from state init -> state preparing
Thread-32398::INFO::2013-09-20 13:16:48,922::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-32398::INFO::2013-09-20 13:16:48,922::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-32398::DEBUG::2013-09-20 13:16:48,922::task::1168::TaskManager.Task::(prepare) Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::finished: {}
Thread-32398::DEBUG::2013-09-20 13:16:48,922::task::579::TaskManager.Task::(_updateState) Task=`a5bce432-622b-499b-a216-d9a1f876e3ca`::moving from state preparing -> state finished
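For anyone hitting the same trace: the two sanlock errors above (errno 22, 'Invalid argument', first on init_lockspace and then on the lockspace add) indicate that sanlock could not use the lockspace file the domain keeps under dom_md/ on the gluster mount. A quick check on the host, using the mount path and sdUUID from these logs (both are specific to this setup, so adjust for yours):

ls -ln /rhev/data-center/mnt/glusterSD/192.168.1.1:rep2-virt/17d21ac7-5859-4f25-8de7-2a9433d50c11/dom_md/
# The 'ids' and 'leases' files should be owned by 36:36 (vdsm:kvm). If they
# show up as root:root, or writes to them fail, sanlock cannot initialize the
# lockspace, which often surfaces as the 'Invalid argument' error above.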
Steve Dainard
Infrastructure Manager
Miovision | Rethink Traffic
519-513-2407 ex.250
877-646-8476 (toll-free)

Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener, ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential. If you are not the intended recipient, please delete the e-mail and any attachments and notify us immediately.
On Fri, Sep 20, 2013 at 12:23 PM, Deepak C Shetty <deepakcs@linux.vnet.ibm.com> wrote:
Either you can use the 'volume set' option as mentioned in the wiki page
-- or --
If the Gluster volume is added to / managed by the oVirt UI, go to the "Volumes" tab, select the gluster volume and
click on "Optimize for virt. store". That should also set the volume options (see the sketch below), in addition to a few other things.
thanx,
deepak
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
participants (6)
- Alexander Wels
- Allon Mureinik
- Deepak C Shetty
- Gianluca Cecchi
- Itamar Heim
- Steve Dainard