[Users] Ovirt 3.3 Fedora 19 add gluster storage permissions error

Alexander Wels awels at redhat.com
Thu Sep 19 17:10:17 UTC 2013


Steve,

Having just installed gluster on my local hosts, I am seeing the exact same error in my
setup. I am going to assume the following are true:

1. You made a partition just for gluster.
2. You followed the "oVirt 3.3, Glusterized" article from Jason Brooks.

I got the exact same error because, for some reason, the owner of the directory I put
the gluster bricks in keeps changing back to root instead of kvm:kvm. This happens each
time I reboot my host, so I assume I didn't set something up correctly.
But you can solve it by chowning the directory, and everything will work again.
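Roughly, the workaround looks like the snippet below. This is only a sketch: the brick path and volume name are placeholders for your own setup, 36:36 is the vdsm:kvm uid/gid pair on a stock oVirt host, and the storage.owner-uid/gid volume options are the usual GlusterFS way to keep the ownership from being reset.

```shell
#!/bin/sh
# Placeholders -- substitute your actual brick directory and volume name.
BRICK_DIR=/gluster/rep2-virt
VOLUME=rep2-virt

# Re-own the brick directory (36:36 is vdsm:kvm on a stock oVirt host).
[ -d "$BRICK_DIR" ] && chown -R 36:36 "$BRICK_DIR"

# To keep the ownership from reverting, GlusterFS can pin it on the
# volume itself (if the gluster CLI is available on this host):
if command -v gluster >/dev/null 2>&1; then
    gluster volume set "$VOLUME" storage.owner-uid 36
    gluster volume set "$VOLUME" storage.owner-gid 36
fi
```

Setting the owner on the volume (rather than only chowning the mount point) is what keeps the fix from being undone the next time the brick is remounted.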

If that doesn't help, I'm not sure what else to suggest; I just started using it myself
and happen to have seen the same error at some point.

Alexander

On Thursday, September 19, 2013 12:26:52 PM Steve Dainard wrote:


Hello,


New oVirt 3.3 install on Fedora 19.


When I try to add a gluster storage domain I get the following:


*UI error:*
Error while executing action Add Storage Connection: Permission settings on the
specified path do not allow access to the storage.
Verify permission settings on the specified storage path.


*VDSM logs contain:*
Thread-393::DEBUG::2013-09-19 11:59:42,399::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-393::DEBUG::2013-09-19 11:59:42,399::task::579::TaskManager.Task::(_updateState) Task=`12c38fec-0072-4974-a8e3-9125b3908246`::moving from state init -> state preparing
Thread-393::INFO::2013-09-19 11:59:42,400::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '', 'connection': '192.168.1.1:/rep2-virt', 'iqn': '', 'portal': '', 'user': '', 'vfs_type': 'glusterfs', 'password': '******', 'id': '00000000-0000-0000-0000-000000000000'}], options=None)
Thread-393::DEBUG::2013-09-19 11:59:42,405::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n /usr/bin/mount -t glusterfs 192.168.1.1:/rep2-virt /rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt' (cwd None)
Thread-393::DEBUG::2013-09-19 11:59:42,490::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n /usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt' (cwd None)
Thread-393::ERROR::2013-09-19 11:59:42,505::hsm::2382::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2379, in connectStorageServer
    conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 227, in connect
    raise e
StorageServerAccessPermissionError: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt'
Thread-393::DEBUG::2013-09-19 11:59:42,506::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {}
Thread-393::INFO::2013-09-19 11:59:42,506::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 469, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-393::DEBUG::2013-09-19 11:59:42,506::task::1168::TaskManager.Task::(prepare) Task=`12c38fec-0072-4974-a8e3-9125b3908246`::finished: {'statuslist': [{'status': 469, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-393::DEBUG::2013-09-19 11:59:42,506::task::579::TaskManager.Task::(_updateState) Task=`12c38fec-0072-4974-a8e3-9125b3908246`::moving from state preparing -> state finished
Thread-393::DEBUG::2013-09-19 11:59:42,506::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-393::DEBUG::2013-09-19 11:59:42,507::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-393::DEBUG::2013-09-19 11:59:42,507::task::974::TaskManager.Task::(_decref) Task=`12c38fec-0072-4974-a8e3-9125b3908246`::ref 0 aborting False


*Other info:*
- I have two nodes, ovirt001 and ovirt002; they are both Fedora 19.
- The gluster bricks are replicated and located on the nodes (ovirt001:rep2-virt, ovirt002:rep2-virt).
- Local directory for the mount: I changed permissions on glusterSD to 777 (it was 755), and there is nothing in that directory:
[root@ovirt001 mnt]# pwd
/rhev/data-center/mnt
[root@ovirt001 mnt]# ll
total 4
drwxrwxrwx. 2 vdsm kvm 4096 Sep 19 12:18 glusterSD
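For reference, a rough way to sanity-check the access VDSM needs here is sketched below. This is only an approximation of VDSM's own validation, assuming a stock install where VDSM runs as the vdsm user; the mount path is taken from the log above.

```shell
#!/bin/sh
# Check ownership/permissions of the glusterSD mount parent, then try to
# access it as the vdsm user (approximates VDSM's own access validation).
MNT=/rhev/data-center/mnt/glusterSD

if [ -d "$MNT" ]; then
    # Show owner, group, and octal mode of the directory.
    stat -c 'owner=%U group=%G mode=%a' "$MNT"
    # Can the vdsm user read, write, and traverse it?
    if sudo -n -u vdsm test -r "$MNT" -a -w "$MNT" -a -x "$MNT"; then
        echo "vdsm can access $MNT"
    else
        echo "vdsm CANNOT access $MNT -- fix ownership/permissions"
    fi
else
    echo "$MNT does not exist on this host"
fi
```

If the second check fails even though the listing above looks right, the problem is usually on a parent directory or on the brick itself rather than on the mount point.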


I find it odd that the UUIDs listed in the vdsm logs are all zeros.


Appreciate any help,




*Steve*



