[Users] Ovirt 3.3 Fedora 19 add gluster storage permissions error

Gianluca Cecchi gianluca.cecchi at gmail.com
Thu Sep 19 17:22:08 UTC 2013


On 19 Sep 2013 at 19:10, "Alexander Wels" <awels at redhat.com> wrote:
>
> Steve,
>
> Having just installed gluster on my local hosts, I am seeing the exact same
error in my setup. I am going to assume the following are true:
>
> 1. You made a partition just for gluster.
>
> 2. You followed the "oVirt 3.3, Glusterized" article from Jason Brooks.
>
>
>
> I got the exact same error because, for some reason, the owner of the
directory I put the gluster bricks in keeps changing back to root instead of
kvm:kvm. That happens each time I reboot my host, so I am assuming I
didn't set something up correctly. But you can solve it by chowning the
directory, and everything will work again.
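>
> Something like the following should do it (the brick path /gluster/brick1
below is just a placeholder for wherever your bricks live; uid/gid 36:36 is
vdsm:kvm on oVirt hosts):
>
> # give the brick directory back to vdsm:kvm (uid 36, gid 36)
> chown -R 36:36 /gluster/brick1
> # verify the owner now shows as vdsm:kvm
> ls -ld /gluster/brick1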
>
> If that doesn't help, I don't know what else to suggest; I just started
using it myself and happen to have seen the same error at some point.
>
> Alexander
>
> On Thursday, September 19, 2013 12:26:52 PM Steve Dainard wrote:
>
> Hello,
>
>
> New Ovirt 3.3 install on Fedora 19.
>
>
> When I try to add a gluster storage domain I get the following:
>
>
> UI error:
>
> Error while executing action Add Storage Connection: Permission settings
on the specified path do not allow access to the storage.
>
> Verify permission settings on the specified storage path.
>
>
> VDSM logs contain:
>
> Thread-393::DEBUG::2013-09-19
11:59:42,399::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
>
> Thread-393::DEBUG::2013-09-19
11:59:42,399::task::579::TaskManager.Task::(_updateState)
Task=`12c38fec-0072-4974-a8e3-9125b3908246`::moving from state init ->
state preparing
>
> Thread-393::INFO::2013-09-19
11:59:42,400::logUtils::44::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
'connection': '192.168.1.1:/rep2-virt', 'iqn': '', 'portal': '', 'user':
'', 'vfs_type': 'glusterfs', 'password': '******', 'id':
'00000000-0000-0000-0000-000000000000'}], options=None)
>
> Thread-393::DEBUG::2013-09-19
11:59:42,405::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n
/usr/bin/mount -t glusterfs 192.168.1.1:/rep2-virt
/rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt' (cwd None)
>
> Thread-393::DEBUG::2013-09-19
11:59:42,490::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n
/usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt'
(cwd None)
>
> Thread-393::ERROR::2013-09-19
11:59:42,505::hsm::2382::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
>
> Traceback (most recent call last):
>
>   File "/usr/share/vdsm/storage/hsm.py", line 2379, in
connectStorageServer
>
>     conObj.connect()
>
>   File "/usr/share/vdsm/storage/storageServer.py", line 227, in connect
>
>     raise e
>
> StorageServerAccessPermissionError: Permission settings on the specified
path do not allow access to the storage. Verify permission settings on the
specified storage path.: 'path =
/rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt'
>
> Thread-393::DEBUG::2013-09-19
11:59:42,506::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {}
>
> Thread-393::INFO::2013-09-19
11:59:42,506::logUtils::47::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 469,
'id': '00000000-0000-0000-0000-000000000000'}]}
>
> Thread-393::DEBUG::2013-09-19
11:59:42,506::task::1168::TaskManager.Task::(prepare)
Task=`12c38fec-0072-4974-a8e3-9125b3908246`::finished: {'statuslist':
[{'status': 469, 'id': '00000000-0000-0000-0000-000000000000'}]}
>
> Thread-393::DEBUG::2013-09-19
11:59:42,506::task::579::TaskManager.Task::(_updateState)
Task=`12c38fec-0072-4974-a8e3-9125b3908246`::moving from state preparing ->
state finished
>
> Thread-393::DEBUG::2013-09-19
11:59:42,506::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
>
> Thread-393::DEBUG::2013-09-19
11:59:42,507::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
>
> Thread-393::DEBUG::2013-09-19
11:59:42,507::task::974::TaskManager.Task::(_decref)
Task=`12c38fec-0072-4974-a8e3-9125b3908246`::ref 0 aborting False
>
>
> Other info:
>
> - I have two nodes, ovirt001 and ovirt002; they are both Fedora 19.
>
> - The gluster bricks are replicated and located on the nodes
(ovirt001:rep2-virt, ovirt002:rep2-virt).
>
> - Local directory for the mount: I changed permissions on glusterSD from
755 to 777, and there is nothing in that directory:
>
> [root at ovirt001 mnt]# pwd
>
> /rhev/data-center/mnt
>
> [root at ovirt001 mnt]# ll
>
> total 4
>
> drwxrwxrwx. 2 vdsm kvm 4096 Sep 19 12:18 glusterSD
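>
> (To check whether vdsm itself can reach the path, one idea would be to
re-mount the volume and list the mount point as the vdsm user, e.g.:
>
> sudo -u vdsm ls -ld /rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt
>
> the path is taken from the vdsm log above.)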
>
>
> I find it odd that the UUIDs listed in the vdsm logs are all zeros.
>
>
> Appreciate any help,
>
> Steve
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
I had a similar problem configuring gluster for OpenStack and found the
solution on this oVirt page:

http://www.ovirt.org/Features/GlusterFS_Storage_Domain#Setting_up_a_GlusterFS_storage_volume_for_using_it_as_a_storage_domain

In particular:

If the GlusterFS volume was created manually, then ensure the below options
are set on the volume, so that it is accessible from oVirt:

volume set <volname> storage.owner-uid=36
volume set <volname> storage.owner-gid=36
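
On one of the gluster nodes that would be something like the following for
your rep2-virt volume (36 is the vdsm uid and the kvm gid on oVirt hosts):

gluster volume set rep2-virt storage.owner-uid 36
gluster volume set rep2-virt storage.owner-gid 36
# both options should now appear under 'Options Reconfigured'
gluster volume info rep2-virt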

Check that, because otherwise each time a node mounts the gluster volume the
permissions will be reset to root.
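
To verify, you can mount the volume by hand somewhere (for example a scratch
mount point like /mnt/tmp) and check what ownership shows through the mount:

mount -t glusterfs 192.168.1.1:/rep2-virt /mnt/tmp
ls -ld /mnt/tmp    # should show vdsm:kvm (36:36), not root:root
umount /mnt/tmp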
Hope it helps,
Gianluca