[Users] Issue adding a gluster storage domain.

Yair Zaslavsky yzaslavs at redhat.com
Mon Oct 29 06:46:02 UTC 2012


Looks like the "vers" option is not recognized by
connectStorageServer.
From browsing the engine code, I see that we potentially send
"protocol_version" (on an NFS storage domain; it should not be sent on
PosixFS), but not "vers".
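
A minimal sketch of the difference, based on the conList entries
visible in the vdsm.log below (the dicts are illustrative, not the
actual engine code):

    # NFS storage domain: the engine sends a dedicated
    # "protocol_version" field, and VDSM itself turns that into the
    # appropriate NFS mount option.
    nfs_con = {
        'connection': 'server:/export',   # hypothetical example
        'protocol_version': 3,
    }

    # PosixFS (here glusterfs): whatever is typed into the mount
    # options field is passed through verbatim as "mnt_options", so
    # "vers=3" reaches the glusterfs mount helper, which does not
    # recognize "vers".
    posix_con = {
        'connection': 'localhost:/bsdvol1',
        'vfs_type': 'glusterfs',
        'mnt_options': 'vers=3',
    }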

----- Original Message -----
> From: "Daniel Rowe" <daniel.fathom13 at gmail.com>
> To: users at ovirt.org
> Sent: Monday, October 29, 2012 8:16:34 AM
> Subject: [Users] Issue adding a gluster storage domain.
> 
> Hi
> 
> I can't seem to get a gluster storage domain added. I am using
> Fedora 17 on both the nodes and the management machine. I have the
> gluster volumes showing in oVirt, and I can manually mount the
> gluster volume both locally on the node and on the management
> machine.
> 
> If I manually mount the volume on the node as root with
> /usr/bin/mount -t glusterfs -o vers=3 localhost:/bsdvol1
> /rhev/data-center/mnt/localhost:_bsdvol1 and then add the domain
> in the web interface, it works.
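> 
> For comparison, the command vdsm runs (visible in the vdsm.log
> below) is essentially the same mount wrapped in sudo:
> 
>     /usr/bin/sudo -n /usr/bin/mount -t glusterfs -o vers=3 \
>         localhost:/bsdvol1 /rhev/data-center/mnt/localhost:_bsdvol1
> 
> yet that one fails with "unknown option vers (ignored)" plus a
> permission error on the glusterfs log file.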
> 
> Although I have the domain working via the manual mount, I am
> wondering what is going on.
> 
> Thread-1934::INFO::2012-10-29
> 13:27:09,793::logUtils::37::dispatcher::(wrapper) Run and protect:
> validateStorageServerConnection(domType=6,
> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
> 'connection': 'localhost:/bsdvol1', 'iqn': '', 'portal': '', 'user':
> '', 'vfs_type': 'glusterfs', 'password': '******', 'id':
> '00000000-0000-0000-0000-000000000000'}], options=None)
> Thread-1934::INFO::2012-10-29
> 13:27:09,793::logUtils::39::dispatcher::(wrapper) Run and protect:
> validateStorageServerConnection, Return response: {'statuslist':
> [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
> Thread-1934::DEBUG::2012-10-29
> 13:27:09,793::task::1172::TaskManager.Task::(prepare)
> Task=`4f9131eb-45cd-4e82-bc35-e29ed14f0818`::finished: {'statuslist':
> [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
> Thread-1934::DEBUG::2012-10-29
> 13:27:09,793::task::588::TaskManager.Task::(_updateState)
> Task=`4f9131eb-45cd-4e82-bc35-e29ed14f0818`::moving from state
> preparing -> state finished
> Thread-1934::DEBUG::2012-10-29
> 13:27:09,794::resourceManager::809::ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> Thread-1934::DEBUG::2012-10-29
> 13:27:09,794::resourceManager::844::ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-1934::DEBUG::2012-10-29
> 13:27:09,794::task::978::TaskManager.Task::(_decref)
> Task=`4f9131eb-45cd-4e82-bc35-e29ed14f0818`::ref 0 aborting False
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,842::BindingXMLRPC::156::vds::(wrapper) [192.168.1.10]
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,842::task::588::TaskManager.Task::(_updateState)
> Task=`ef920d52-3082-495f-bfbc-2eb508c3de18`::moving from state init
> -> state preparing
> Thread-1935::INFO::2012-10-29
> 13:27:09,843::logUtils::37::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=6,
> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
> 'connection': 'localhost:/bsdvol1', 'mnt_options': 'vers=3',
> 'portal': '', 'user': '', 'iqn': '', 'vfs_type': 'glusterfs',
> 'password': '******', 'id': 'bbe1bbc5-62b8-4115-b409-6ddea910a688'}],
> options=None)
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,851::__init__::1249::Storage.Misc.excCmd::(_log)
> '/usr/bin/sudo -n /usr/bin/mount -t glusterfs -o vers=3
> localhost:/bsdvol1 /rhev/data-center/mnt/localhost:_bsdvol1' (cwd None)
> Thread-1935::ERROR::2012-10-29
> 13:27:09,930::hsm::1932::Storage.HSM::(connectStorageServer) Could
> not connect to storageServer
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/hsm.py", line 1929, in
>   connectStorageServer
>     conObj.connect()
>   File "/usr/share/vdsm/storage/storageServer.py", line 179, in
>   connect
>     self._mount.mount(self.options, self._vfsType)
>   File "/usr/share/vdsm/storage/mount.py", line 190, in mount
>     return self._runcmd(cmd, timeout)
>   File "/usr/share/vdsm/storage/mount.py", line 206, in _runcmd
>     raise MountError(rc, ";".join((out, err)))
> MountError: (1, 'unknown option vers (ignored)\nMount failed. Please
> check the log file for more details.\n;ERROR: failed to create
> logfile "/var/log/glusterfs/rhev-data-center-mnt-localhost:_bsdvol1.log"
> (Permission denied)\nERROR: failed to open logfile
> /var/log/glusterfs/rhev-data-center-mnt-localhost:_bsdvol1.log\n')
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,931::lvm::457::OperationMutex::(_invalidateAllPvs) Operation
> 'lvm invalidate operation' got the operation mutex
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,932::lvm::459::OperationMutex::(_invalidateAllPvs) Operation
> 'lvm invalidate operation' released the operation mutex
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,932::lvm::469::OperationMutex::(_invalidateAllVgs) Operation
> 'lvm invalidate operation' got the operation mutex
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,932::lvm::471::OperationMutex::(_invalidateAllVgs) Operation
> 'lvm invalidate operation' released the operation mutex
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,933::lvm::490::OperationMutex::(_invalidateAllLvs) Operation
> 'lvm invalidate operation' got the operation mutex
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,933::lvm::492::OperationMutex::(_invalidateAllLvs) Operation
> 'lvm invalidate operation' released the operation mutex
> Thread-1935::INFO::2012-10-29
> 13:27:09,934::logUtils::39::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 477,
> 'id': 'bbe1bbc5-62b8-4115-b409-6ddea910a688'}]}
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,934::task::1172::TaskManager.Task::(prepare)
> Task=`ef920d52-3082-495f-bfbc-2eb508c3de18`::finished: {'statuslist':
> [{'status': 477, 'id': 'bbe1bbc5-62b8-4115-b409-6ddea910a688'}]}
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,934::task::588::TaskManager.Task::(_updateState)
> Task=`ef920d52-3082-495f-bfbc-2eb508c3de18`::moving from state
> preparing -> state finished
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,934::resourceManager::809::ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,935::resourceManager::844::ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-1935::DEBUG::2012-10-29
> 13:27:09,935::task::978::TaskManager.Task::(_decref)
> Task=`ef920d52-3082-495f-bfbc-2eb508c3de18`::ref 0 aborting False
> Thread-1937::DEBUG::2012-10-29
> 13:27:10,073::BindingXMLRPC::156::vds::(wrapper) [192.168.1.10]
> Thread-1937::DEBUG::2012-10-29
> 13:27:10,073::task::588::TaskManager.Task::(_updateState)
> Task=`322f7d1b-0e8d-458b-baf2-5c7f8c5f4303`::moving from state init
> -> state preparing
> Thread-1937::INFO::2012-10-29
> 13:27:10,073::logUtils::37::dispatcher::(wrapper) Run and protect:
> createStorageDomain(storageType=6,
> sdUUID='d8591d07-1611-46cc-9122-fc7badea0f4c',
> domainName='DCS_Bis_Sys_gluster',
> typeSpecificArg='localhost:/bsdvol1', domClass=1, domVersion='0',
> options=None)
> Thread-1937::DEBUG::2012-10-29
> 13:27:10,074::misc::1053::SamplingMethod::(__call__) Trying to enter
> sampling method (storage.sdc.refreshStorage)
> Thread-1937::DEBUG::2012-10-29
> 13:27:10,074::misc::1055::SamplingMethod::(__call__) Got in to
> sampling method
> Thread-1937::DEBUG::2012-10-29
> 13:27:10,074::misc::1053::SamplingMethod::(__call__) Trying to enter
> sampling method (storage.iscsi.rescan)
> Thread-1937::DEBUG::2012-10-29
> 13:27:10,074::misc::1055::SamplingMethod::(__call__) Got in to
> sampling method
> Thread-1937::DEBUG::2012-10-29
> 13:27:10,075::__init__::1249::Storage.Misc.excCmd::(_log)
> '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
> Thread-1937::DEBUG::2012-10-29
> 13:27:10,092::__init__::1249::Storage.Misc.excCmd::(_log) FAILED:
> <err> = 'iscsiadm: No session found.\n'; <rc> = 21
> Thread-1937::DEBUG::2012-10-29
> 13:27:10,092::misc::1063::SamplingMethod::(__call__) Returning last
> result
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,382::__init__::1249::Storage.Misc.excCmd::(_log)
> '/usr/bin/sudo -n /sbin/multipath' (cwd None)
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,414::__init__::1249::Storage.Misc.excCmd::(_log) SUCCESS:
> <err> = ''; <rc> = 0
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,415::lvm::457::OperationMutex::(_invalidateAllPvs) Operation
> 'lvm invalidate operation' got the operation mutex
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,415::lvm::459::OperationMutex::(_invalidateAllPvs) Operation
> 'lvm invalidate operation' released the operation mutex
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,415::lvm::469::OperationMutex::(_invalidateAllVgs) Operation
> 'lvm invalidate operation' got the operation mutex
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,416::lvm::471::OperationMutex::(_invalidateAllVgs) Operation
> 'lvm invalidate operation' released the operation mutex
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,416::lvm::490::OperationMutex::(_invalidateAllLvs) Operation
> 'lvm invalidate operation' got the operation mutex
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,416::lvm::492::OperationMutex::(_invalidateAllLvs) Operation
> 'lvm invalidate operation' released the operation mutex
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,416::misc::1063::SamplingMethod::(__call__) Returning last
> result
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,417::lvm::349::OperationMutex::(_reloadvgs) Operation 'lvm
> reload operation' got the operation mutex
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,419::__init__::1249::Storage.Misc.excCmd::(_log)
> '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names
> = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%36848f690e0b7860017e62e4257089a6e|36848f690e0b78600181b77200a8142b8%\\",
> \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1
> wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } "
> --noheadings --units b --nosuffix --separator | -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
> d8591d07-1611-46cc-9122-fc7badea0f4c' (cwd None)
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,536::__init__::1249::Storage.Misc.excCmd::(_log) FAILED:
> <err> = '  Volume group "d8591d07-1611-46cc-9122-fc7badea0f4c" not
> found\n'; <rc> = 5
> Thread-1937::WARNING::2012-10-29
> 13:27:15,538::lvm::353::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 []
> ['  Volume group "d8591d07-1611-46cc-9122-fc7badea0f4c" not found']
> Thread-1937::DEBUG::2012-10-29
> 13:27:15,538::lvm::376::OperationMutex::(_reloadvgs) Operation 'lvm
> reload operation' released the operation mutex
> Thread-1937::INFO::2012-10-29
> 13:27:15,542::nfsSD::64::Storage.StorageDomain::(create)
> sdUUID=d8591d07-1611-46cc-9122-fc7badea0f4c
> domainName=DCS_Bis_Sys_gluster remotePath=localhost:/bsdvol1
> domClass=1
> Thread-1937::ERROR::2012-10-29
> 13:27:15,550::task::853::TaskManager.Task::(_setError)
> Task=`322f7d1b-0e8d-458b-baf2-5c7f8c5f4303`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 861, in _run
>     return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 2136, in
>   createStorageDomain
>     typeSpecificArg, storageType, domVersion)
>   File "/usr/share/vdsm/storage/nfsSD.py", line 75, in create
>     cls._preCreateValidation(sdUUID, mntPoint, remotePath, version)
>   File "/usr/share/vdsm/storage/nfsSD.py", line 44, in
>   _preCreateValidation
>     raise se.StorageDomainFSNotMounted(domPath)
> StorageDomainFSNotMounted: Storage domain remote path not mounted:
> ('/rhev/data-center/mnt/localhost:_bsdvol1',)
> 
> [root@bsdvmhvnode01 mnt]# cat
> /var/log/glusterfs/rhev-data-center-mnt-localhost:_bsdvol1.log
> [2012-10-29 13:27:42.311022] I [glusterfsd.c:1666:main]
> 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version
> 3.3.1
> [2012-10-29 13:27:42.328054] I
> [io-cache.c:1549:check_cache_size_ok]
> 0-bsdvol1-quick-read: Max cache size is 50636935168
> [2012-10-29 13:27:42.328157] I [io-cache.c:1549:check_cache_size_ok]
> 0-bsdvol1-io-cache: Max cache size is 50636935168
> [2012-10-29 13:27:42.329037] I [client.c:2142:notify]
> 0-bsdvol1-client-0: parent translators are ready, attempting connect
> on transport
> Given volfile:
> +------------------------------------------------------------------------------+
>   1: volume bsdvol1-client-0
>   2:     type protocol/client
>   3:     option remote-host 192.168.1.11
>   4:     option remote-subvolume /data/bsdvol1
>   5:     option transport-type tcp
>   6:     option username ed9b899c-26c1-4d8b-a2f8-ffd2d0b10fed
>   7:     option password 37a8f9ed-e27c-41cb-a049-da6a65042920
>   8: end-volume
>   9:
>  10: volume bsdvol1-write-behind
>  11:     type performance/write-behind
>  12:     subvolumes bsdvol1-client-0
>  13: end-volume
>  14:
>  15: volume bsdvol1-read-ahead
>  16:     type performance/read-ahead
>  17:     subvolumes bsdvol1-write-behind
>  18: end-volume
>  19:
>  20: volume bsdvol1-io-cache
>  21:     type performance/io-cache
>  22:     subvolumes bsdvol1-read-ahead
>  23: end-volume
>  24:
>  25: volume bsdvol1-quick-read
>  26:     type performance/quick-read
>  27:     subvolumes bsdvol1-io-cache
>  28: end-volume
>  29:
>  30: volume bsdvol1-md-cache
>  31:     type performance/md-cache
>  32:     subvolumes bsdvol1-quick-read
>  33: end-volume
>  34:
>  35: volume bsdvol1
>  36:     type debug/io-stats
>  37:     option latency-measurement off
>  38:     option count-fop-hits off
>  39:     subvolumes bsdvol1-md-cache
>  40: end-volume
> 
> +------------------------------------------------------------------------------+
> [2012-10-29 13:27:42.334438] I [rpc-clnt.c:1657:rpc_clnt_reconfig]
> 0-bsdvol1-client-0: changing port to 24009 (from 0)
> [2012-10-29 13:27:46.331768] I
> [client-handshake.c:1636:select_server_supported_programs]
> 0-bsdvol1-client-0: Using Program GlusterFS 3.3.1, Num (1298437),
> Version (330)
> [2012-10-29 13:27:46.332216] I
> [client-handshake.c:1433:client_setvolume_cbk] 0-bsdvol1-client-0:
> Connected to 192.168.1.11:24009, attached to remote volume
> '/data/bsdvol1'.
> [2012-10-29 13:27:46.332249] I
> [client-handshake.c:1445:client_setvolume_cbk] 0-bsdvol1-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
> [2012-10-29 13:27:46.337041] I [fuse-bridge.c:4191:fuse_graph_setup]
> 0-fuse: switched to graph 0
> [2012-10-29 13:27:46.337157] I
> [client-handshake.c:453:client_set_lk_version_cbk]
> 0-bsdvol1-client-0: Server lk version = 1
> [2012-10-29 13:27:46.337466] I [fuse-bridge.c:3376:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13
> kernel 7.20