[Users] cannot add gluster domain

Alex Leonhardt alex.tuxx at gmail.com
Wed Jan 23 08:21:34 UTC 2013


Hi all,

I'm not too familiar with Fedora and its services; can anyone help him?

Alex
 On Jan 23, 2013 5:02 AM, "T-Sinjon" <tscbj1989 at gmail.com> wrote:

> I have forced v3 in my /etc/nfsmount.conf and there is no firewall between
> the NFS server and the host.
>
> The only problem is that rpc.statd is not running. Could you tell me how I
> can start it, since rpcbind is not installed on oVirt Node 2.5.5-0.1?
>
> [root at ovirtnode1 ~]# systemctl status nfs-lock.service
> nfs-lock.service - NFS file locking service.
>   Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; enabled)
>   Active: failed (Result: exit-code) since Thu, 17 Jan 2013 09:41:45
> +0000; 5 days ago
>   CGroup: name=systemd:/system/nfs-lock.service
>
> Jan 17 09:41:45 localhost.localdomain rpc.statd[1385]: Version 1.2.6
> starting
> Jan 17 09:41:45 localhost.localdomain rpc.statd[1385]: Initializing NSM
> state
> [root at ovirtnode1 ~]# systemctl start nfs-lock.service
> Failed to issue method call: Unit rpcbind.service failed to load: No such
> file or directory. See system logs and 'systemctl status rpcbind.service'
> for details.
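>
> On a regular Fedora 17 host I would expect something along these lines to
> bring statd up (just a rough sketch of what I would try; it assumes rpcbind
> can actually be installed, which does not seem to be the case on the node
> image):
>
>   # rpm -q rpcbind                      (is the package there at all?)
>   # yum install rpcbind                 (on a normal Fedora host; likely not
>                                          possible on the node image)
>   # systemctl start rpcbind.service     (statd registers with rpcbind, so
>                                          rpcbind has to be up first)
>   # systemctl start nfs-lock.service    (this is the unit that runs rpc.statd)
>   # systemctl status nfs-lock.service   (confirm rpc.statd is now running)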
>
> On 22 Jan, 2013, at 6:14 PM, Alex Leonhardt <alex.tuxx at gmail.com> wrote:
>
> Hi, this looks like the error you're getting:
>
> MountError: (32, ";mount.nfs: rpc.statd is not running but is required for
> remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or
> start statd.\nmount.nfs: an incorrect mount option was specified\n")
>
> Are you running NFSv3 on that host? If yes, have you forced v3? Is
> rpc.statd running? Is the NFS server firewalling off the rpc.* ports?
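>
> You can check each of those from the node with something along these lines
> (a rough sketch only; my-gluster-ip is the placeholder host name from your
> logs and /mnt is just a throwaway test mount point):
>
>   # rpcinfo -p my-gluster-ip             (lists the RPC services the server
>                                           has registered: portmapper, mountd,
>                                           nfs, nlockmgr, status, ...)
>   # systemctl status nfs-lock.service    (rpc.statd runs under this unit on
>                                           Fedora 17)
>   # mount -t nfs -o vers=3 my-gluster-ip:/gvol02/GlusterDomain /mnt
>                                          (forces NFSv3 explicitly for a test
>                                           mount)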
>
> alex
>
>
> On 22 January 2013 09:58, T-Sinjon <tscbj1989 at gmail.com> wrote:
>
>> Hi, everyone:
>>         I recently did a fresh install of oVirt 3.1 from
>> http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/,
>>         and the node uses
>> http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1.fc17.iso
>>
>>         When I add a gluster domain via NFS, a mount error occurs.
>>         I tried the mount manually on the node, but it fails without the
>> -o nolock option:
>>         # /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6
>> my-gluster-ip:/gvol02/GlusterDomain
>> /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain
>>         mount.nfs: rpc.statd is not running but is required for remote
>> locking. mount.nfs: Either use '-o nolock' to keep locks local, or start
>> statd. mount.nfs: an incorrect mount option was specified
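>>
>>         For reference, the mount does go through when locking is kept
>> local, i.e. the same command with nolock added to the options (a workaround
>> only, since vdsm itself mounts without it, as the log below shows):
>>         # /usr/bin/mount -t nfs -o
>> soft,nosharecache,timeo=600,retrans=6,nolock
>> my-gluster-ip:/gvol02/GlusterDomain
>> /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain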
>>
>>         Below are the vdsm.log from the node and the engine.log; any help
>> is appreciated:
>>
>> vdsm.log
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,261::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,261::task::588::TaskManager.Task::(_updateState)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state init ->
>> state preparing
>> Thread-12717::INFO::2013-01-22
>> 09:19:02,262::logUtils::37::dispatcher::(wrapper) Run and protect:
>> validateStorageServerConnection(domType=1,
>> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
>> 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '',
>> 'password': '******', 'id': '00000000-0000-0000-0000-000000000000', 'port':
>> ''}], options=None)
>> Thread-12717::INFO::2013-01-22
>> 09:19:02,262::logUtils::39::dispatcher::(wrapper) Run and protect:
>> validateStorageServerConnection, Return response: {'statuslist':
>> [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::task::1172::TaskManager.Task::(prepare)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::finished: {'statuslist':
>> [{'status': 0, 'id': '00000000-0000-0000-0000-000000000000'}]}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::task::588::TaskManager.Task::(_updateState)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::moving from state preparing ->
>> state finished
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::resourceManager::809::ResourceManager.Owner::(releaseAll)
>> Owner.releaseAll requests {} resources {}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,262::resourceManager::844::ResourceManager.Owner::(cancelAll)
>> Owner.cancelAll requests {}
>> Thread-12717::DEBUG::2013-01-22
>> 09:19:02,263::task::978::TaskManager.Task::(_decref)
>> Task=`e1d331c6-e191-415f-bc1b-e5047d447494`::ref 0 aborting False
>> Thread-12718::DEBUG::2013-01-22
>> 09:19:02,307::BindingXMLRPC::156::vds::(wrapper) [my-engine-ip]
>> Thread-12718::DEBUG::2013-01-22
>> 09:19:02,307::task::588::TaskManager.Task::(_updateState)
>> Task=`c07a075a-a910-4bc3-9a33-b957d05ea270`::moving from state init ->
>> state preparing
>> Thread-12718::INFO::2013-01-22
>> 09:19:02,307::logUtils::37::dispatcher::(wrapper) Run and protect:
>> connectStorageServer(domType=1,
>> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection':
>> 'my-gluster-ip:/gvol02/GlusterDomain', 'iqn': '', 'portal': '', 'user': '',
>> 'password': '******', 'id': '6463ca53-6c57-45f6-bb5c-45505891cae9', 'port':
>> ''}], options=None)
>> Thread-12718::DEBUG::2013-01-22
>> 09:19:02,467::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo -n
>> /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6
>> my-gluster-ip:/gvol02/GlusterDomain
>> /rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain' (cwd None)
>> Thread-12718::ERROR::2013-01-22
>> 09:19:02,486::hsm::1932::Storage.HSM::(connectStorageServer) Could not
>> connect to storageServer
>> Traceback (most recent call last):
>>   File "/usr/share/vdsm/storage/hsm.py", line 1929, in
>> connectStorageServer
>>   File "/usr/share/vdsm/storage/storageServer.py", line 256, in connect
>>   File "/usr/share/vdsm/storage/storageServer.py", line 179, in connect
>>   File "/usr/share/vdsm/storage/mount.py", line 190, in mount
>>   File "/usr/share/vdsm/storage/mount.py", line 206, in _runcmd
>> MountError: (32, ";mount.nfs: rpc.statd is not running but is required
>> for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local,
>> or start statd.\nmount.nfs: an incorrect mount option was specified\n")
>>
>> engine.log:
>> 2013-01-22 17:19:20,073 INFO
>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
>> (ajp--0.0.0.0-8009-7) [25932203] START,
>> ValidateStorageServerConnectionVDSCommand(vdsId =
>> 626e37f4-5ee3-11e2-96fa-0030487c133e, storagePoolId =
>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList =
>> [{ id: null, connection: my-gluster-ip:/gvol02/GlusterDomain };]), log id:
>> 303f4753
>> 2013-01-22 17:19:20,095 INFO
>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.ValidateStorageServerConnectionVDSCommand]
>> (ajp--0.0.0.0-8009-7) [25932203] FINISH,
>> ValidateStorageServerConnectionVDSCommand, return:
>> {00000000-0000-0000-0000-000000000000=0}, log id: 303f4753
>> 2013-01-22 17:19:20,115 INFO
>>  [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
>> (ajp--0.0.0.0-8009-7) [25932203] Running command:
>> AddStorageServerConnectionCommand internal: false. Entities affected :  ID:
>> aaa00000-0000-0000-0000-123456789aaa Type: System
>> 2013-01-22 17:19:20,117 INFO
>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>> (ajp--0.0.0.0-8009-7) [25932203] START,
>> ConnectStorageServerVDSCommand(vdsId =
>> 626e37f4-5ee3-11e2-96fa-0030487c133e, storagePoolId =
>> 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList =
>> [{ id: 6463ca53-6c57-45f6-bb5c-45505891cae9, connection:
>> my-gluster-ip:/gvol02/GlusterDomain };]), log id: 198f3eb4
>> 2013-01-22 17:19:20,323 INFO
>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>> (ajp--0.0.0.0-8009-7) [25932203] FINISH, ConnectStorageServerVDSCommand,
>> return: {6463ca53-6c57-45f6-bb5c-45505891cae9=477}, log id: 198f3eb4
>> 2013-01-22 17:19:20,325 ERROR
>> [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (ajp--0.0.0.0-8009-7)
>> [25932203] The connection with details my-gluster-ip:/gvol02/GlusterDomain
>> failed because of error code 477 and error message is: 477
>> 2013-01-22 17:19:20,415 INFO
>>  [org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand]
>> (ajp--0.0.0.0-8009-6) [6641b9e1] Running command:
>> AddNFSStorageDomainCommand internal: false. Entities affected :  ID:
>> aaa00000-0000-0000-0000-123456789aaa Type: System
>> 2013-01-22 17:19:20,425 INFO
>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
>> (ajp--0.0.0.0-8009-6) [6641b9e1] START, CreateStorageDomainVDSCommand(vdsId
>> = 626e37f4-5ee3-11e2-96fa-0030487c133e,
>> storageDomain=org.ovirt.engine.core.common.businessentities.storage_domain_static at 8e25c6bc,
>> args=my-gluster-ip:/gvol02/GlusterDomain), log id: 675539c4
>> 2013-01-22 17:19:21,064 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>> (ajp--0.0.0.0-8009-6) [6641b9e1] Failed in CreateStorageDomainVDS method
>> 2013-01-22 17:19:21,065 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>> (ajp--0.0.0.0-8009-6) [6641b9e1] Error code StorageDomainFSNotMounted and
>> error message VDSGenericException: VDSErrorException: Failed to
>> CreateStorageDomainVDS, error = Storage domain remote path not mounted:
>> ('/rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain',)
>> 2013-01-22 17:19:21,066 INFO
>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>> (ajp--0.0.0.0-8009-6) [6641b9e1] Command
>> org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand
>> return value
>>  Class Name:
>> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
>> mStatus                       Class Name:
>> org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
>> mCode                         360
>> mMessage                      Storage domain remote path not mounted:
>> ('/rhev/data-center/mnt/my-gluster-ip:_gvol02_GlusterDomain',)
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>
>
> --
>
> | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
>
>
>

