Hi Itamar,
in my case Alon pointed to the correct issue: the loopback interface was being used.
Depending on Marco's interface config, this could be the case as well. This should be
visible in the ovirt-host-deploy log.
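If it helps, a rough check directly on the host could look like this (just a sketch, reusing the address and mount options from the log below; the host-deploy log path may differ on your system):

  ip -4 addr show                      # which interface carries 192.168.10.105 (should not be only lo)
  ip route get 192.168.10.105          # how the host reaches the storage address
  showmount -e 192.168.10.105          # confirm the NFS export is actually published
  grep -i -e nfs -e network /tmp/ovirt-host-deploy-*.log   # what host-deploy detected (log path is an assumption)

  # try the same mount vdsm attempts, against a scratch mount point:
  mkdir -p /tmp/nfstest
  sudo mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 \
      192.168.10.105:/mnt/datasstore/vm-storage/export /tmp/nfstest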
Georg
On 20.04.2013, at 22:45, Itamar Heim <iheim(a)redhat.com> wrote:
On 03/28/2013 02:24 PM, Georg Troxler wrote:
> In oVirt 3.2 with the AllInOne plugin there seems to be an error when I
> add a new Export/NFS Domain:
>
> Thread-34494::DEBUG::2013-03-28
> 12:13:04,242::task::568::TaskManager.Task::(_updateState)
> Task=`a52ff57d-9d92-4c96-aaf5-98e3f4df4456`::moving from state init ->
> state preparing
> Thread-34494::INFO::2013-03-28
> 12:13:04,242::logUtils::41::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=1,
> spUUID='00000000-0000-0000-0000-000000000000',
> conList=[{'connection': '192.168.10.105:/mnt/datasstore/vm-storage/export',
> 'iqn': '', 'portal': '', 'user': '', 'password': '******',
> 'id': '00000000-0000-0000-0000-000000000000', 'port': ''}],
> options=None)
> Thread-34494::DEBUG::2013-03-28
> 12:13:04,247::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo
> -n /usr/bin/mount -t nfs -o
> soft,nosharecache,timeo=600,retrans=6,nfsvers=3
> 192.168.10.105:/mnt/datasstore/vm-storage/export
> /rhev/data-center/mnt/192.168.10.105:_mnt_datasstore_vm-storage_export'
> (cwd None)
> Thread-34494::ERROR::2013-03-28
> 12:13:04,321::hsm::2215::Storage.HSM::(connectStorageServer) Could not
> connect to storageServer
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/hsm.py", line 2211, in
> connectStorageServer
> conObj.connect()
> File "/usr/share/vdsm/storage/storageServer.py", line 302, in connect
> return self._mountCon.connect()
> File "/usr/share/vdsm/storage/storageServer.py", line 208, in connect
> fileSD.validateDirAccess(self.getMountObj().getRecord().fs_file)
> File "/usr/share/vdsm/storage/mount.py", line 260, in getRecord
> (self.fs_spec, self.fs_file))
> OSError: [Errno 2] Mount of
> `192.168.10.105:/mnt/datasstore/vm-storage/export` at
> `/rhev/data-center/mnt/192.168.10.105:_mnt_datasstore_vm-storage_export`
> does not exist
> Thread-34494::INFO::2013-03-28
> 12:13:04,323::logUtils::44::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 100,
> 'id': '00000000-0000-0000-0000-000000000000'}]}
> Thread-34494::DEBUG::2013-03-28
> 12:13:04,323::task::1151::TaskManager.Task::(prepare)
> Task=`a52ff57d-9d92-4c96-aaf5-98e3f4df4456`::finished: {'statuslist':
> [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
> Thread-34494::DEBUG::2013-03-28
> 12:13:04,324::task::568::TaskManager.Task::(_updateState)
> Task=`a52ff57d-9d92-4c96-aaf5-98e3f4df4456`::moving from state preparing
> -> state finished
>
> I used the following parameters:
>
> Name: export-domain
> Data Center: local_datacenter
> Domain Function / Storage Type: Export / NFS
> Use Host: local_host
> Export Path: 192.168.10.105:/mnt/datasstore/vm-storage/export
>
> It looks as if the subsystem expects a mount point at
> '/rhev/data-center/mnt/192.168.10.105:_mnt_datasstore_vm-storage_export'. If
> I create the mount point manually and change the owner and permissions, the
> operation still fails.
>
> Unfortunately I could not find specific information regarding the
> creation of an Export store other than the documentation on how to
> create an NFS store. Maybe I am missing something?
>
>
Georg - can you please try and collaborate on this bug in the same area?
Bug 918742 - Ovirt 3.2 all in one problem install in Fedora 18