[ovirt-users] Cannot add Posix Storage
shimano
shimano at go2.pl
Fri Apr 3 05:31:22 EDT 2015
Hi guys,
I think I found the source of the problem. oVirt's documentation says:

Any POSIX compliant filesystem used as a storage domain in oVirt *MUST*
support sparse files and direct I/O.

MooseFS is based on FUSE, so it doesn't support direct_io.

My question now is: can I disable the direct I/O requirement in oVirt? If
there is no option for it, could it be done with a small change to the
source code?

Why would I do that, and why am I not worried about I/O performance?
Because I currently use MooseFS via NFS and it works perfectly, which is
why I think direct I/O is not actually necessary!
2015-04-02 9:47 GMT+02:00 shimano <shimano at go2.pl>:
> Hi everyone...
>
> I have a slightly strange situation... I'm trying to add a POSIX Compliant
> FS Storage Domain based on MooseFS. As you can read below, oVirt mounts it
> correctly but cannot create a Storage Domain on it. Could anybody help?
>
>
>
> // Quick investigation
>
> Is /posix mounted?
>
> root@host008:/tmp mount |grep fuse
> root@host008:/tmp
>
> Nope. So let's add the Storage Domain via the web panel with these
> parameters:
>
> Name: MooseFS
> Domain Function / Storage Type: Data / POSIX Compliant FS
> Use Host: HOST008
> Path: mfsmount
> VFS Type: fuse
> Mount Options:
> mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev
>
> Debug logs from task:
>
> JsonRpc (StompReactor)::DEBUG::2015-04-02
> 08:52:58,231::stompReactor::98::Broker.StompAdapter::(handle_frame)
> Handling message <StompFrame command='SEND'>
> JsonRpcServer::DEBUG::2015-04-02
> 08:52:58,232::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
> Waiting for request
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,232::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling
> 'StoragePool.connectStorageServer' in bridge with {'connectionParams':
> [{'password': '', 'id': '00000000-0000-0000-0000-000000000000',
> 'connection': 'mfsmount', 'mnt_options':
> 'mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev', 'user': '',
> 'tpgt': '1', 'vfs_type': 'fuse', 'iqn': '', 'port': ''}], 'storagepoolID':
> '00000000-0000-0000-0000-000000000000', 'domainType': 6}
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,234::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`9bb09583-d8f7-4189-b9ab-81b925f8fc13`::moving from state init ->
> state preparing
> Thread-549209::INFO::2015-04-02
> 08:52:58,234::logUtils::44::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=6,
> spUUID='00000000-0000-0000-0000-000000000000', conList=[{'iqn': '', 'port':
> '', 'connection': 'mfsmount', 'mnt_options':
> 'mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev', 'user': '',
> 'tpgt': '1', 'vfs_type': 'fuse', 'password': '******', 'id':
> '00000000-0000-0000-0000-000000000000'}], options=None)
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,237::fileUtils::142::Storage.fileUtils::(createdir) Creating
> directory: /rhev/data-center/mnt/mfsmount
> Thread-549209::WARNING::2015-04-02
> 08:52:58,237::fileUtils::149::Storage.fileUtils::(createdir) Dir
> /rhev/data-center/mnt/mfsmount already exists
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,238::mount::227::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n
> /bin/mount -t fuse -o
> mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev mfsmount
> /rhev/data-center/mnt/mfsmount (cwd None)
> JsonRpc (StompReactor)::DEBUG::2015-04-02
> 08:52:58,271::stompReactor::98::Broker.StompAdapter::(handle_frame)
> Handling message <StompFrame command='SEND'>
> JsonRpcServer::DEBUG::2015-04-02
> 08:52:58,273::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
> Waiting for request
> Thread-549210::DEBUG::2015-04-02
> 08:52:58,276::stompReactor::163::yajsonrpc.StompServer::(send) Sending
> response
> JsonRpc (StompReactor)::DEBUG::2015-04-02
> 08:52:58,279::stompReactor::98::Broker.StompAdapter::(handle_frame)
> Handling message <StompFrame command='SEND'>
> JsonRpcServer::DEBUG::2015-04-02
> 08:52:58,280::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
> Waiting for request
> Thread-549211::DEBUG::2015-04-02
> 08:52:58,282::stompReactor::163::yajsonrpc.StompServer::(send) Sending
> response
> Thread-549209::ERROR::2015-04-02
> 08:52:58,523::hsm::2424::Storage.HSM::(connectStorageServer) Could not
> connect to storageServer
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/hsm.py", line 2421, in
> connectStorageServer
> conObj.connect()
> File "/usr/share/vdsm/storage/storageServer.py", line 222, in connect
> self.getMountObj().getRecord().fs_file)
> File "/usr/share/vdsm/storage/mount.py", line 278, in getRecord
> (self.fs_spec, self.fs_file))
> OSError: [Errno 2] Mount of `mfsmount` at
> `/rhev/data-center/mnt/mfsmount` does not exist
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,524::hsm::2443::Storage.HSM::(connectStorageServer) knownSDs:
> {2df46204-217e-416f-a072-ab8ef17cd8d2: storage.nfsSD.findDomain,
> 316c3e1c-4e61-4b0a-b2f6-63cc22d3ab25: storage.nfsSD.findDomain,
> 6c348f77-bb02-4135-b629-2c9cacb0b85c: storage.nfsSD.findDomain,
> 00697722-a1ce-4911-a84c-c4688e5076fe: storage.nfsSD.findDomain,
> 6ac038d7-969d-45b5-be5f-c58a66a78a90: storage.nfsSD.findDomain}
> Thread-549209::INFO::2015-04-02
> 08:52:58,524::logUtils::47::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 100,
> 'id': '00000000-0000-0000-0000-000000000000'}]}
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,525::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`9bb09583-d8f7-4189-b9ab-81b925f8fc13`::finished: {'statuslist':
> [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,525::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`9bb09583-d8f7-4189-b9ab-81b925f8fc13`::moving from state preparing ->
> state finished
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,526::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,526::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,526::task::993::Storage.TaskManager.Task::(_decref)
> Task=`9bb09583-d8f7-4189-b9ab-81b925f8fc13`::ref 0 aborting False
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,527::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return
> 'StoragePool.connectStorageServer' in bridge with [{'status': 100, 'id':
> '00000000-0000-0000-0000-000000000000'}]
> Thread-549209::DEBUG::2015-04-02
> 08:52:58,527::stompReactor::163::yajsonrpc.StompServer::(send) Sending
> response
> Thread-29::DEBUG::2015-04-02
> 08:52:59,236::domainMonitor::209::Storage.DomainMonitorThread::(_monitorDomain)
> Refreshing domain 316c3e1c-4e61-4b0a-b2f6-63cc22d3ab25
>
> Really? Incorrect mount options? Let's try mounting by hand... But first,
> check the existing mounts:
>
> root@host008:/tmp mount |grep fuse
> mfsmaster:9421 on /rhev/data-center/mnt/mfsmount type fuse.mfs
> (rw,allow_other)
>
> Oh! It is already mounted! So oVirt mounted it correctly but cannot use the
> mountpoint? Hmm, permissions?
>
> root@host008:/tmp ls -al /rhev/data-center/mnt/mfsmount/
> total 4
> drwxr-xr-x 2 vdsm kvm 0 Apr 1 14:57 .
> drwxr-xr-x 8 vdsm kvm 4096 Apr 2 08:43 ..
>
> Looks fine... OK, let's unmount and mount it manually.
>
> root@host008:/tmp umount /rhev/data-center/mnt/mfsmount/
> root@host008:/tmp mount |grep fuse
> root@host008:/tmp mount -t fuse -o
> mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev mfsmount
> /rhev/data-center/mnt/mfsmount
> mfsmaster accepted connection with parameters:
> read-write,restricted_ip ; root mapped to root:root
> root@host008:/tmp mount |grep fuse
> mfsmaster:9421 on /rhev/data-center/mnt/mfsmount type fuse.mfs
> (rw,allow_other)
> root@host008:/tmp ls -al /rhev/data-center/mnt/mfsmount/
> total 4
> drwxr-xr-x 2 vdsm kvm 0 Apr 1 14:57 .
> drwxr-xr-x 8 vdsm kvm 4096 Apr 2 08:43 ..
>
> OK, but I'm root. Let's try the same thing as the vdsm user.
>
> root@host008:/tmp cat /etc/passwd /etc/group |grep :36:
> vdsm:x:36:36:Node Virtualization Manager:/var/lib/vdsm:/bin/bash
> kvm:x:36:qemu,sanlock
> root@host008:/tmp umount /rhev/data-center/mnt/mfsmount/
> root@host008:/tmp su vdsm
> vdsm@host008:/tmp /usr/bin/sudo -n /bin/mount -t fuse -o
> mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev mfsmount
> /rhev/data-center/mnt/mfsmount
> mfsmaster accepted connection with parameters:
> read-write,restricted_ip ; root mapped to root:root
> vdsm@host008:/tmp ls -al /rhev/data-center/mnt/mfsmount/
> total 4
> drwxr-xr-x 2 vdsm kvm 0 Apr 1 14:57 .
> drwxr-xr-x 8 vdsm kvm 4096 Apr 2 08:43 ..
> vdsm@host008:/tmp mkdir
> /rhev/data-center/mnt/mfsmount/ovirt-storage-test
> vdsm@host008:/tmp ls -al /rhev/data-center/mnt/mfsmount/
> total 4
> drwxr-xr-x 3 vdsm kvm 0 Apr 2 2015 .
> drwxr-xr-x 8 vdsm kvm 4096 Apr 2 08:43 ..
> drwxr-xr-x 2 vdsm kvm 0 Apr 2 2015 ovirt-storage-test
>
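> One more observation, on the traceback: vdsm's mount.py seems to look up
> the mount record by its (fs_spec, fs_file) pair, and judging by the mount
> output above, the MooseFS mount registers its fs_spec as `mfsmaster:9421`
> rather than the `mfsmount` string given as the path. That would explain why
> the lookup fails even though the mount itself succeeds. Easy to check:
>
> # the first field (fs_spec) shows mfsmaster:9421, not mfsmount
> grep /rhev/data-center/mnt/mfsmount /proc/mounts
>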
> // End of quick investigation
>
>