<div dir="ltr"><div><div><div><div><div>Hi guys,<br><br></div>I think I found the source of problem. The oVirt&#39;s documentation says:<br><br>Any POSIX compliant filesystem used as a storage domain in oVirt <b>MUST</b>
 support sparse files and direct I/O.<br><br></div>MooseFS is based on FUSE drivers so it doesn&#39;t support direct_io.<br></div><br></div>My question now is - Can I disable require of direct_io in oVirt? If not in any option, maybe by little changes in source code?<br></div>Why would I do that and why I&#39;m not afraid about I/O performance? Because I use MooseFS via NFS now and it works perfect, so that&#39;s why I think that Direct I/O is not necessary!<br><br>
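For what it's worth, here is a rough way to probe whether the mounted filesystem accepts O_DIRECT at all. This is only a sketch (the test file name is arbitrary, and it is not the exact check oVirt/vdsm performs):

    # direct_io_probe.py - rough O_DIRECT probe for the MooseFS mount point
    import errno
    import os

    # Example path under the mount point used in the investigation below.
    path = "/rhev/data-center/mnt/mfsmount/__direct_io_probe__"

    try:
        # On Linux, open() usually fails with EINVAL if the underlying
        # filesystem cannot honour O_DIRECT.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    except OSError as e:
        if e.errno == errno.EINVAL:
            print("filesystem rejected O_DIRECT (EINVAL)")
        else:
            print("open failed: %s" % e)
    else:
        print("O_DIRECT open was accepted")
        os.close(fd)
        os.unlink(path)
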
</div><div class="gmail_extra"><br><div class="gmail_quote">2015-04-02 9:47 GMT+02:00 shimano <span dir="ltr">&lt;<a href="mailto:shimano@go2.pl" target="_blank">shimano@go2.pl</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Hi everyone...<br><br></div>I have a little strange situation... I&#39;m trying to add Posix Compliant FS Storage Domain based on MooseFS. As You can read below, oVirt is mounting it correctly but it cannot make a Storage Domain. Anybody could help?<br><br><br><br></div><div>// Quick investigation<br></div><div><br></div>Is /posix mounted?<br><br>    root@host008:/tmp mount |grep fuse<br>    root@host008:/tmp<br><br>Nope.<br>Add Storage Domain via Web Panel with parameters:<br><br>    Name: MooseFS<br>    Domain Function / Storage Type: Data / POSIX Compliant FS<br>    Use Host: HOST008<br>    Path: mfsmount<br>    VFS Type: fuse<br>    Mount Options: mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev<br><br>Debug logs from task:<br><br>    JsonRpc (StompReactor)::DEBUG::2015-04-02 08:52:58,231::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message &lt;StompFrame command=&#39;SEND&#39;&gt;<br>    JsonRpcServer::DEBUG::2015-04-02 08:52:58,232::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request<br>    Thread-549209::DEBUG::2015-04-02 08:52:58,232::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Calling &#39;StoragePool.connectStorageServer&#39; in bridge with {&#39;connectionParams&#39;: [{&#39;password&#39;: &#39;&#39;, &#39;id&#39;: &#39;00000000-0000-0000-0000-000000000000&#39;, &#39;connection&#39;: &#39;mfsmount&#39;, &#39;mnt_options&#39;: &#39;mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev&#39;, &#39;user&#39;: &#39;&#39;, &#39;tpgt&#39;: &#39;1&#39;, &#39;vfs_type&#39;: &#39;fuse&#39;, &#39;iqn&#39;: &#39;&#39;, &#39;port&#39;: &#39;&#39;}], &#39;storagepoolID&#39;: &#39;00000000-0000-0000-0000-000000000000&#39;, &#39;domainType&#39;: 6}<br>    Thread-549209::DEBUG::2015-04-02 08:52:58,234::task::595::Storage.TaskManager.Task::(_updateState) Task=`9bb09583-d8f7-4189-b9ab-81b925f8fc13`::moving from state init -&gt; state preparing<br>    Thread-549209::INFO::2015-04-02 08:52:58,234::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=6, spUUID=&#39;00000000-0000-0000-0000-000000000000&#39;, conList=[{&#39;iqn&#39;: &#39;&#39;, &#39;port&#39;: &#39;&#39;, &#39;connection&#39;: &#39;mfsmount&#39;, &#39;mnt_options&#39;: &#39;mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev&#39;, &#39;user&#39;: &#39;&#39;, &#39;tpgt&#39;: &#39;1&#39;, &#39;vfs_type&#39;: &#39;fuse&#39;, &#39;password&#39;: &#39;******&#39;, &#39;id&#39;: &#39;00000000-0000-0000-0000-000000000000&#39;}], options=None)<br>    Thread-549209::DEBUG::2015-04-02 08:52:58,237::fileUtils::142::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/mfsmount<br>    Thread-549209::WARNING::2015-04-02 08:52:58,237::fileUtils::149::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt/mfsmount already exists<br>    Thread-549209::DEBUG::2015-04-02 08:52:58,238::mount::227::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /bin/mount -t fuse -o mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev mfsmount /rhev/data-center/mnt/mfsmount (cwd None)<br>    JsonRpc (StompReactor)::DEBUG::2015-04-02 08:52:58,271::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling 
    JsonRpcServer::DEBUG::2015-04-02 08:52:58,273::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
    Thread-549210::DEBUG::2015-04-02 08:52:58,276::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
    JsonRpc (StompReactor)::DEBUG::2015-04-02 08:52:58,279::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
    JsonRpcServer::DEBUG::2015-04-02 08:52:58,280::__init__::506::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
    Thread-549211::DEBUG::2015-04-02 08:52:58,282::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
    Thread-549209::ERROR::2015-04-02 08:52:58,523::hsm::2424::Storage.HSM::(connectStorageServer) Could not connect to storageServer
    Traceback (most recent call last):
    File "/usr/share/vdsm/storage/hsm.py", line 2421, in connectStorageServer
        conObj.connect()
    File "/usr/share/vdsm/storage/storageServer.py", line 222, in connect
        self.getMountObj().getRecord().fs_file)
    File "/usr/share/vdsm/storage/mount.py", line 278, in getRecord
        (self.fs_spec, self.fs_file))
    OSError: [Errno 2] Mount of `mfsmount` at `/rhev/data-center/mnt/mfsmount` does not exist
    Thread-549209::DEBUG::2015-04-02 08:52:58,524::hsm::2443::Storage.HSM::(connectStorageServer) knownSDs: {2df46204-217e-416f-a072-ab8ef17cd8d2: storage.nfsSD.findDomain, 316c3e1c-4e61-4b0a-b2f6-63cc22d3ab25: storage.nfsSD.findDomain, 6c348f77-bb02-4135-b629-2c9cacb0b85c: storage.nfsSD.findDomain, 00697722-a1ce-4911-a84c-c4688e5076fe: storage.nfsSD.findDomain, 6ac038d7-969d-45b5-be5f-c58a66a78a90: storage.nfsSD.findDomain}
    Thread-549209::INFO::2015-04-02 08:52:58,524::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
    Thread-549209::DEBUG::2015-04-02 08:52:58,525::task::1191::Storage.TaskManager.Task::(prepare) Task=`9bb09583-d8f7-4189-b9ab-81b925f8fc13`::finished: {'statuslist': [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]}
    Thread-549209::DEBUG::2015-04-02 08:52:58,525::task::595::Storage.TaskManager.Task::(_updateState) Task=`9bb09583-d8f7-4189-b9ab-81b925f8fc13`::moving from state preparing -> state finished
    Thread-549209::DEBUG::2015-04-02 08:52:58,526::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
    Thread-549209::DEBUG::2015-04-02 08:52:58,526::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
    Thread-549209::DEBUG::2015-04-02 08:52:58,526::task::993::Storage.TaskManager.Task::(_decref) Task=`9bb09583-d8f7-4189-b9ab-81b925f8fc13`::ref 0 aborting False
    Thread-549209::DEBUG::2015-04-02 08:52:58,527::__init__::500::jsonrpc.JsonRpcServer::(_serveRequest) Return 'StoragePool.connectStorageServer' in bridge with [{'status': 100, 'id': '00000000-0000-0000-0000-000000000000'}]
    Thread-549209::DEBUG::2015-04-02 08:52:58,527::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
    Thread-29::DEBUG::2015-04-02 08:52:59,236::domainMonitor::209::Storage.DomainMonitorThread::(_monitorDomain) Refreshing domain 316c3e1c-4e61-4b0a-b2f6-63cc22d3ab25

Really? Incorrect mount options? Let's try mounting by hand... But first, check the existing mounts:

    root@host008:/tmp mount |grep fuse
    mfsmaster:9421 on /rhev/data-center/mnt/mfsmount type fuse.mfs (rw,allow_other)

Oh! It is already mounted! So oVirt mounted it correctly but cannot use the mountpoint? Hmm, permissions?

    root@host008:/tmp ls -al /rhev/data-center/mnt/mfsmount/
    total 4
    drwxr-xr-x 2 vdsm kvm    0 Apr  1 14:57 .
    drwxr-xr-x 8 vdsm kvm 4096 Apr  2 08:43 ..

Looks fine... OK, try to unmount and mount manually.

    root@host008:/tmp umount /rhev/data-center/mnt/mfsmount/
    root@host008:/tmp mount |grep fuse
    root@host008:/tmp mount -t fuse -o mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev mfsmount /rhev/data-center/mnt/mfsmount
    mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
    root@host008:/tmp mount |grep fuse
    mfsmaster:9421 on /rhev/data-center/mnt/mfsmount type fuse.mfs (rw,allow_other)
    root@host008:/tmp ls -al /rhev/data-center/mnt/mfsmount/
    total 4
    drwxr-xr-x 2 vdsm kvm    0 Apr  1 14:57 .
    drwxr-xr-x 8 vdsm kvm 4096 Apr  2 08:43 ..

OK, but I'm root. Let's try the same as the vdsm user.

    root@host008:/tmp cat /etc/passwd /etc/group |grep :36:
    vdsm:x:36:36:Node Virtualization Manager:/var/lib/vdsm:/bin/bash
    kvm:x:36:qemu,sanlock
    root@host008:/tmp umount /rhev/data-center/mnt/mfsmount/
    root@host008:/tmp su vdsm
    vdsm@host008:/tmp /usr/bin/sudo -n /bin/mount -t fuse -o mfsmaster=mfsmaster,mfsport=9421,mfssubfolder=/posix,_netdev mfsmount /rhev/data-center/mnt/mfsmount
    mfsmaster accepted connection with parameters: read-write,restricted_ip ; root mapped to root:root
    vdsm@host008:/tmp ls -al /rhev/data-center/mnt/mfsmount/
    total 4
    drwxr-xr-x 2 vdsm kvm    0 Apr  1 14:57 .
    drwxr-xr-x 8 vdsm kvm 4096 Apr  2 08:43 ..
    vdsm@host008:/tmp mkdir /rhev/data-center/mnt/mfsmount/ovirt-storage-test
    vdsm@host008:/tmp ls -al /rhev/data-center/mnt/mfsmount/
    total 4
    drwxr-xr-x 3 vdsm kvm    0 Apr  2  2015 .
    drwxr-xr-x 8 vdsm kvm 4096 Apr  2 08:43 ..
    drwxr-xr-x 2 vdsm kvm    0 Apr  2  2015 ovirt-storage-test

// End of quick investigation
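
By the way, the OSError in the traceback above is raised by getRecord() in vdsm's mount.py. As far as I can tell from the traceback alone (this is only a sketch of the lookup, not the actual vdsm code), it searches the kernel mount table for the device/mount point pair and raises ENOENT when no entry matches:

    # Sketch of the lookup that appears to raise the OSError above.
    import errno

    def get_record(fs_spec, fs_file):
        # Scan /proc/mounts for an entry whose device (fs_spec) and
        # mount point (fs_file) both match what was passed to mount.
        with open("/proc/mounts") as mounts:
            for line in mounts:
                spec, mount_point, vfstype = line.split()[:3]
                if spec == fs_spec and mount_point == fs_file:
                    return {"fs_spec": spec,
                            "fs_file": mount_point,
                            "fs_vfstype": vfstype}
        # No matching entry -> the same "[Errno 2] ... does not exist" error.
        raise OSError(errno.ENOENT,
                      "Mount of `%s` at `%s` does not exist" % (fs_spec, fs_file))

    # The mount output above lists the MooseFS mount as
    # "mfsmaster:9421 on /rhev/data-center/mnt/mfsmount", so a lookup keyed
    # on the spec "mfsmount" would not find it:
    # get_record("mfsmount", "/rhev/data-center/mnt/mfsmount")  # -> OSError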