[Users] Attach nfs domain to gluster dc

Hi,

I've created a new DC in order to be able to create VMs on a glusterfs data domain. As ovirt does not allow sharing export and ISO domains between DCs (nice RFE), I detached those from the current DC, and when I try to attach them to the new DC I get an error:

2013-12-10 15:19:28,558 ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (pool-6-thread-46) The connection with details 192.168.128.81:/home/exports/export failed because of error code 477 and error message is: problem while trying to mount target
2013-12-10 15:19:28,577 ERROR [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] (pool-6-thread-46) Transaction rolled-back for command: org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand.
2013-12-10 15:19:28,578 ERROR [org.ovirt.engine.core.bll.storage.NFSStorageHelper] (pool-6-thread-41) The connection with details 192.168.128.81:/home/exports/export failed because of error code 477 and error message is: problem while trying to mount target
2013-12-10 15:19:28,587 ERROR [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand] (pool-6-thread-41) Transaction rolled-back for command: org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand.

But if I mount the NFS share manually from the hosts, I have no problems. I've been having issues with NFS since I updated to 3.3.1: I have the NFS shares on one of the hosts of the iSCSI domain, and the other host complains from time to time about not being able to access the domains. The host where the shares live never complains. Any hints?

Regards,

On Tue, Dec 10, 2013 at 05:10:55PM -0200, Juan Pablo Lorier wrote:
...
Could you provide the exact mount command line, and its error response, from supervdsm/vdsm.log?

Hi Dan,

Sorry for the late reply. All I can see is this:

Thread-283004::DEBUG::2013-12-12 12:33:59,159::BindingXMLRPC::177::vds::(wrapper) client [192.168.128.79]
Thread-283004::DEBUG::2013-12-12 12:33:59,159::task::579::TaskManager.Task::(_updateState) Task=`4cc57677-89b8-4603-8fe3-e51a203439a6`::moving from state init -> state preparing
Thread-283004::INFO::2013-12-12 12:33:59,160::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='00000000-0000-0000-0000-000000000000', conList=[{'connection': '192.168.128.81:/home/exports/export', 'iqn': '', 'portal': '', 'user': '', 'password': '******', 'id': 'c033e817-ba60-4d90-b877-157f9a3e4b13', 'port': ''}], options=None)
Thread-283004::DEBUG::2013-12-12 12:33:59,163::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 192.168.128.81:/home/exports/export /rhev/data-center/mnt/192.168.128.81:_home_exports_export' (cwd None)
Thread-283004::ERROR::2013-12-12 12:36:04,230::storageServer::209::StorageServer.MountConnection::(connect) Mount failed: (32, ';mount.nfs: Connection timed out\n')
Thread-283004::ERROR::2013-12-12 12:36:04,231::hsm::2364::Storage.HSM::(connectStorageServer) Could not connect to storageServer
Thread-283004::DEBUG::2013-12-12 12:36:04,232::hsm::2383::Storage.HSM::(connectStorageServer) knownSDs: {617f4fb2-f878-41fe-ae28-67b5d8c2ed59: storage.glusterSD.findDomain}
Thread-283004::INFO::2013-12-12 12:36:04,232::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 477, 'id': 'c033e817-ba60-4d90-b877-157f9a3e4b13'}]}
Thread-283004::DEBUG::2013-12-12 12:36:04,232::task::1168::TaskManager.Task::(prepare) Task=`4cc57677-89b8-4603-8fe3-e51a203439a6`::finished: {'statuslist': [{'status': 477, 'id': 'c033e817-ba60-4d90-b877-157f9a3e4b13'}]}
Thread-283004::DEBUG::2013-12-12 12:36:04,232::task::579::TaskManager.Task::(_updateState) Task=`4cc57677-89b8-4603-8fe3-e51a203439a6`::moving from state preparing -> state finished
Thread-283004::DEBUG::2013-12-12 12:36:04,233::resourceManager::939::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-283004::DEBUG::2013-12-12 12:36:04,233::resourceManager::976::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-283004::DEBUG::2013-12-12 12:36:04,233::task::974::TaskManager.Task::(_decref) Task=`4cc57677-89b8-4603-8fe3-e51a203439a6`::ref 0 aborting False

I've tried to use version 3 on the server to avoid a version problem (CentOS uses NFSv4 by default; BTW, why do you restrict it to version 3?), but it didn't work, it seems to keep using v4. Mounting manually:

[root@ovirt3 vdsm]# mount -t nfs ovirt2.montecarlotv.com.uy:/home/exports/export /mnt/
[root@ovirt3 vdsm]# mount
/dev/mapper/vg_ovirt3-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg_ovirt3-LogVol03 on /glusterfs type btrfs (rw)
/dev/mapper/vg_ovirt3-lv_home on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
ovirt4.montecarlotv.com.uy:/gluster on /rhev/data-center/mnt/glusterSD/ovirt4.montecarlotv.com.uy:_gluster type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
ovirt2.montecarlotv.com.uy:/home/exports/export on /mnt type nfs (rw,vers=4,addr=192.168.128.81,clientaddr=192.168.128.82)

but with version 3:

mount -t nfs -o nfsvers=3 ovirt2.montecarlotv.com.uy:/home/exports/export /mnt/

just never connects.

Regards,

On 11/12/13 10:08, Dan Kenigsberg wrote:
...
Could you provide the exact mount command line, and its error response, from supervdsm/vdsm.log?
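(Side note for archive readers: "mount.nfs: Connection timed out" on an NFSv3 mount that works over v4 usually points at the portmapper or mountd being filtered, since v4 only needs TCP 2049. A minimal check, assuming the standard rpcbind client tools are installed on the host; the server address is the one from the log above:)

# List the RPC services the NFS server advertises; if this hangs,
# port 111 (portmapper) is blocked somewhere on the path.
rpcinfo -p 192.168.128.81

# Ask mountd for the export list. NFSv3 needs mountd as well as nfsd,
# and mountd sits on a dynamic port unless pinned in /etc/sysconfig/nfs.
showmount -e 192.168.128.81

# Reproduce vdsm's exact mount, but with a short timeout so a filtered
# port fails fast instead of hanging for minutes:
mount -t nfs -o soft,nosharecache,timeo=10,retrans=1,nfsvers=3 \
  192.168.128.81:/home/exports/export /mnt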

Hi Dan,

After reading other posts on this matter on the list these days, I've manually corrected the firewall configuration to allow v3 to work (I don't know whether every rule is needed):

# nfs
-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38467 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -p udp -m udp --dport 2049 -j ACCEPT
-A INPUT -p udp -m udp --dport 41729 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 48491 -j ACCEPT
-A INPUT -p udp -m udp --dport 43828 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 48491 -j ACCEPT
-A INPUT -p udp -m udp --dport 47492 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 58837 -j ACCEPT

Now I'm changing the settings by overriding the defaults on the domain and auto-negotiating the protocol. This firewall correction may be a good thing to add to the deploy.

Regards,

On 11/12/13 10:08, Dan Kenigsberg wrote:
...

On Thu, Dec 12, 2013 at 5:01 PM, Juan Pablo Lorier <jplorier@gmail.com> wrote: ...
# nfs
-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
...
The above rules might break after a reboot. Best practice is to set the normally dynamic nfs ports to fixed values in /etc/sysconfig/nfs and then open those ports in the firewall.
Now I'm changing the settings by overriding the defaults in the domain and auto negotiating the protocol. This firewall correction may be a good thing to add in the deploy.
Are you doing this on a node or on your engine server? The engine-setup configured both /etc/sysconfig/nfs and iptables for me on my engine server (for the iso domain).
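(A minimal sketch of the fixed-port setup Sander describes, for EL6-style hosts. The variable names below are the ones documented in /etc/sysconfig/nfs; the port numbers are the conventional suggestions from that file's comments, not oVirt-mandated values:)

# /etc/sysconfig/nfs -- pin the normally dynamic NFSv3 helper daemons
MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
RQUOTAD_PORT=875

# then open only those fixed ports, plus portmapper (111) and nfsd (2049):
iptables -A INPUT -p tcp -m multiport --dports 111,2049,892,662,32803,875 -j ACCEPT
iptables -A INPUT -p udp -m multiport --dports 111,2049,892,662,32769,875 -j ACCEPT
service iptables save   # persist the rules across reboots
service nfs restart     # make the daemons pick up the pinned ports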

Hi Sander,

I'll make the changes to the firewall as you mention. I'm running the domain on a host, as my engine is in a VM with little disk space. I don't know if there's a way to have ovirt set up the domain automatically when it's deployed on a host.

Regards,

On 13/12/13 06:00, Sander Grendelman wrote:
...

Juan,

I apologize for discovering this thread with delay. Let me try to explain.

The attach flow is indeed missing the possibility to configure NFS options such as nfsvers, retrans and timeo, which are, by the way, available when creating a new storage domain (see the Advanced Parameters section). I have to check this flow again and see which storage connection we use in order to connect this storage to the new data center. If this is a bug, we'll fix it.

Now, about the default NFS version: we are using version 3 because customers usually have problems configuring an NFS server properly for version 4, so we offer version 4 as an advanced option.

Now, about your problem:

Option 1: Create another export storage domain from scratch, using the advanced parameters I mentioned above to configure the NFS version, and then just copy the content between these domains manually.

Option 2: Create a new storage connection with the REST API as explained here: http://www.ovirt.org/Features/Manage_Storage_Connections and then try to re-attach this storage domain again. Or we can just update the storage_server_connections table manually in your database. I never tried option 2 myself, but it should work.

Please feel free to contact me tomorrow on my direct mail, sgotliv@redhat.com, and I'll walk you through one of these options; we can also check your NFS configuration.

Sergey
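(For option 2, a rough sketch of what the REST calls could look like, based on the Manage_Storage_Connections feature page linked above. The engine URL and credentials are placeholders, the connection id is the one from the vdsm log earlier in the thread, and element names such as nfs_version are an assumption taken from that page, so verify them against your engine's API metadata:)

# list existing connections to find the one to edit (placeholder credentials):
curl -s -k -u admin@internal:password \
  https://engine.example.com/api/storageconnections

# update its NFS options while the domain is detached / in maintenance:
curl -s -k -u admin@internal:password -X PUT \
  -H 'Content-Type: application/xml' \
  -d '<storage_connection><nfs_version>v3</nfs_version><nfs_timeo>600</nfs_timeo><nfs_retrans>6</nfs_retrans></storage_connection>' \
  https://engine.example.com/api/storageconnections/c033e817-ba60-4d90-b877-157f9a3e4b13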
From: "Juan Pablo Lorier" <jplorier@gmail.com> To: "Sander Grendelman" <sander@grendelman.com> Cc: "users" <users@ovirt.org> Sent: Friday, December 13, 2013 1:21:08 PM Subject: Re: [Users] Attach nfs domain to gluster dc
Hi Sander,
I'll do the changes to the firewall as you mention. I'm running the domain in a host as my engine is in a vm with low disk resources. I don't know if there's a way to set up the domain automatically with ovirt when it's deployed in a host. Regards,
On 13/12/13 06:00, Sander Grendelman wrote:
On Thu, Dec 12, 2013 at 5:01 PM, Juan Pablo Lorier <jplorier@gmail.com> wrote: ...
# nfs -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT -A INPUT -p tcp -m tcp --dport 38467 -j ACCEPT -A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT -A INPUT -p udp -m udp --dport 2049 -j ACCEPT -A INPUT -p udp -m udp --dport 41729 -j ACCEPT -A INPUT -p tcp -m tcp --dport 48491 -j ACCEPT -A INPUT -p udp -m udp --dport 43828 -j ACCEPT -A INPUT -p tcp -m tcp --dport 48491 -j ACCEPT -A INPUT -p udp -m udp --dport 47492 -j ACCEPT -A INPUT -p tcp -m tcp --dport 58837 -j ACCEPT The above rules might break after a reboot.
Best practice is to set the normally dynamic nfs ports to fixed values in /etc/sysconfig/nfs and then open those ports in the firewall.
Now I'm changing the settings by overriding the defaults in the domain and auto negotiating the protocol. This firewall correction may be a good thing to add in the deploy. Are you doing this on a node or on your engine server?
The engine-setup configured both /etc/sysconfig/nfs and iptables for me on my engine server (for the iso domain).
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi Sergey,

No need to apologize. I'm used to hearing that ISO domains need to import the images, that just copying them won't work. I thought export domains were the same, but if you say I can just copy, I'll try that next Friday when I'm back at the office. I think this is the easier approach, as using REST would mean I have to learn to use it, and it's not the right time of the year :-). I think that being able to modify the options wouldn't hurt anybody and looks harmless, at least when the domain is in maintenance; you may say if I'm wrong. Thank you very much for your help and your time. I'll let you know what happens next Friday.

Regards,

On 16/12/13 20:59, Sergey Gotliv wrote:
...

Juan,

Ping me when you're doing that; Friday is not a working day, but I usually check my mail anyway.

I tested option #2 (see my previous mail) this morning: I updated the connection details from REST and from the database directly, and both options worked for me. Usually I wouldn't recommend doing that from the database, but in this particular case it is the quickest possible solution.

Copying images between different export domains may be a little bit tricky; I'll try to create a summary for you.

When attaching a storage domain to a new data center it makes sense to allow editing the connection details, at least for NFS, because the new data center contains new hosts that have a different NFS client...

Regards,

Sergey
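(For the direct-database variant, roughly what the update might look like. Stop the engine, or at least detach the domain, and back up the database first; the column names and accepted values here are assumptions based on the 3.3-era schema, so check them with \d storage_server_connections before running anything:)

# on the engine machine, as a user allowed to reach the engine DB:
psql -U engine engine -c "
UPDATE storage_server_connections
   SET nfs_version = 'V3',         -- assumed enum value, verify in your schema
       nfs_timeo   = 600,
       nfs_retrans = 6
 WHERE connection  = '192.168.128.81:/home/exports/export';"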
From: "Juan Pablo Lorier" <jplorier@gmail.com> To: "Sergey Gotliv" <sgotliv@redhat.com> Cc: "Sander Grendelman" <sander@grendelman.com>, "users" <users@ovirt.org> Sent: Tuesday, December 17, 2013 1:10:01 AM Subject: Re: [Users] Attach nfs domain to gluster dc
Hi Sergey,
No need to apologize. I'm used to hear that iso domains need to import the images, that just copying them won't work. I thought export domains were the same, but if you say I can just copy, I'll try that next Friday when I come back to the office. I think this is the easier approach as using the REST would mean that I have to learn to use it and it's not the right time of the year :-). I think that been capable of modifying the options won't heart anybody and looks like harmless, at least when domain is in maintenance, you may say if I'm wrong. Thank you very much for your help and for your disposition. I'll let you know what happens next Friday. Regards,
El 16/12/13 20:59, Sergey Gotliv escribió:
Juan,
I apologize for discovering this thread with delay. Let me try to explain.
Attach option indeed missing a possibility to configure NFS options such as nfsver, retrans and timeo which by the way available when creating a new storage domain (see the Advanced Parameters section). I have to check this flow again and see which storage connection we use in order to connect this storage to the new data center. If this is a bug we'll fix it.
Now about the default NFS version, we are using version 3 because usually customers have a problem to configure NFS server version 4 properly so we allow using version 4 as an advanced option.
Now about your problem:
Option 1: Can you just create another Export Storage Domain from scratch using advanced parameters I mentioned above in order to configure the NFS version and then just copy the content between these domains manually.
Option 2: You can try to create new storage connection with the REST API as explained here: http://www.ovirt.org/Features/Manage_Storage_Connections and then try to re-attach this storage domain again. Or we can just update storage_server_connections table manually in your database.
I never tried option 2 by myself but it should work.
Please, feel free to contact me on my direct mail tomorrow sgotliv@redhat.com and I'll walk you through one of these options and we can check you NFS configuration.
Sergey
----- Original Message -----
From: "Juan Pablo Lorier" <jplorier@gmail.com> To: "Sander Grendelman" <sander@grendelman.com> Cc: "users" <users@ovirt.org> Sent: Friday, December 13, 2013 1:21:08 PM Subject: Re: [Users] Attach nfs domain to gluster dc
Hi Sander,
I'll do the changes to the firewall as you mention. I'm running the domain in a host as my engine is in a vm with low disk resources. I don't know if there's a way to set up the domain automatically with ovirt when it's deployed in a host. Regards,
On 13/12/13 06:00, Sander Grendelman wrote:
On Thu, Dec 12, 2013 at 5:01 PM, Juan Pablo Lorier <jplorier@gmail.com> wrote: ...
# nfs -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT -A INPUT -p tcp -m tcp --dport 38467 -j ACCEPT -A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT -A INPUT -p udp -m udp --dport 2049 -j ACCEPT -A INPUT -p udp -m udp --dport 41729 -j ACCEPT -A INPUT -p tcp -m tcp --dport 48491 -j ACCEPT -A INPUT -p udp -m udp --dport 43828 -j ACCEPT -A INPUT -p tcp -m tcp --dport 48491 -j ACCEPT -A INPUT -p udp -m udp --dport 47492 -j ACCEPT -A INPUT -p tcp -m tcp --dport 58837 -j ACCEPT The above rules might break after a reboot.
Best practice is to set the normally dynamic nfs ports to fixed values in /etc/sysconfig/nfs and then open those ports in the firewall.
Now I'm changing the settings by overriding the defaults in the domain and auto negotiating the protocol. This firewall correction may be a good thing to add in the deploy. Are you doing this on a node or on your engine server?
The engine-setup configured both /etc/sysconfig/nfs and iptables for me on my engine server (for the iso domain).
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On 12/17/2013 01:39 AM, Sergey Gotliv wrote:
...
Juan - you shouldn't have to learn to use the REST API directly. All of it should be available via ovirt-shell, a command-line tool (as well as the Python and Java SDKs for programming).
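(A sketch of what that could look like in ovirt-shell. The collection and resource names mirror the REST API, but the exact syntax varies by version, so treat this as a starting point rather than verified commands; the UUID is the connection id from the log:)

# connect the shell to the engine (placeholder URL/credentials):
ovirt-shell -c -l https://engine.example.com/api -u admin@internal

# inside the shell: list connections, then update the NFS options
list storageconnections
update storageconnection c033e817-ba60-4d90-b877-157f9a3e4b13 --nfs_version v3 --nfs_timeo 600 --nfs_retrans 6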

Thank you both. I'll look at that on Friday and let you know the results.

Regards,

On 17/12/13 05:32, Itamar Heim wrote:
...

Hi Sergey,

Sorry for the delay, but the holidays got in the way. I've tried copying the images between the domains, but I got the new domain corrupted (I copied more than I should have, I guess). I'm now trying to learn to use ovirt-shell, as Itamar suggested, to see if I can get this going. If it makes sense to you, I can open an RFE for the option to modify an existing domain.

Regards,

On 17/12/13 04:39, Sergey Gotliv wrote:
...
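(On the manual copy that went wrong: an export domain is a plain directory tree on the NFS share, and the usual pitfall is copying the old domain's dom_md/ metadata over the new domain's, which corrupts its identity. A rough sketch of copying only the payload, assuming the standard export-domain layout, with both domains mounted; all paths and UUIDs are placeholders:)

# set these to the mounted storage-domain UUID directories (placeholders):
OLD=/mnt/old-export/11111111-1111-1111-1111-111111111111
NEW=/mnt/new-export/22222222-2222-2222-2222-222222222222

# copy disk images and the OVF descriptors, but never dom_md/:
rsync -a "$OLD/images/" "$NEW/images/"
rsync -a "$OLD/master/vms/" "$NEW/master/vms/"

# vdsm expects vdsm:kvm (36:36) ownership everywhere:
chown -R 36:36 "$NEW"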

Hi,

I'm finally starting to get pissed off with ovirt's 100 steps and locks. As I can't modify the export domain while it's in maintenance, and while it's active I can't change the version, how am I supposed to get rid of the v3 limitation on this existing domain? Can someone tell me why ovirt defaults to v3 and not v4? Is it something to do with existing bugs or limitations? Can you please put as much energy into making things easier as you put into new features? Not a criticism but a request.

Regards,

On 11/12/13 10:08, Dan Kenigsberg wrote:
...

Hi,

Just wanted to add that this is not completely true: an ISO domain can be attached to different DCs simultaneously, afaik. But you are right about export domains.

On 10.12.2013 20:10, Juan Pablo Lorier wrote:
As ovirt does not allow to share export and iso domains between DCs
--
Regards
Sven Kieske, Systemadministrator, Mittwald CM Service GmbH & Co. KG

On 12/11/2013 02:28 PM, Sven Kieske wrote:
Hi,
just wanted to add that this is not completely true: an ISO domain can be attached simultaneously to different DCs afaik. But you are right for export domains.
Also, while the Glance domain doesn't cover the full functionality of an export domain (the ability to export just the COW layer of VMs derived from templates, and to export/import snapshot chains), it does allow connecting to multiple DCs/engines for export/import of simple VMs.
On 10.12.2013 20:10, Juan Pablo Lorier wrote:
As ovirt does not allow to share export and iso domains between DCs
participants (6): Dan Kenigsberg, Itamar Heim, Juan Pablo Lorier, Sander Grendelman, Sergey Gotliv, Sven Kieske