[Users] ISO_DOMAIN won't attach

I have a fresh, default oVirt 3.3 setup, with F19 on the engine and the node (after applying the shmmax workaround discussed in a separate thread). I followed the Quick Start Guide. I've added the node, and a storage domain on the node, so the Datacenter is now initialized. However, my ISO_DOMAIN remains unattached. If I select it, select its Data Center tab, and click on Attach, it times out.

The following messages are seen in engine.log when the operation is attempted: http://pastebin.com/WguQJFRu

I can loopback mount the ISO_DOMAIN directory manually, so I'm not sure why anything is timing out.

-Bob

P.S. I also note that I have to do a "Shift-Refresh" on the Storage page before ISO_DOMAIN shows up. This is consistently reproducible. Firefox 25.

On 2013-11-04 18:06, Bob Doolittle wrote:
However, my ISO_DOMAIN remains unattached. If I select it, select its Data Center tab, and click on Attach, it times out.
Try putting the oVirt node in maintenance mode, wait, and activate it again. Every once in a while this helped me solve strange problems. Regards - Frank

I appreciate the tip, although I'm concerned about masking issues we'd all prefer to see uncovered and fixed. If there's any diagnostic info anybody would like me to acquire first, please speak up ASAP.

-Bob

On 11/04/2013 12:12 PM, Frank Wall wrote:
On 2013-11-04 18:06, Bob Doolittle wrote:
However, my ISO_DOMAIN remains unattached. If I select it, select its Data Center tab, and click on Attach, it times out.
Try putting the oVirt node in maintenance mode, wait, and activate it again. Every once in a while this helped me solve strange problems.
Regards - Frank
_______________________________________________
Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On 2013-11-04 18:54, Bob Doolittle wrote:
If there's any diagnostic info anybody would like me to acquire first, please speak up asap.
It's always a good idea to keep a copy of /var/log/vdsm/vdsm.log from the oVirt node and /var/log/ovirt-engine/engine.log from the oVirt engine.

Regards - Frank

By wrapping the mount command on the node and recording the mount args, I was able to reproduce the issue manually. Although I can remote mount the iso dir using default options, when I specify -o nfsvers=3 the mount times out, which is the problem. I can do a loopback mount on the engine using nfsvers=3, but I can't do a remote mount from the node. I have SELinux set to 'permissive' on both engine and node.

I know I can work around this issue by changing the advanced parameters to specify V4, but I would like to understand the real issue first. I've seen the reverse, where v3 works but v4 times out (e.g. if your export isn't part of the root filesystem), but never this case, where v4 works and v3 does not.

Any clues?

-Bob

On 11/04/2013 12:06 PM, Bob Doolittle wrote:
However, my ISO_DOMAIN remains unattached. If I select it, select its Data Center tab, and click on Attach, it times out.
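[Editor's note: for readers who want to reproduce the wrapping step described above, a minimal sketch of such a logging shim follows. The install path, real-binary name (`mount.real`), and log location are illustrative assumptions, not details from the thread.]

```shell
# Write a shim that logs the exact arguments the caller (e.g. vdsm)
# passes, then delegates to the real mount binary.
cat > /tmp/mount-wrapper <<'EOF'
#!/bin/sh
# Append a timestamped record of every invocation, then exec the
# real binary (assumed to have been moved to /usr/bin/mount.real).
echo "$(date -u +%FT%TZ) mount $*" >> /tmp/mount-args.log
exec /usr/bin/mount.real "$@"
EOF
chmod +x /tmp/mount-wrapper
```

With the real binary moved aside and the shim in its place, retrying the Attach operation records the options vdsm uses (here, `-o nfsvers=3`), which can then be replayed by hand, e.g. `mount -t nfs -o nfsvers=3 <engine-host>:<iso-export-path> /mnt/test` (host and export path are placeholders).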

Aha - finally nailed it. Although iptables was disabled on the engine, firewalld was not. Once I disabled firewalld, NFSv3 mounts worked fine, and ISO_DOMAIN was able to attach. Is the admin expected to take care of this, or is engine-setup supposed to disable firewalld?

-Bob

On 11/04/2013 02:52 PM, Bob Doolittle wrote:
Although I can remote mount the iso dir using default options, when I specify -o nfsvers=3 the mount times out, which is the problem.
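[Editor's note: for anyone hitting the same symptom, a quick way to check which firewall is actually in effect is sketched below, assuming systemd on F19 and a root shell.]

```shell
# Both services can be installed even when only one is enabled, so
# check each explicitly; "iptables is disabled" does not imply that
# no firewall is filtering traffic.
systemctl is-active firewalld
systemctl is-active iptables

# The workaround from this thread: stop firewalld and keep it from
# starting at boot. Note this leaves the host unfiltered, so it is a
# diagnostic step rather than a production recommendation.
systemctl stop firewalld
systemctl disable firewalld
```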

On 04/11/2013 20:58, Bob Doolittle wrote:
Aha - finally nailed it.
Although iptables was disabled on engine, firewalld was not. Once I disabled firewalld, nfsv3 mounts worked fine, and ISO_DOMAIN was able to attach.
Is the admin expected to take care of this, or is engine-setup supposed to disable firewalld?
While running engine-setup, it asks whether you want to configure firewalld, if present. If you say no, it asks whether you want to configure iptables, if present. If you say no to that as well, you'll need to take care of configuring the firewall manually. If iptables has been chosen to be configured, firewalld is disabled if present.

Did you add the host from the web interface instead of using the all-in-one plugin? The oVirt engine is not able to handle firewalld; see https://bugzilla.redhat.com/show_bug.cgi?id=995362 (Bug 995362 - [RFE] Support firewalld).
-- Sandro Bonazzola Better technology. Faster innovation. Powered by community collaboration. See how it works at redhat.com
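[Editor's note: the v4-works-but-v3-fails pattern is consistent with a firewall. NFSv4 needs only TCP port 2049, while NFSv3 additionally needs the portmapper (port 111) and mountd. So instead of disabling firewalld outright, opening the relevant services should also work; the service names below are assumptions about the firewalld shipped with F19, and older builds may need explicit port rules instead.]

```shell
# Permit NFS traffic through firewalld rather than turning it off
# (run as root; service names may vary by firewalld version).
firewall-cmd --permanent --add-service=nfs       # nfsd on TCP/UDP 2049
firewall-cmd --permanent --add-service=rpc-bind  # portmapper, needed by NFSv3
firewall-cmd --permanent --add-service=mountd    # mountd, needed by NFSv3
firewall-cmd --reload                            # apply the permanent rules
```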

On 11/05/2013 02:35 AM, Sandro Bonazzola wrote:
On 04/11/2013 20:58, Bob Doolittle wrote:
Aha - finally nailed it.
Although iptables was disabled on engine, firewalld was not. Once I disabled firewalld, nfsv3 mounts worked fine, and ISO_DOMAIN was able to attach.
Is the admin expected to take care of this, or is engine-setup supposed to disable firewalld? While running engine-setup it asks you if you want to configure firewalld if present. If you say no it asks you if you want to configure iptables if present. If you say no you'll need to take care of configuring it manually. If iptables has been chosen to be configured, firewalld is disabled if present.
I simply took the defaults to all questions, which presumably should lead to a working configuration. Does it not?
Did you add the host from web interface instead of using all in one plugin? ovirt engine is not able to handle firewalld, see https://bugzilla.redhat.com/show_bug.cgi?id=995362 Bug 995362 - [RFE] Support firewalld.
Yes - web interface. -Bob

On 05/11/2013 15:15, Bob Doolittle wrote:
On 11/05/2013 02:35 AM, Sandro Bonazzola wrote:
Is the admin expected to take care of this, or is engine-setup supposed to disable firewalld? While running engine-setup it asks you if you want to configure firewalld if present. If you say no it asks you if you want to configure iptables if present. If you say no you'll need to take care of configuring it manually. If iptables has been chosen to be configured, firewalld is disabled if present.
I simply took the defaults to all questions, which presumably should lead to a working configuration. Does it not?
Well, yes, it should. The point here is that having the host that runs the engine also act as a hypervisor should be done during setup, using the all-in-one plugin. Adding the host to the engine later is not a supported path yet, due to Bug 995362.
Did you add the host from web interface instead of using all in one plugin? ovirt engine is not able to handle firewalld, see https://bugzilla.redhat.com/show_bug.cgi?id=995362 Bug 995362 - [RFE] Support firewalld.
Yes - web interface.
Ok, that explains how it happened. As a workaround, as you already discovered, you can just disable firewalld. I'm sorry for the inconvenience.

On 11/05/2013 09:44 AM, Sandro Bonazzola wrote:
On 05/11/2013 15:15, Bob Doolittle wrote:
I simply took the defaults to all questions, which presumably should lead to a working configuration. Does it not? Well, yes, it should. The point here is that having the host that runs the engine also act as a hypervisor should be done during setup, using the all-in-one plugin. Adding the host to the engine later is not a supported path yet, due to Bug 995362.
Did you add the host from the web interface instead of using the all-in-one plugin? The oVirt engine is not able to handle firewalld; see https://bugzilla.redhat.com/show_bug.cgi?id=995362 (Bug 995362 - [RFE] Support firewalld). Yes - web interface. Ok, that explains how it happened. As a workaround, as you already discovered, you can just disable firewalld. I'm sorry for the inconvenience.
Thanks for the explanation. My understanding from the docs is that the all-in-one plugin is unsupported/experimental, so my presumption was that the default options to engine-setup would be tuned to a working configuration in an environment without it. I do, however, understand the argument that defaults are for naive users and should lead to a simple, working POC configuration, which is the current situation.

Regards, Bob