[Users] Hosted-engine runtime issues (3.4 BETA)

Hi,

finally I've got the new hosted-engine feature running on RHEL6 using oVirt 3.4 BETA/nightly. I've come across a few issues and wanted to clarify if this is the desired behaviour:

1.) hosted-engine storage domain not visible in GUI
The NFS storage I've used to install the hosted-engine is not visible in oVirt's Admin Portal, although it is mounted on my oVirt node below /rhev/data-center/mnt/. I tried to import this storage domain, but apparently this fails because it's already mounted. Is there any way to make this storage domain visible?

2.) hosted-engine VM devices are not visible in GUI
The disk and network devices are not visible in the Admin Portal, so I'm unable to change anything. Is this intended? If so, how am I supposed to make changes?

3.) move hosted-engine VM to a different storage
Because of all of the above I seem to be unable to move my hosted-engine VM to a different NFS storage. How can this be done?

Thanks - Frank

On 01/25/2014 03:43 PM, Frank Wall wrote:
Hi,
finally I've got the new hosted-engine feature running on RHEL6 using oVirt 3.4 BETA/nightly. I've come across a few issues and wanted to clarify if this is the desired behaviour:
1.) hosted-engine storage domain not visible in GUI The NFS-Storage I've used to install the hosted-engine is not visible in oVirt's Admin Portal. Though it is mounted on my oVirt Node below /rhev/data-center/mnt/. I tried to import this storage domain, but apparently this fails because it's already mounted. Is there any way to make this storage domain visible?
not yet.
2.) hosted-engine VM devices are not visible in GUI The disk and network devices are not visible in the admin portal. Thus I'm unable to change anything. Is this intended? If so, how am I supposed to make changes?
the VM should be visible, the disk/nics - not yet.
3.) move hosted-engine VM to a different storage Because of all of the above I seem to be unable to move my hosted-engine VM to a different NFS-Storage. How can this be done?
not yet (from the webadmin). If you shut down the engine and fix the config manually, it should be doable.
Thanks - Frank
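For anyone who wants to attempt the manual route mentioned above, the rough shape of it might be something like the sketch below. This is untested and based on assumptions about the 3.4 file layout (in particular the "storage" key in /etc/ovirt-hosted-engine/hosted-engine.conf and the availability of the maintenance-mode options in this release); verify every path and key on your own installation first, and note that later in this thread Itamar warns against moving the VM onto a storage domain that is managed by the engine.

  # Put the HA services into global maintenance so they will not try to
  # restart the engine VM while it is being moved
  hosted-engine --set-maintenance --mode=global

  # Shut down the engine VM cleanly
  hosted-engine --vm-shutdown

  # Copy the engine VM's disk image from the old NFS export to the new one.
  # The image path under /rhev/data-center/mnt/ depends on your storage
  # domain and image UUIDs - check it before copying. The target must NOT
  # be a storage domain managed by the engine.
  cp -a "/rhev/data-center/mnt/OLD-NFS:_export/<sd_uuid>" /mnt/new-nfs/

  # Point the hosted-engine configuration at the new export; the "storage"
  # key in /etc/ovirt-hosted-engine/hosted-engine.conf is assumed here -
  # verify the key names in your own config before editing anything.
  vi /etc/ovirt-hosted-engine/hosted-engine.conf

  # Restart the HA services, leave maintenance and start the VM again
  service ovirt-ha-broker restart
  service ovirt-ha-agent restart
  hosted-engine --set-maintenance --mode=none
  hosted-engine --vm-start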

Hi Itamar,
On Sat, Jan 25, 2014 at 11:48:00PM +0200, Itamar Heim wrote:
2.) hosted-engine VM devices are not visible in GUI The disk and network devices are not visible in the admin portal. Thus I'm unable to change anything. Is this intended? If so, how am I supposed to make changes?
the VM should be visible, the disk/nics - not yet.
confirmed.
3.) move hosted-engine VM to a different storage Because of all of the above I seem to be unable to move my hosted-engine VM to a different NFS-Storage. How can this be done?
not yet (from the webadmin). If you shut down the engine and fix the config manually, it should be doable.
OK, I'll give this a try. Just to be sure: If I move the hosted-engine VM to a different storage domain from the command line, will this storage domain still be visible from the webadmin? Or will it be hidden, because it runs the hosted-engine? Thanks - Frank

On 01/26/2014 05:25 PM, Frank Wall wrote:
Hi Itamar,
On Sat, Jan 25, 2014 at 11:48:00PM +0200, Itamar Heim wrote:
2.) hosted-engine VM devices are not visible in GUI The disk and network devices are not visible in the admin portal. Thus I'm unable to change anything. Is this intended? If so, how am I supposed to make changes?
the VM should be visible, the disk/nics - not yet.
confirmed.
3.) move hosted-engine VM to a different storage Because of all of the above I seem to be unable to move my hosted-engine VM to a different NFS-Storage. How can this be done?
not yet (from the webadmin). If you shut down the engine and fix the config manually, it should be doable.
OK, I'll give this a try. Just to be sure: If I move the hosted-engine VM to a different storage domain from the command line, will this storage domain still be visible from the webadmin? Or will it be hidden, because it runs the hosted-engine?
oh - do *not* move it to a storage domain managed by the engine at this point.

Hi Itamar,
On Sun, Jan 26, 2014 at 06:21:03PM +0200, Itamar Heim wrote:
OK, I'll give this a try. Just to be sure: If I move the hosted-engine VM to a different storage domain from the command line, will this storage domain still be visible from the webadmin? Or will it be hidden, because it runs the hosted-engine?
oh - do *not* move it to a storage domain managed by the engine at this point.
Oh, that was close! I'd almost started the migration. Would it be safe to just move the hosted-engine VM to another host (without changing the storage domain)? I thought of copying all files in /etc/ovirt-hosted-engine* to a second oVirt node and starting the VM there... any objections? Anyway, I'm here to test new things once the issues I've mentioned in this thread have been addressed. Thanks - Frank

On 01/26/2014 07:10 PM, Frank Wall wrote:
Hi Itamar,
On Sun, Jan 26, 2014 at 06:21:03PM +0200, Itamar Heim wrote:
OK, I'll give this a try. Just to be sure: If I move the hosted-engine VM to a different storage domain from the command line, will this storage domain still be visible from the webadmin? Or will it be hidden, because it runs the hosted-engine?
oh - do *not* move it to a storage domain managed by the engine at this point.
oh that was close! I've almost started the migration. Would it be safe to just move the hosted-engine VM to another host (without changing the storage domain)?
I thought of copying all files in /etc/ovirt-hosted-engine* to a second ovirt node and starting the VM there... any objections?
Anyway, I'm here to test new things once the issues I've mentioned in this thread have been addressed.
Hosted engine supports running on *multiple* nodes, providing out-of-the-box HA between them. The setup should ask you if this is the second host and take care of copying all the right things. (IIRC, copying by yourself without knowing the internal details is risky if the first host could still be used; better to add the 2nd host per the procedure, then keep or remove the 1st host.)
Thanks - Frank
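The supported flow Itamar describes above would look roughly like this on the second host (a sketch, assuming a fresh RHEL6 node with the oVirt 3.4 repositories already configured; the exact interactive prompts may differ between builds):

  # On a clean second host that has NOT been added through the webadmin:
  yum install ovirt-hosted-engine-setup

  # Run the same deploy tool and point it at the same shared NFS storage;
  # it should detect the existing hosted-engine setup there, ask whether
  # this is an additional host, and pull the configuration it needs.
  hosted-engine --deploy

  # Afterwards the HA state of both hosts can be checked with:
  hosted-engine --vm-status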

Hi Itamar,
On Sun, Jan 26, 2014 at 07:13:35PM +0200, Itamar Heim wrote:
hosted engine supports running on *multiple* nodes, providing out of the box HA between them. the setup should ask you if this is the second host and take care of all the right things to copy.
Thanks for the pointer. The documentation at [1] describes the required steps to set up a second node. Unfortunately it failed for me because my second node had been added to the cluster before [2]. Even manually setting the second node to "maintenance" and back to state "up" didn't help; hosted-engine-setup was unable to detect the host's state [3]. Still, following the steps was enough to start the hosted-engine VM manually on the second node. This makes a manual failover possible, which is sufficient for now :)

Thanks - Frank

[1] http://www.ovirt.org/Hosted_Engine_Howto#Installing_additional_nodes

[2] 2014-01-26 18:24:59 DEBUG otopi.plugins.ovirt_hosted_engine_setup.engine.add_host add_host._closeup:437 Cannot add the host to the Default cluster
Traceback (most recent call last):
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/engine/add_host.py", line 431, in _closeup
    override_iptables=True,
  File "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/brokers.py", line 8679, in add
    headers={"Expect":expect, "Correlation-Id":correlation_id}
  File "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py", line 88, in add
    return self.request('POST', url, body, headers)
  File "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py", line 118, in request
    persistent_auth=self._persistent_auth)
  File "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py", line 140, in __doRequest
    persistent_auth=persistent_auth
  File "/usr/lib/python2.6/site-packages/ovirtsdk/web/connection.py", line 134, in doRequest
    raise RequestError, response
RequestError:
status: 409
reason: Conflict
detail: Cannot add Host. Host with the same UUID already exists.
2014-01-26 18:24:59 ERROR otopi.plugins.ovirt_hosted_engine_setup.engine.add_host add_host._closeup:444 Cannot automatically add the host to the Default cluster: Cannot add Host. Host with the same UUID already exists.

[3] 2014-01-26 18:24:59 INFO otopi.plugins.ovirt_hosted_engine_setup.engine.add_host add_host._wait_host_ready:200 Waiting for the host to become operational in the engine. This may take several minutes...
2014-01-26 18:24:59 DEBUG otopi.plugins.ovirt_hosted_engine_setup.engine.add_host add_host._wait_host_ready:213 Error fetching host state: 'NoneType' object has no attribute 'status'
2014-01-26 18:24:59 DEBUG otopi.plugins.ovirt_hosted_engine_setup.engine.add_host add_host._wait_host_ready:229 VDSM host in state
2014-01-26 18:25:00 DEBUG otopi.plugins.ovirt_hosted_engine_setup.engine.add_host add_host._wait_host_ready:213 Error fetching host state: 'NoneType' object has no attribute 'status'
2014-01-26 18:25:00 DEBUG otopi.plugins.ovirt_hosted_engine_setup.engine.add_host add_host._wait_host_ready:229 VDSM host in state
2014-01-26 18:25:01 DEBUG otopi.plugins.ovirt_hosted_engine_setup.engine.add_host add_host._wait_host_ready:213 Error fetching host state: 'NoneType' object has no attribute 'status'
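For reference, the manual failover described at the end of this message presumably comes down to something like the following sketch. With no properly configured HA agent coordinating the second host, nothing stops the VM from being started twice, so checking the status on both nodes first is essential:

  # On the node currently running the engine VM:
  hosted-engine --vm-status
  hosted-engine --vm-shutdown

  # On the node that should take over:
  hosted-engine --vm-start

  # Give the engine a minute or two to come up, then verify:
  hosted-engine --vm-status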

On 01/26/2014 10:38 PM, Frank Wall wrote:
Hi Itamar,
On Sun, Jan 26, 2014 at 07:13:35PM +0200, Itamar Heim wrote:
hosted engine supports running on *multiple* nodes, providing out of the box HA between them. the setup should ask you if this is the second host and take care of all the right things to copy.
Thanks for the pointer. The documentation at [1] describes the required steps to set up a second node. Unfortunately it failed for me because my second node had been added to the cluster before [2].
Why would both your hosts have the same UUID?
Even manually setting the second node to "maintenance" and back to state "up" didn't help; hosted-engine-setup was unable to detect the host's state [3].
You opened a bug for this, or made sure one exists, I hope?
Though following the steps was enough to start the hosted-engine VM manually on the second node. This makes a manual failover possible, which is sufficient for now :)
Thanks - Frank
[1] http://www.ovirt.org/Hosted_Engine_Howto#Installing_additional_nodes
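As for the duplicate-UUID question above, a quick way to compare what each host actually reports is sketched below; as far as I know the engine identifies hosts by the UUID VDSM reports, which normally comes from the hardware SMBIOS UUID, or from /etc/vdsm/vdsm.id if that file is present:

  # Run on each host and compare the output; the values should differ
  # between the two hosts.
  dmidecode -s system-uuid
  cat /etc/vdsm/vdsm.id 2>/dev/null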

On Sun, Jan 26, 2014 at 10:40:18PM +0200, Itamar Heim wrote:
Thanks for the pointer. The documentation at [1] describes the required steps to set up a second node. Unfortunately it failed for me because my second node had been added to the cluster before [2].
why would both your hosts have the same uuid?
Both hosts have different UUIDs. The second node was already part of the cluster when I ran hosted-engine-setup... these are the steps I took to trigger this error:
1. ran `hosted-engine --deploy` on a new RHEL6 host (node1)
2. manually added a second node through webadmin (node2)
3. ran `hosted-engine --deploy` on node2 to add a second hosted-engine VM
Even manually setting the second node to "maintenance" and back to state "up" didn't help; hosted-engine-setup was unable to detect the host's state [3].
you opened or made sure there is a bug for this i hope?
Not yet, because I thought it might not be supported to run `hosted-engine --deploy` on a node where vdsmd is already running and which is already part of an oVirt cluster. Regards - Frank

Hi,
----- Original Message -----
From: "Frank Wall" <fw@moov.de>
To: "Itamar Heim" <iheim@redhat.com>
Cc: users@ovirt.org, "Doron Fediuck" <dfediuck@redhat.com>, "Oved Ourfalli" <oourfali@redhat.com>, "Sandro Bonazzola" <sbonazzo@redhat.com>, "Yedidyah Bar David" <didi@redhat.com>
Sent: Sunday, January 26, 2014 10:38:11 PM
Subject: Re: [Users] Hosted-engine runtime issues (3.4 BETA)
Hi Itamar,
On Sun, Jan 26, 2014 at 07:13:35PM +0200, Itamar Heim wrote:
hosted engine supports running on *multiple* nodes, providing out of the box HA between them. the setup should ask you if this is the second host and take care of all the right things to copy.
Thanks for the pointer. The documentation at [1] describes the required steps to set up a second node. Unfortunately it failed for me because my second node had been added to the cluster before [2].
Indeed. Might it be possible to not just move it to maintenance but remove it from the engine? Then you can try the deploy again as a second host.
Even manually setting the second node to "maintenance" and back to state "up" didn't help; hosted-engine-setup was unable to detect the host's state [3].
This is obviously a bug, as Itamar suggested, but I do not think it will be solved for 3.4, except for perhaps a cleaner error message.
Though following the steps was enough to start the hosted-engine VM manually on the second node. This makes a manual failover possible, which is sufficient for now :)
Well, this is also an unplanned, unsupported configuration. It might work for you - I don't know of anyone who has tried that - but if possible, it's really better to add another host with hosted-engine --deploy.
Thanks - Frank
[1] http://www.ovirt.org/Hosted_Engine_Howto#Installing_additional_nodes
Thanks for the report and best regards, -- Didi
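Removing the host from the engine, as suggested above, can be done in the webadmin (Maintenance, then Remove) or scripted against the REST API. The curl sketch below is unverified; the engine URL, credentials and <host-id> are placeholders, and the resource and action names should be checked against the REST API documentation for your build:

  ENGINE=https://engine.example.com     # hypothetical engine FQDN
  AUTH='admin@internal:password'        # hypothetical credentials

  # List hosts to find the id of the one to remove (placeholder <host-id>)
  curl -k -u "$AUTH" "$ENGINE/api/hosts"

  # Move the host to maintenance...
  curl -k -u "$AUTH" -X POST -H 'Content-Type: application/xml' \
       -d '<action/>' "$ENGINE/api/hosts/<host-id>/deactivate"

  # ...then remove it from the engine
  curl -k -u "$AUTH" -X DELETE "$ENGINE/api/hosts/<host-id>"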

On Sun, Jan 26, 2014 at 03:54:34PM -0500, Yedidyah Bar David wrote:
Indeed. Might it be possible to not just move it to maintenance but remove it from the engine? Then you can try the deploy again as a second host.
I'll give it a spin. The host is scheduled for reinstallation tomorrow.
Well, this is also an unplanned, unsupported configuration. It might work for you - I don't know of anyone who has tried that - but if possible, it's really better to add another host with hosted-engine --deploy.
I was expecting this answer :-) That's OK for me, I just wanted to try it out before reinstalling the host. Thanks - Frank

----- Original Message -----
From: "Frank Wall" <fw@moov.de> To: "Yedidyah Bar David" <didi@redhat.com> Cc: "Itamar Heim" <iheim@redhat.com>, users@ovirt.org, "Doron Fediuck" <dfediuck@redhat.com>, "Oved Ourfalli" <oourfali@redhat.com>, "Sandro Bonazzola" <sbonazzo@redhat.com> Sent: Sunday, January 26, 2014 11:04:23 PM Subject: Re: [Users] Hosted-engine runtime issues (3.4 BETA)
On Sun, Jan 26, 2014 at 03:54:34PM -0500, Yedidyah Bar David wrote:
Indeed. Might it be possible to not just move it to maintenance but remove it from the engine? Then you can try the deploy again as a second host.
I'll give it a spin. The host is scheduled for reinstallation tomorrow.
Good luck. Please report about any results/issues. If it fails, you can try first, after removing the host from the engine:
# yum remove libvirt vdsm
# rm -rf /etc/vdsm /etc/libvirt
(or, of course, just reinstall if you do not care).
Well, this is also an unplanned, unsupported configuration. It might work for you - I don't know of anyone who has tried that - but if possible, it's really better to add another host with hosted-engine --deploy.
I was expecting this answer :-) That's OK for me, I just wanted to try it out before reinstalling the host.
That's just fine, of course. And it's nice to know that it worked :-)
Thanks - Frank
Thanks, -- Didi
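Pulling the cleanup hints from this message together, a full reset of the host before re-running the deploy could look roughly like the sketch below. Only do this on a host you are prepared to wipe, and remove it from the engine first as discussed above:

  # Stop whatever is still using vdsm/libvirt
  service vdsmd stop
  service libvirtd stop

  # Remove the packages and their leftover configuration, as suggested above
  yum remove libvirt vdsm
  rm -rf /etc/vdsm /etc/libvirt

  # Then start over with the hosted-engine setup
  yum install ovirt-hosted-engine-setup
  hosted-engine --deploy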

On Sun, Jan 26, 2014 at 04:12:05PM -0500, Yedidyah Bar David wrote:
Please report about any results/issues. If it fails, you can try first, after removing the host from the engine:
# yum remove libvirt vdsm
# rm -rf /etc/vdsm /etc/libvirt
(or, of course, just reinstall if you do not care).
I reinstalled the host and ran `hosted-engine --deploy` again. This time adding the second host worked OK, though I had to work around bugs BZ 1055153 and BZ 1055059 - I know you are already aware of them :-)

https://bugzilla.redhat.com/show_bug.cgi?id=1055153 (vdsmd not starting on first run since the vdsm logs are not included in the rpm)
https://bugzilla.redhat.com/show_bug.cgi?id=1055059 (the --vm-start function does not call the createvm command, but --vm-start-paused does)

So adding a second host basically works when using a fresh install.

Regards - Frank
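For anyone else hitting BZ 1055153, the symptom described in the bug title is vdsmd failing to start because its log directory is missing from the rpm, so recreating it by hand before retrying the setup is a plausible workaround. The sketch below is a guess based on that title; the vdsm:kvm ownership is an assumption to check against your vdsm package:

  # Recreate the log directory vdsm expects and hand it to the vdsm user,
  # then try starting the service again
  mkdir -p /var/log/vdsm
  chown vdsm:kvm /var/log/vdsm
  service vdsmd start

For BZ 1055059, the bug title suggests that `hosted-engine --vm-start-paused` does go through the createvm path while --vm-start does not, so the paused variant may be the interim way to get the VM created until the fix lands.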

----- Original Message -----
From: "Frank Wall" <fw@moov.de> To: "Yedidyah Bar David" <didi@redhat.com> Cc: "Itamar Heim" <iheim@redhat.com>, users@ovirt.org, "Doron Fediuck" <dfediuck@redhat.com>, "Oved Ourfalli" <oourfali@redhat.com>, "Sandro Bonazzola" <sbonazzo@redhat.com> Sent: Tuesday, January 28, 2014 12:02:49 AM Subject: Re: [Users] Hosted-engine runtime issues (3.4 BETA)
On Sun, Jan 26, 2014 at 04:12:05PM -0500, Yedidyah Bar David wrote:
Please report about any results/issues. If it fails, you can try first, after removing the host from the engine:
# yum remove libvirt vdsm
# rm -rf /etc/vdsm /etc/libvirt
(or, of course, just reinstall if you do not care).
I reinstalled the host and ran `hosted-engine --deploy` again. This time adding the second host worked OK, though I had to work around bugs BZ 1055153 and BZ 1055059 - I know you are already aware of them :-)
https://bugzilla.redhat.com/show_bug.cgi?id=1055153 (vdsmd not starting on first run since the vdsm logs are not included in the rpm)
https://bugzilla.redhat.com/show_bug.cgi?id=1055059 (the --vm-start function does not call the createvm command, but --vm-start-paused does)
So adding a second host basically works when using a fresh install.
Very well. Thanks for the report! -- Didi