InClusterUpgrade Scheduling Policy

Hello list,

I'm trying to upgrade a self-hosted engine RHEV environment running 3.5/el6 to 3.6/el7. I'm following the process outlined in these two documents:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat...
https://access.redhat.com/solutions/2300331

The problem I'm having is that I don't seem to be able to apply the "InClusterUpgrade" policy (procedure 5.5, step 4). I get the following error:

Can not start cluster upgrade mode, see below for details: VM HostedEngine with id 5ca9cb38-82e5-4eea-8ff6-e2bc33598211 is configured to be not migratable.

But the HostedEngine VM is not one I can edit, due to being mid-upgrade. And even if I could, the setting it's complaining about can't be managed by the engine (I tried in another RHEV instance). Is this a bug? What am I missing to be able to move on? As it stands, the InClusterUpgrade scheduling policy seems useless and can't actually be used.

Thanks for any suggestions/help,
Scott

Hi Scott,

On Thu, Jun 23, 2016 at 8:54 PM, Scott <romracer@gmail.com> wrote:
> Hello list,
>
> I'm trying to upgrade a self-hosted engine RHEV environment running 3.5/el6 to 3.6/el7. I'm following the process outlined in these two documents:
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat...
> https://access.redhat.com/solutions/2300331
>
> The problem I'm having is that I don't seem to be able to apply the "InClusterUpgrade" policy (procedure 5.5, step 4). I get the following error:
>
> Can not start cluster upgrade mode, see below for details: VM HostedEngine with id 5ca9cb38-82e5-4eea-8ff6-e2bc33598211 is configured to be not migratable.
That is correct: only the he-agents on each host decide where the hosted engine VM can start.
> But the HostedEngine VM is not one I can edit, due to being mid-upgrade. And even if I could, the setting it's complaining about can't be managed by the engine (I tried in another RHEV instance).
Also true; what you can currently do with the hosted engine VM is very limited.
> Is this a bug? What am I missing to be able to move on? As it stands, the InClusterUpgrade scheduling policy seems useless and can't actually be used.
That is indeed something the InClusterUpgrade policy does not take into consideration. I will file a bug report.

But here is what you can do: create a temporary cluster, move one host and the hosted engine VM there, upgrade all hosts, and then start the hosted engine VM in the original cluster again.

The detailed steps are:

1) Enter global maintenance mode
2) Create a temporary cluster
3) Put one of the hosted engine hosts which does not currently host the engine into maintenance
4) Move this host to the temporary cluster
5) Stop the hosted engine VM with `hosted-engine --destroy-vm` (it should not come up again since you are in maintenance mode)
6) Start the hosted engine VM with `hosted-engine --start-vm` on the host in the temporary cluster
7) Now you can enable the InClusterUpgrade policy on your main cluster
8) Proceed with your main cluster as described in https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualizat...
9) When all hosts are upgraded and the InClusterUpgrade policy is disabled again, move the hosted engine VM back to the original cluster
10) Upgrade the last host
11) Migrate the last host back
12) Delete the temporary cluster
13) Deactivate maintenance mode

Adding Sandro and Roy to keep me honest.

Roman
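
To make steps 1, 5, 6 and 13 concrete, a minimal sketch of the host-side commands. The flag spellings below (`--set-maintenance`, `--vm-poweroff`, `--vm-start`) are what I'd expect on 3.6 and differ slightly from the steps above, so check `hosted-engine --help` on your hosts first:

    # step 1: on any HE host, keep the HA agents from restarting the VM
    hosted-engine --set-maintenance --mode=global

    # step 5: on the host currently running the engine VM
    hosted-engine --vm-poweroff

    # step 6: on the host already moved to the temporary cluster
    hosted-engine --vm-start

    # step 13: once everything is upgraded, leave global maintenance
    hosted-engine --set-maintenance --mode=none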

Hi Roman,

Thanks for the detailed steps. I follow the idea you have outlined, and I think it's easier than what I had thought of (moving my self-hosted engine back to physical hardware, upgrading, and moving it back to self-hosted). I will give it a spin in my build RHEV cluster tomorrow and let you know how I get on.

Thanks again,
Scott

On Thu, Jun 23, 2016 at 10:26 PM, Scott <romracer@gmail.com> wrote:
> Hi Roman,
>
> Thanks for the detailed steps. I follow the idea you have outlined, and I think it's easier than what I had thought of (moving my self-hosted engine back to physical hardware, upgrading, and moving it back to self-hosted). I will give it a spin in my build RHEV cluster tomorrow and let you know how I get on.
Thanks. The bug is here: https://bugzilla.redhat.com/show_bug.cgi?id=1349745

I thought about the solution and I see one possible problem with this approach: it might be that the engine still thinks the VM is on the old cluster. Let me know if this happens; we can work around that too.

Roman

Hi Roman,

I made it through step 6, however it does look like the problem you mentioned has occurred. My engine VM is running on my host in the temporary cluster; the stats under Hosts show this. But in the Virtual Machines tab the VM still thinks it's on my main cluster, and I can't change that setting. Did you have a suggestion on how to work around this? Thankfully only one of my RHEV instances has this upgrade path.

Thanks for your help,
Scott
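
A quick read-only check of what the engine database thinks, for anyone hitting the same state. The `vm_static` table name is an assumption here (the thread only names `vds_groups` later), so verify it against your schema first:

    # on the engine machine: which cluster does the engine think
    # the HostedEngine VM belongs to? (vm_static is an assumption)
    sudo -u postgres psql engine -c \
      "SELECT vm_name, vds_group_id FROM vm_static WHERE vm_name = 'HostedEngine';"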

Actually, I figured out a workaround. I changed the HostedEngine VM's vds_group_id in the database to the vds_group_id of my temporary cluster (found from the vds_groups table). This worked, and I could put my main cluster in upgrade mode. Now to continue the process...

Thanks,
Scott
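
A minimal sketch of that edit, under stated assumptions: Scott doesn't say how he applied it, `vm_static` as the VM configuration table is my guess, and `<temporary-cluster-id>` is a placeholder. Back up the engine database and stop the ovirt-engine service before editing it directly:

    # find the temporary cluster's id in the 3.6-era clusters table
    sudo -u postgres psql engine -c "SELECT vds_group_id, name FROM vds_groups;"

    # re-point the HostedEngine VM (VM id taken from the error earlier
    # in the thread; vm_static as the target table is an assumption)
    sudo -u postgres psql engine -c \
      "UPDATE vm_static SET vds_group_id = '<temporary-cluster-id>' \
       WHERE vm_guid = '5ca9cb38-82e5-4eea-8ff6-e2bc33598211';"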

On 24 Jun 2016, at 18:34, Scott <romracer@gmail.com> wrote:
> Actually, I figured out a workaround. I changed the HostedEngine VM's vds_group_id in the database to the vds_group_id of my temporary cluster (found from the vds_groups table). This worked, and I could put my main cluster in upgrade mode. Now to continue the process...

Note you don't really need an upgrade mode/policy if you already create a temporary cluster.

Putting aside the HE problem: if you need your other VMs running, you can also just cross-migrate them to the new el7 cluster (in 3.5 mode). If you have many hosts you can move them one by one as you upgrade/reinstall them, and keep migrating VMs from the old cluster to the new one depending on your capacity. At the end you can just remove the old, empty cluster and rename the new one back to the original name :)

Shutting down unneeded VMs will save you time. Note that once you have a 3.5 cluster with all el7 hosts, you anyway need to shut down the VMs in order to upgrade the cluster level to 3.6, so that their configuration changes can happen.

Thanks,
michal
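
For the host moves in this rolling approach, a hedged sketch against the 3.6-era v3 REST API. That a maintenance-mode host's cluster can be changed with a plain PUT is an assumption on my part, and the engine URL, credentials and names are placeholders:

    # move a host (already in maintenance) into the new el7 cluster
    curl -k -u 'admin@internal:password' -X PUT \
         -H 'Content-Type: application/xml' \
         -d '<host><cluster><name>el7-cluster</name></cluster></host>' \
         'https://engine.example.com/api/hosts/<host-id>'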

On Fri, Jun 24, 2016 at 6:34 PM, Scott <romracer@gmail.com> wrote:
> Actually, I figured out a workaround. I changed the HostedEngine VM's vds_group_id in the database to the vds_group_id of my temporary cluster (found from the vds_groups table). This worked, and I could put my main cluster in upgrade mode. Now to continue the process...
That's exactly what I had in mind. I hope you made it through the whole process.

Roman

participants (3)
- Michal Skrivanek
- Roman Mohr
- Scott