global vs local maintenance with single host

Hello, how do the two maintenance modes apply in the case of a single host? During an upgrade, after upgrading the self-hosted engine, leaving global maintenance, and checking that all is OK, what is the correct mode to put the host into if I finally want to update it too? Thanks, Gianluca

On Sat, Sep 3, 2016 at 1:18 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, how do the two maintenance modes apply in the case of a single host? During an upgrade, after upgrading the self-hosted engine, leaving global maintenance, and checking that all is OK, what is the correct mode to put the host into if I finally want to update it too?
The docs say to put hosts to maintenance from the engine before upgrading them. This is (also) so that VMs on them are migrated away to other hosts. With a single host, you have no other hosts to migrate VMs to. So you should do something like this:

1. Set global maintenance (because you are going to take down the engine and its vm)
2. Shutdown all other VMs
3. Shutdown engine vm from itself

At this point, you should be able to simply stop HA services. But it might be cleaner to first set local maintenance. Not sure but perhaps this might be required for vdsm. So:

4. Set local maintenance
5. Stop HA services. If setting local maintenance didn't work, perhaps better stop also vdsm services. This stop should obviously happen automatically by yum/rpm, but perhaps better do this manually to see that it worked.
6. yum (or dnf) update stuff.
7. Start HA services
8. Check status. I think you'll see that both local and global maint are still set.
9. Set maintenance to none
10. Check status again - I think that after some time HA will decide to start engine vm and should succeed.
11. Start all other VMs.

Didn't try this myself.

Best, -- Didi
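Didi's sequence, expressed as the commands involved — a sketch only, assuming the hosted-engine CLI and the ovirt-ha-agent/ovirt-ha-broker service names used in oVirt 4.x; as Didi says, untested:

```shell
# 1. Global maintenance: HA agents stop monitoring/restarting the engine VM.
hosted-engine --set-maintenance --mode=global
# 2./3. Shut down the other VMs, then the engine VM from inside itself
#       (or via the CLI):
hosted-engine --vm-shutdown
# 4. Local maintenance on this host.
hosted-engine --set-maintenance --mode=local
# 5. Stop the HA services (and, if local maintenance didn't work, vdsm too).
systemctl stop ovirt-ha-agent ovirt-ha-broker
# 6. Update.
yum update
# 7./8. Restart HA services and check status; both maintenance flags
#       are likely still set at this point.
systemctl start ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status
# 9./10. Leave maintenance; after a while HA should start the engine VM.
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status
# 11. Start all other VMs (from the engine, once it is back up).
```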

On Sun, Sep 4, 2016 at 10:54 AM, Yedidyah Bar David <didi@redhat.com> wrote:
Hello Didi,
I would like to leverage the update I have to do on 2 small different lab environments to crosscheck the steps suggested. They are both single-host environments with self-hosted engine. One is 4.0.2 and the other is 4.0.3, both on CentOS 7.2. I plan to migrate to the just-released 4.0.4.

One note: in both environments the storage is NFS and is provided by the host itself, so a corner case (for the hosted_storage domain, the main data domain and the ISO storage domain). I customized the init scripts, basically for the start phase of the server, to take the NFS service into account, but probably something has to be done for stop too?

1) In /usr/lib/systemd/system/ovirt-ha-broker.service, added in section [Unit]:

After=nfs-server.service

The file is overwritten at update, so one has to keep this in mind.

2) Also in vdsmd.service, changed from:

After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
     supervdsmd.service sanlock.service vdsm-network.service

to:

After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
     supervdsmd.service sanlock.service vdsm-network.service \
     nfs-server.service

Do you think I need to put any ordering in place between the NFS service and stopping the oVirt services?

Hi Gianluca,

Instead of editing the system's built-in systemd configuration, you can do the following...

Create a file called /etc/systemd/system/ovirt-ha-broker.service:

# My custom ovirt-ha-broker.service config that ensures NFS starts before ovirt-ha-broker.service
# thanks Gervais for this tip! :-)

.include /usr/lib/systemd/system/ovirt-ha-broker.service

[Unit]
After=nfs-server.service

Then disable and enable ovirt-ha-broker.service (systemctl disable ovirt-ha-broker.service ; systemctl enable ovirt-ha-broker.service) and you should see that it is using your customized systemd unit definition. You can see that systemd is using your file by running systemctl status ovirt-ha-broker.service. You'll see something like "Loaded: loaded (/etc/systemd/system/ovirt-ha-broker.service;" in the output.

Your file will survive updates, and ovirt-ha-broker will therefore always wait for NFS to start first. You can do the same for your other customizations.

Cheers, Gervais

On 28 Sep 2016 at 21:09, "Gervais de Montbrun" <gervais@demontbrun.com> wrote:
Hi Gianluca,
Instead of editing the system's built-in systemd configuration, you can do the following...
Create a file called /etc/systemd/system/ovirt-ha-broker.service
# My custom ovirt-ha-broker.service config that ensures NFS starts before ovirt-ha-broker.service
# thanks Gervais for this tip! :-)
.include /usr/lib/systemd/system/ovirt-ha-broker.service
[Unit]
After=nfs-server.service
Then disable and enable ovirt-ha-broker.service (systemctl disable ovirt-ha-broker.service ; systemctl enable ovirt-ha-broker.service) and you should see that it is using your customized systemd unit definition. You can see that systemd is using your file by running systemctl status ovirt-ha-broker.service. You'll see something like "Loaded: loaded (/etc/systemd/system/ovirt-ha-broker.service;" in the output.
Your file will survive updates, and ovirt-ha-broker will therefore always wait for NFS to start first. You can do the same for your other customizations.
Cheers, Gervais
Nice! I'm going to try and see. Any particular dependency I should add for shutdown order, given that my host is also the NFS server providing the data stores? Do I need to make nfs stop only after a particular oVirt-related service? Thanks, Gianluca
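A side note on Gervais's trick: systemd also supports "drop-in" directories, which merge extra directives into the packaged unit without copying it (no .include line needed). A sketch, demonstrated against a scratch directory so it can be run anywhere — the drop-in path and file name are illustrative; on a real host you would write to /etc/systemd/system/ovirt-ha-broker.service.d/ and run `systemctl daemon-reload` afterwards:

```shell
set -e
# Scratch root standing in for / on a real host.
root=$(mktemp -d)
dropin="$root/etc/systemd/system/ovirt-ha-broker.service.d"
mkdir -p "$dropin"
# Drop-in fragments are merged with the shipped unit file; this one only
# adds an ordering dependency on nfs-server.service.
cat > "$dropin/10-wait-for-nfs.conf" <<'EOF'
[Unit]
After=nfs-server.service
EOF
# Show what systemd would merge into the unit definition.
cat "$dropin/10-wait-for-nfs.conf"
```

Like the full-override file, a drop-in survives package updates; `systemctl status ovirt-ha-broker.service` lists applied drop-ins under "Drop-In:".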

On Oct 3, 2016, at 7:22 AM, Gianluca Cecchi = <gianluca.cecchi@gmail.com> wrote: =20 =20 Il 28/Set/2016 21:09, "Gervais de Montbrun" <gervais@demontbrun.com = <mailto:gervais@demontbrun.com>> ha scritto:
Hi Gianluca,
Instead of editing the system's built in systemd configuration, you =
can do the following...
Create a file called /etc/systemd/system/ovirt-ha-broker.service
# My custom ovirt-ha-broker.service config that ensures NFS starts =
before ovirt-ha-broker.service
# thanks Gervais for this tip! :-)
.include /usr/lib/systemd/system/ovirt-ha-broker.service
[Unit] After=3Dnfs-server.service
Then disable and enable ovirt-ha-broker.service (systemctl disable = ovirt-ha-broker.service ; systemctl enable ovirt-ha-broker.service) and = you should see that it is using your customized systemd unit definition. = You can see that systemd is using your file by running systemctl status = ovirt-ha-broker.service. You'll see something like "Loaded: loaded = (/etc/systemd/system/ovirt-ha-broker.service;" in the output.
Your file will survive updates and therefore always wait for nfs to = start prior to starting. You can do the same for your other = customizations.
Cheers, Gervais
On Sep 28, 2016, at 1:31 PM, Gianluca Cecchi = <gianluca.cecchi@gmail.com <mailto:gianluca.cecchi@gmail.com>> wrote:
On Sun, Sep 4, 2016 at 10:54 AM, Yedidyah Bar David = <didi@redhat.com <mailto:didi@redhat.com>> wrote:
On Sat, Sep 3, 2016 at 1:18 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com <mailto:gianluca.cecchi@gmail.com>> =
wrote:
Hello, how do the two modes apply in case of single host? During an upgrade phase, after having upgraded the self hosted = engine and leaving global maintenance and having checked all is ok, what is =
mode then to put host if I want finally to update it too?
The docs say to put hosts to maintenance from the engine before = upgrading them.
This is (also) so that VMs on them are migrated away to other = hosts.
With a single host, you have no other hosts to migrate VMs to.
So you should do something like this:
1. Set global maintenance (because you are going to take down the engine and its vm) 2. Shutdown all other VMs 3. Shutdown engine vm from itself At this point, you should be able to simply stop HA services. But = it might be cleaner to first set local maintenance. Not sure but =
this might be required for vdsm. So: 4. Set local maintenance 5. Stop HA services. If setting local maintenance didn't work, =
better stop also vdsm services. This stop should obviously happen automatically by yum/rpm, but perhaps better do this manually to = see that it worked. 6. yum (or dnf) update stuff. 7. Start HA services 8. Check status. I think you'll see that both local and global =
are still set. 9. Set maintenance to none 10. Check status again - I think that after some time HA will = decide to start engine vm and should succeed. 11. Start all other VMs.
Didn't try this myself.
Best, -- Didi
Hello Didi, I would like to leverage the update I have to do on 2 small = different lab environments to crosscheck the steps suggested. They are both single host environments with self hosted engine. One is 4.0.2 and the other is 4.0.3. Both on CentoS 7.2 I plan to migrate to the just released 4.0.4
One note: in both environments the storage is NFS and is provided = by the host itself, so a corner case (for all hosted_storage domain, =
--Apple-Mail=_47811EC6-85A4-4378-9247-A576777E881E Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset=us-ascii Hi Gianluca, I forgot to mention that you need to ensure that systemd knows that the = new file exists. You should likely run `systemctl daemon-reload` after = creating/modifying your custom systemd files. You can see that the After = directive is combined from both files. Check it out by running = `systemctl show vdsmd.service | grep After` It makes sense to make further changes to ensure that NFS stops last, = but I haven't looked into that yet. :-) Cheers, Gervais the correct perhaps perhaps maint main data domain and iso storage domain).
I customized the init scripts, basically for start phase of the = server and to keep in count of the NFS service, but probably something = has to be done for stop too?
1) In /usr/lib/systemd/system/ovirt-ha-broker.service
added in section [Unit]
After=3Dnfs-server.service
The file is overwritten at update so one has to keep in mind this
2) also in vdsmd.service changed=20 from: After=3Dmultipathd.service libvirtd.service iscsid.service = rpcbind.service \ supervdsmd.service sanlock.service vdsm-network.service
to: After=3Dmultipathd.service libvirtd.service iscsid.service = rpcbind.service \ supervdsmd.service sanlock.service vdsm-network.service \ nfs-server.service
Do you think any order setup I have to put in place related to NFS = service and oVirt services stop?
_______________________________________________ Users mailing list Users@ovirt.org <mailto:Users@ovirt.org> http://lists.ovirt.org/mailman/listinfo/users = <http://lists.ovirt.org/mailman/listinfo/users>
=20 Nice! I'm going to try and see. Any particular dependency I should add for shutdown order due to the = fact that my host is also the NFS server providing data stores? Do I need to set up nfs stop only after a particular ovirt related = service? Thanks, Gianluca
--Apple-Mail=_47811EC6-85A4-4378-9247-A576777E881E Content-Transfer-Encoding: 7bit Content-Type: text/html; charset=us-ascii <html><head><meta http-equiv="Content-Type" content="text/html charset=us-ascii"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class=""><div dir="auto" style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class="">Hi Gianluca,<div class=""><br class=""></div><div class="">I forgot to mention that you need to ensure that systemd knows that the new file exists. You should likely run `systemctl daemon-reload` after creating/modifying your custom systemd files. You can see that the After directive is combined from both files. Check it out by running `systemctl show vdsmd.service | grep After`</div><div class=""><br class=""></div><div class="">It makes sense to make further changes to ensure that NFS stops last, but I haven't looked into that yet.</div><div class="">:-)<br class=""><div class=""> <div id="signature" class=""><br class="">Cheers,<br class="">Gervais<br class=""><br class=""><br class=""></div> </div> <br class=""><div><blockquote type="cite" class=""><div class="">On Oct 3, 2016, at 7:22 AM, Gianluca Cecchi <<a href="mailto:gianluca.cecchi@gmail.com" class="">gianluca.cecchi@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div class=""><br class="webkit-block-placeholder"></div><p dir="ltr" class="">Il 28/Set/2016 21:09, "Gervais de Montbrun" <<a href="mailto:gervais@demontbrun.com" class="">gervais@demontbrun.com</a>> ha scritto:<br class=""> ><br class=""> > Hi Gianluca,<br class=""> ><br class=""> > Instead of editing the system's built in systemd configuration, you can do the following...<br class=""> ><br class=""> > Create a file called /etc/systemd/system/ovirt-ha-broker.service<br class=""> ><br class=""> >> # My custom ovirt-ha-broker.service config that ensures NFS starts before 
On Sep 28, 2016, Gervais de Montbrun wrote:

> To keep the NFS dependency across package updates, create
> /etc/systemd/system/ovirt-ha-broker.service containing:
>
>     .include /usr/lib/systemd/system/ovirt-ha-broker.service
>
>     [Unit]
>     After=nfs-server.service
>
> Then disable and re-enable the service (systemctl disable ovirt-ha-broker.service ;
> systemctl enable ovirt-ha-broker.service) and systemd will use your customized
> unit definition. You can verify this by running
> systemctl status ovirt-ha-broker.service: the output will show something like
> "Loaded: loaded (/etc/systemd/system/ovirt-ha-broker.service;".
>
> Your file will survive updates, and the service will therefore always wait for
> NFS to start before starting. You can do the same for your other customizations.
>
> Cheers,
> Gervais
>
>> On Sep 28, 2016, at 1:31 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
>>
>> On Sun, Sep 4, 2016 at 10:54 AM, Yedidyah Bar David <didi@redhat.com> wrote:
>>> [snip]
>>
>> Hello Didi,
>> I would like to use the update I have to do on two small lab environments to
>> cross-check the suggested steps. They are both single-host environments with a
>> self-hosted engine. One is 4.0.2 and the other 4.0.3, both on CentOS 7.2. I
>> plan to migrate to the just-released 4.0.4.
>>
>> One note: in both environments the storage is NFS and is provided by the host
>> itself (a corner case), for all of the hosted_storage domain, the main data
>> domain and the ISO storage domain. I customized the unit files, basically for
>> the start phase of the host, to take the NFS service into account, but
>> probably something has to be done for stop too?
>>
>> 1) In /usr/lib/systemd/system/ovirt-ha-broker.service I added to the [Unit]
>> section:
>>
>> After=nfs-server.service
>>
>> Keep in mind that this file is overwritten on update.
>>
>> 2) Also in vdsmd.service I changed
>> from:
>> After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
>>       supervdsmd.service sanlock.service vdsm-network.service
>> to:
>> After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
>>       supervdsmd.service sanlock.service vdsm-network.service \
>>       nfs-server.service
>>
>> Do you think I need to put any ordering in place between the NFS service and
>> the oVirt services for stop as well?

Nice! I'm going to try it and see.
Any particular dependency I should add for shutdown ordering, given that my host is also the NFS server providing the data stores? Do I need NFS to stop only after a particular oVirt-related service?
Thanks,
Gianluca
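As a sketch of Gervais' full-override approach (paths and unit names as in the messages above; the commands themselves are not from the thread and assume root on the host):

```shell
# A unit file in /etc/systemd/system shadows the packaged one in
# /usr/lib/systemd/system and survives package updates.
cat > /etc/systemd/system/ovirt-ha-broker.service <<'EOF'
.include /usr/lib/systemd/system/ovirt-ha-broker.service

[Unit]
After=nfs-server.service
EOF

# Re-register the service so systemd picks up the new unit path.
systemctl disable ovirt-ha-broker.service
systemctl enable ovirt-ha-broker.service

# Verify which file systemd loaded; expect
# "Loaded: loaded (/etc/systemd/system/ovirt-ha-broker.service; ..."
systemctl status ovirt-ha-broker.service | grep Loaded
```

Note that `.include` is legacy systemd syntax; on newer systemd releases the idiomatic equivalent is a drop-in, e.g. a file `/etc/systemd/system/ovirt-ha-broker.service.d/nfs.conf` containing just the `[Unit]` / `After=nfs-server.service` lines, which also survives updates without duplicating the full unit.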

On Sun, Sep 4, 2016 at 10:54 AM, Yedidyah Bar David <didi@redhat.com> wrote:
On Sat, Sep 3, 2016 at 1:18 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, how do the two modes apply in case of single host? During an upgrade phase, after having upgraded the self hosted engine and leaving global maintenance and having checked all is ok, what is the correct mode then to put host if I want finally to update it too?
The docs say to put hosts to maintenance from the engine before upgrading them.
This is (also) so that VMs on them are migrated away to other hosts.
With a single host, you have no other hosts to migrate VMs to.
So you should do something like this:
1. Set global maintenance (because you are going to take down the engine and its vm)
2. Shutdown all other VMs
3. Shutdown engine vm from itself

At this point, you should be able to simply stop HA services. But it might be cleaner to first set local maintenance. Not sure, but perhaps this might be required for vdsm. So:

4. Set local maintenance
5. Stop HA services. If setting local maintenance didn't work, perhaps better to stop the vdsm services as well. This stop should obviously happen automatically via yum/rpm, but perhaps better to do it manually to see that it worked.
6. yum (or dnf) update stuff.
7. Start HA services
8. Check status. I think you'll see that both local and global maintenance are still set.
9. Set maintenance to none
10. Check status again - I think that after some time HA will decide to start the engine vm and should succeed.
11. Start all other VMs.
Didn't try this myself.
Best, -- Didi
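Didi's numbered steps map roughly onto these commands (a sketch, not from the thread itself; the hosted-engine flags and HA service names are the standard ones in oVirt 3.6/4.0, and the whole sequence is untested here, as in the thread):

```shell
# 1. Global maintenance: the HA agent will not touch the engine vm.
hosted-engine --set-maintenance --mode=global
# 2-3. Shut down all other VMs from within the guests, then the engine vm
#      (from inside the engine vm, or from the host with:)
hosted-engine --vm-shutdown
# 4. Local maintenance for this host.
hosted-engine --set-maintenance --mode=local
# 5. Stop the HA services by hand (and vdsm too, if local maintenance
#    did not take effect).
systemctl stop ovirt-ha-agent ovirt-ha-broker
# 6. Update.
yum update
# 7-8. Restart the HA services and check status.
systemctl start ovirt-ha-broker ovirt-ha-agent
hosted-engine --vm-status
# 9-10. Leave maintenance; after some time HA should start the engine vm.
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status
# 11. Start all other VMs from the engine once it is back up.
```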
I tested on one of the 2 environments. It seems it worked. But I updated the kernel on the host without restarting it; I will try that with the other one.

Some notes:

> 8. Check status. I think you'll see that both local and global maint
> are still set.

Actually, even if I'm in global maintenance and then set local maintenance, it seems I "lose" the global maintenance state... I see this output, without the line with Global Maintenance and exclamation marks:

[root@ractor ~]# hosted-engine --vm-status
/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py:15: DeprecationWarning: vdscli uses xmlrpc. since ovirt 3.6 xmlrpc is deprecated, please use vdsm.jsonrpcvdscli
  import vdsm.vdscli

--== Host 1 status ==--

Status up-to-date                  : False
Hostname                           : ractor.mydomain
Host ID                            : 1
Engine status                      : unknown stale-data
Score                              : 0
stopped                            : False
Local maintenance                  : True
crc32                              : d616dde1
Host timestamp                     : 3304360
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=3304360 (Mon Oct 3 22:27:07 2016)
    host-id=1
    score=0
    maintenance=True
    state=LocalMaintenance
    stopped=False
[root@ractor ~]#

I'm able to exit maintenance, connect to the engine and start the other VMs. Now I have to try it taking into account also the restart of the hypervisor host, due to the new kernel package install.

Gianluca
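For scripting, the key=value metadata lines in that output are easy to filter; a small sketch (the field names are taken from the output above, and on a live host the same filter would be fed from hosted-engine --vm-status instead of the sample variable):

```shell
# Extract the maintenance-related metadata fields, as one might from:
#   hosted-engine --vm-status | grep -E '^(state|maintenance)='
# Demonstrated here on sample lines copied from the output above.
sample='score=0
maintenance=True
state=LocalMaintenance
stopped=False'
printf '%s\n' "$sample" | grep -E '^(state|maintenance)='
# prints:
# maintenance=True
# state=LocalMaintenance
```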
participants (3)
- Gervais de Montbrun
- Gianluca Cecchi
- Yedidyah Bar David