[ovirt-users] global vs local maintenance with single host

Gervais de Montbrun gervais at demontbrun.com
Mon Oct 3 19:41:04 UTC 2016


Hi Gianluca,

I forgot to mention that you need to ensure that systemd knows that the new file exists. You should likely run `systemctl daemon-reload` after creating/modifying your custom systemd files. You can see that the After directive is combined from both files. Check it out by running `systemctl show vdsmd.service | grep After`
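So, after creating or changing the override file, something like this on the host should pick up the change and let you check the resulting ordering:

  systemctl daemon-reload
  systemctl show ovirt-ha-broker.service | grep After
  systemctl show vdsmd.service | grep After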

It makes sense to make further changes to ensure that NFS stops last, but I haven't looked into that yet.
:-)
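
That said, if I read the systemd docs correctly, After= ordering is applied in reverse at shutdown, so the override should already keep nfs-server running until ovirt-ha-broker has stopped, at least during a normal shutdown of the host. If you also want to handle vdsmd.service this way instead of editing the file under /usr/lib (your point 2), the same trick should work. A sketch, untested on my side, of /etc/systemd/system/vdsmd.service:

  # Custom vdsmd.service that makes vdsmd start after (and stop before) the NFS server
  .include /usr/lib/systemd/system/vdsmd.service

  [Unit]
  After=nfs-server.service

followed by `systemctl daemon-reload` and a disable/enable of vdsmd.service, as with ovirt-ha-broker.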

Cheers,
Gervais



> On Oct 3, 2016, at 7:22 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote:
> 
> 
> On Sep 28, 2016, at 21:09, "Gervais de Montbrun" <gervais at demontbrun.com> wrote:
> >
> > Hi Gianluca,
> >
> > Instead of editing the system's built-in systemd configuration, you can do the following...
> >
> > Create a file called /etc/systemd/system/ovirt-ha-broker.service
> >
> >> # My custom ovirt-ha-broker.service config that ensures NFS starts before ovirt-ha-broker.service
> >> # thanks Gervais for this tip!  :-)
> >>
> >> .include /usr/lib/systemd/system/ovirt-ha-broker.service
> >>
> >> [Unit]
> >> After=nfs-server.service
> >
> >
> > Then disable and re-enable ovirt-ha-broker.service (systemctl disable ovirt-ha-broker.service ; systemctl enable ovirt-ha-broker.service) so that it starts using your customized systemd unit definition. You can confirm that systemd is using your file by running systemctl status ovirt-ha-broker.service; you'll see something like "Loaded: loaded (/etc/systemd/system/ovirt-ha-broker.service;" in the output.
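> >
> > Collected in one place, the same commands would be something like:
> >
> >   systemctl disable ovirt-ha-broker.service
> >   systemctl enable ovirt-ha-broker.service
> >   systemctl status ovirt-ha-broker.service | grep Loaded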
> >
> > Your file will survive updates, so the service will always wait for NFS to start before starting. You can do the same for your other customizations.
> >
> > Cheers,
> > Gervais
> >
> >
> >
> >> On Sep 28, 2016, at 1:31 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote:
> >>
> >> On Sun, Sep 4, 2016 at 10:54 AM, Yedidyah Bar David <didi at redhat.com> wrote:
> >>>
> >>> On Sat, Sep 3, 2016 at 1:18 PM, Gianluca Cecchi
> >>> <gianluca.cecchi at gmail.com> wrote:
> >>> > Hello,
> >>> > how do the two maintenance modes apply in the case of a single host?
> >>> > During an upgrade, after upgrading the self-hosted engine, leaving global
> >>> > maintenance, and checking that everything is ok, which mode should I then
> >>> > put the host into if I finally want to update it as well?
> >>>
> >>> The docs say to put hosts into maintenance from the engine before upgrading them.
> >>>
> >>> This is (also) so that VMs on them are migrated away to other hosts.
> >>>
> >>> With a single host, you have no other hosts to migrate VMs to.
> >>>
> >>> So you should do something like this:
> >>>
> >>> 1. Set global maintenance (because you are going to take down the
> >>> engine and its vm)
> >>> 2. Shutdown all other VMs
> >>> 3. Shutdown engine vm from itself
> >>> At this point, you should be able to simply stop the HA services. But it
> >>> might be cleaner to first set local maintenance. Not sure, but this might
> >>> be required for vdsm. So:
> >>> 4. Set local maintenance
> >>> 5. Stop the HA services. If setting local maintenance didn't work, it is
> >>> probably better to also stop the vdsm services. This stop should happen
> >>> automatically via yum/rpm anyway, but it is better to do it manually so
> >>> you can see that it worked.
> >>> 6. yum (or dnf) update stuff.
> >>> 7. Start HA services
> >>> 8. Check status. I think you'll see that both local and global maint
> >>> are still set.
> >>> 9. Set maintenance to none
> >>> 10. Check status again - I think that after some time HA will decide
> >>> to start the engine vm, and it should succeed.
> >>> 11. Start all other VMs.
> >>>
> >>> Didn't try this myself.
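> >>>
> >>> Roughly, at the command level it might look something like this on the
> >>> host (untested, command and service names from memory - double-check
> >>> them on your version):
> >>>
> >>>   hosted-engine --set-maintenance --mode=global
> >>>   # shut down the other VMs, then the engine vm from inside itself
> >>>   hosted-engine --set-maintenance --mode=local
> >>>   systemctl stop ovirt-ha-agent ovirt-ha-broker
> >>>   systemctl stop vdsmd supervdsmd    # if local maintenance wasn't enough
> >>>   yum update
> >>>   systemctl start vdsmd ovirt-ha-broker ovirt-ha-agent
> >>>   hosted-engine --vm-status
> >>>   hosted-engine --set-maintenance --mode=none
> >>>   hosted-engine --vm-status    # after a while the engine vm should start
> >>>   # then start the other VMs again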
> >>>
> >>> Best,
> >>> --
> >>> Didi
> >>
> >>
> >> Hello Didi,
> >> I would like to use the updates I have to do on two small lab environments to cross-check the suggested steps.
> >> They are both single host environments with self hosted engine.
> >> One is 4.0.2 and the other is 4.0.3. Both are on CentOS 7.2.
> >> I plan to migrate to the just-released 4.0.4.
> >>
> >> One note: in both environments the storage is NFS and is provided by the host itself, so it is a corner case (this applies to the hosted_storage domain, the main data domain, and the ISO storage domain).
> >> I customized the systemd unit files, basically for the start phase of the host, to take the NFS service into account, but probably something has to be done for the stop phase too?
> >>
> >> 1) In /usr/lib/systemd/system/ovirt-ha-broker.service
> >>
> >> I added the following in the [Unit] section:
> >>
> >> After=nfs-server.service
> >>
> >> The file is overwritten on update, so one has to keep this in mind.
> >>
> >> 2) Also, in vdsmd.service, I changed the After directive
> >> from:
> >> After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
> >>       supervdsmd.service sanlock.service vdsm-network.service
> >>
> >> to:
> >> After=multipathd.service libvirtd.service iscsid.service rpcbind.service \
> >>       supervdsmd.service sanlock.service vdsm-network.service \
> >>       nfs-server.service
> >>
> >> Do you think I need to put any ordering in place between the NFS service and the oVirt services for the stop phase as well?
> >>
> >
> >
> 
> Nice! I'm going to try and see.
> Is there any particular dependency I should add for shutdown ordering, given that my host is also the NFS server providing the data stores?
> Do I need to make NFS stop only after a particular oVirt-related service?
> Thanks,
> Gianluca
