Hi all,
sorry for being late to such an interesting thread.
I discussed almost this same issue (properly and programmatically
shutting down a complete oVirt environment in a way that also
guarantees a clean and easy power up later) privately with some friends
some time ago.
Please note that the issue has already been discussed on this mailing
list before (we started from those hints):
http://lists.ovirt.org/pipermail/users/2017-August/083667.html
I will translate here from Italian our description of the scenario,
hoping to add something to the discussion (maybe simply as another
use case):
Setup:
* We are talking about a hyperconverged oVirt+GlusterFS (HE-HC) setup
(let's say 1 or 3 nodes, but more should work the same)
* We are talking about abusing the "hyperconverged" term above (so
CTDB/Samba/Gluster-NFS/Gluster-block are also running, directly on the
nodes) ;-)
Business case:
* Let's say that we are in a small business setup and we do not have
the luxury of diesel-powered generators guaranteeing no black-outs
* Let's say that we have (intelligent) UPSs with limited battery, so
that we must make sure that a clean global power down gets initiated
as soon as the UPSs signal that a certain low threshold has been
passed (threshold to be carefully defined in order to give enough
time for a clean shutdown)
* Let's say that those UPSs may be:
* 1 UPS powering everything (smells single-point-of-failure,
but could be)
* 2 UPSs with all physical equipment having redundant (2) power cords
* 3 or more UPSs somehow variously connected
* Let's say that the UPSs may be network-monitored (SNMP on the
ovirtmgmt network) or directly attached to the nodes (USB/serial)
General strategy leading to shutdown decision:
* We want to centralize UPS management and use something like NUT[1]
running on the Engine vm
* Network-controlled UPSs will be directly controlled by NUT running on
the Engine vm, while directly attached UPSs (USB/serial) will be
controlled by NUT running on the nodes they are attached to, but only
in a "proxy" mode (relaying actual control/logic to the NUT service
running on the Engine vm)
* A proper logic will be devised (knowing the capacity of each UPS,
the load it sustains, and what it actually means to power down the
connected equipment in view of quorum maintenance) in order to decide
whether a partial power down or a complete global power down is
needed, in case only a subset of UPSs should experience a low-battery
event (obviously a complete low-battery on all UPSs means global
power down)
Detailed strategy of shutdown implementation:
* A partial power down (only some nodes) means:
* Those nodes will be put in local maintenance (vms get automatically
migrated to other nodes or cleanly shut down if migration is
impossible because of constraints or limited resources; shutdown of
vms should respect proper order, using tags, dependency rules, HA
status or other hints) but without stopping GlusterFS services
(since there are further services depending on those, see below)
* Services running on those nodes get cleanly stopped:
* Proper stopping of oVirt HA Agent and Broker services on
those nodes
* Proper stopping of CTDB (brings down Samba too) and Gluster-block
(NFS-Ganesha too, if used instead of Gluster-NFS) services on
those nodes
* Clean unmounting of all still-mounted GlusterFS volumes on
those nodes
* Clean OS poweroff of those nodes
* A global power down of everything means:
* All guest vms (except the Engine) get cleanly shut down (by means
of oVirt guest agent), possibly in a proper dependency order (using
tags, dependency rules, HA status or other hints)
* All storage domains (except the Engine one) are put in maintenance
* Global oVirt maintenance is activated (no more HA actions to
guarantee that the Engine is up)
* Clean OS poweroff of the Engine vm
* Proper stopping of oVirt HA Agent and Broker services on all nodes
* Proper stopping of CTDB (brings down Samba too) and Gluster-block
(NFS-Ganesha too, if used instead of Gluster-NFS) services on all
nodes
* Clean unmounting of all still-mounted GlusterFS volumes on all
nodes
* Clean stop of all GlusterFS volumes (issued from a single,
chosen node)
* Clean OS poweroff of all nodes
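To make the ordering of the two sequences above explicit, here is a small Python sketch encoding them as step lists (the step labels are informal names of mine, not executable commands; an orchestrator such as an Ansible playbook would map each label to the real operations):

```python
# Sketch encoding the partial and global power-down orderings described
# above. Step labels are informal; they are not executable commands.

def shutdown_plan(mode):
    """Return the ordered list of steps for 'partial' or 'global' mode."""
    # Steps shared by both sequences, in the order given above.
    common_tail = [
        "stop ovirt-ha-agent and ovirt-ha-broker",
        "stop CTDB/Samba and gluster-block/NFS-Ganesha",
        "unmount GlusterFS volumes",
    ]
    if mode == "partial":
        # Local maintenance migrates or cleanly stops the vms first,
        # without stopping GlusterFS services.
        return (["local maintenance on affected nodes"]
                + common_tail
                + ["poweroff affected nodes"])
    if mode == "global":
        return (["shutdown guest vms in dependency order",
                 "maintenance all storage domains except the engine one",
                 "enable HE global maintenance",
                 "poweroff engine vm"]
                + common_tail
                + ["stop all GlusterFS volumes from one chosen node",
                   "poweroff all nodes"])
    raise ValueError("unknown mode: %s" % mode)
```

The hard constraints are visible in the ordering: the Engine vm must be down (and HA agents stopped) before GlusterFS volumes are unmounted, and volumes are only stopped globally, from a single node, after every node has unmounted them.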
Sorry for the lengthy email :-)
Many thanks.
Best regards,
Giuseppe
PS: I will read through the official Ansible role for shutdown asap (I
surely still need a lot of learning to write proper Ansible
playbooks... :-D ). I just published our Ansible mockup [2] of the
above detailed global strategy, but it's based on statically collected
info and must be run from an external machine, to say nothing of my
awful Ansible style and the complete lack of the NUT logic and
configuration part.
On Wed, Sep 12, 2018, at 16:15, Simone Tiraboschi wrote:
On Wed, Sep 12, 2018 at 3:49 PM Gianluca Cecchi
<gianluca.cecchi(a)gmail.com> wrote:
>
> On Wed, Sep 12, 2018 at 10:03 AM Simone Tiraboschi
> <stirabos(a)redhat.com> wrote:
>>>
>>> Does it mean that I have to run the ansible-playbook command from
>>> an external server and use as host in inventory the engine server,
>>> or does it mean that the ansible-playbook command is to be run from
>>> within the server where the ovirt-engine service is running and so
>>> keep intact the lines inside the sample yaml file:
>>> "
>>> - name: oVirt shutdown environment
>>> hosts: localhost
>>> connection: local
>>> "
>>>
>>
>> Both options are valid.
>
> Good! It seems it worked ok in shutdown mode (the default one) in a
> test hosted engine based 4.2.6 environment, where I have 2 hosts
> (both are hosted engine hosts), the hosted engine VM + 3 VMs
> Initially ovnode2 is both SPM and hosts the HostedEngine VM
> If I run the playbook from inside ovmgr42:
>
> [root@ovmgr42 tests]# ansible-playbook test.yml
> [WARNING]: provided hosts list is empty, only localhost is
> available. Note that the implicit
> localhost does not match 'all'
>
> PLAY [oVirt shutdown environment]
> ******************************************************************
>
> TASK [oVirt.shutdown-env : Populate service facts]
> *************************************************
> ok: [localhost]
>
> TASK [oVirt.shutdown-env : Enforce ovirt-engine machine]
> *******************************************
> skipping: [localhost]
>
> TASK [oVirt.shutdown-env : Enforce ovirt-engine status]
> ********************************************
> skipping: [localhost]
>
> TASK [oVirt.shutdown-env : Login to oVirt]
> *********************************************************
> ok: [localhost]
>
> TASK [oVirt.shutdown-env : Get hosts]
> **************************************************************
> ok: [localhost]
>
> TASK [oVirt.shutdown-env : set_fact]
> ***************************************************************
> ok: [localhost]
>
> TASK [oVirt.shutdown-env : Enforce global maintenance mode]
> ****************************************
> skipping: [localhost]
>
> TASK [oVirt.shutdown-env : Warn about HE global maintenace mode]
> ***********************************
> ok: [localhost] => {
>     "msg": "HE global maintenance mode has been set; you have to exit
> it to get the engine VM started when needed\n"
> }
>
> TASK [oVirt.shutdown-env : Shutdown of HE hosts]
> ***************************************************
> changed: [localhost] => (item= . . . u'name': u'ovnode1', . . .
> u'spm': {u'priority': 5, u'status': u'none'}})
> changed: [localhost] => (item= . . . u'name': u'ovnode2', . . .
> u'spm': {u'priority': 5, u'status': u'spm'}})
>
> TASK [oVirt.shutdown-env : Shutdown engine host/VM]
> ************************************************
> Connection to ovmgr42 closed by remote host.
> Connection to ovmgr42 closed.
> [g.cecchi@ope46 ~]$
>
> At the end the 2 hosts (HP blades) are in power off state, as
> expected.
>
> ILO event log of ovnode1:
> Last Update Initial Update Count Description
> 09/12/2018 10:13 09/12/2018 10:13 1 Server power removed.
>
> ILO event log of ovnode2:
> Last Update Initial Update Count Description
> 09/12/2018 10:14 09/12/2018 10:14 1 Server power removed.
>
> Actually due to time settings, they should be read as 11:13 and
> 11:14 my local time
>
> In /var/log/libvirt/qemu/HostedEngine.log of node ovnode2
>
> 2018-09-11 17:04:16.388+0000: starting up libvirt version: 3.9.0, . . .
> hostname: ovnode2
> ...
> 2018-09-12 09:11:29.641+0000: shutting down, reason=shutdown
>
> Actually we are at 11:11 local time
>
> For now I have then manually restarted the whole env.
> I began starting from ovnode2 (that was SPM and hosted the
> HostedEngine VM during shutdown), keeping ovnode1 powered off, and it
> took some time because I got some messages like this (to be read
> bottom up):
>
> Host ovnode1 failed to recover. 9/12/18 2:30:21 PM
> Host ovnode1 is non responsive. 9/12/18 2:30:21 PM
> ...
> Host ovnode1 is not responding. It will stay in Connecting state for
> a grace period of 60 seconds and after that an attempt to fence the
> host will be issued. 9/12/18 2:27:40 PM
> Failed to Reconstruct Master Domain for Data Center MYDC42. 9/12/18
> 2:27:34 PM
> VDSM ovnode2 command ConnectStoragePoolVDS failed: Cannot find master
> domain: u'spUUID=5af30d59-004c-02f2-01c9-0000000000b8,
> sdUUID=cbc308db-5468-4e6d-aabb-f9d133d05de2' 9/12/18 2:27:33 PM
> Invalid status on Data Center MYDC42. Setting status to Non
> Responsive. 9/12/18 2:27:27 PM
> ...
> ETL Service Started 9/12/18 2:26:27 PM
>
> With ovnode1 still powered off, if I try to start it from the
> gui I get:
>
> Trying to power on ovnode1 I get in events:
> Host ovnode1 became non responsive. Fence operation skipped as the
> system is still initializing and this is not a host where hosted
> engine was running on previously. 9/12/18 2:30:21 PM
>
> and as popup I get this "operation canceled" window:
> https://drive.google.com/file/d/1IWXASJHRylZR6ePWtGUcKiLbYjg__eNS/view?us...
> What's the meaning?
> In the phrase "the system is still initializing and this is not a
> host where hosted engine was running", to which host does the term
> THIS refer?
> After some minutes I automatically get (to be read bottom up):
We are tracing and discussing it here:
https://bugzilla.redhat.com/show_bug.cgi?id=1609029
As you noticed, after a few minutes everything comes back to up
status, but the startup phase is really confusing.
We are working on a patch to provide a smoother startup experience,
although I don't see any concrete drawback of the current code.
>
> Host ovnode1 power management was verified successfully. 9/12/18
> 2:40:47 PM
> Status of host ovnode1 was set to Up. 9/12/18 2:40:47 PM
> ..
> No faulty multipath paths on host ovnode1 9/12/18 2:40:46 PM
> Storage Pool Manager runs on Host ovnode2 (Address: ovnode2), Data
> Center MYDC42. 9/12/18 2:37:55 PM
> Reconstruct Master Domain for Data Center MYDC42 completed.
> 9/12/18 2:37:49 PM
> ..
> Host ovnode1 was started by SYSTEM. 9/12/18 2:32:37 PM
> Power management start of Host ovnode1 succeeded. 9/12/18 2:32:37 PM
> Executing power management status on Host ovnode1 using Proxy Host
> ovnode2 and Fence Agent ipmilan:172.16.1.52. 9/12/18 2:32:26 PM
> Power management start of Host ovnode1 initiated. 9/12/18 2:32:26 PM
> Auto fence for host ovnode1 was started. 9/12/18 2:32:26 PM
> Storage Domain ISCSI_2TB (Data Center MYDC42) was deactivated by
> system because it's not visible by any of the hosts. 9/12/18
> 2:32:22 PM
> ..
> Executing power management status on Host ovnode1 using Proxy Host
> ovnode2 and Fence Agent ipmilan:172.16.1.52. 9/12/18 2:32:19 PM
> Power management stop of Host ovnode1 initiated. 9/12/18 2:32:17 PM
> Executing power management status on Host ovnode1 using Proxy Host
> ovnode2 and Fence Agent ipmilan:172.16.1.52. 9/12/18 2:32:16 PM
> ...
> Host ovnode1 failed to recover. 9/12/18 2:30:21 PM
> Host ovnode1 is non responsive. 9/12/18 2:30:21 PM
>
> My questions are:
>
> - what if for some reason ovnode1 was not available during restart?
> Would the system have started the services anyway after some time
> in that case, or could it have been a problem?
ovnode1 will be in a non-operational state until available.
In the meantime the engine could elect a different SPM host,
and so on.
> - If I want to try to start the environment through the ansible
> playbook I see that it seems I have to use the "startup" tag, but is
> it not fully automated?
As you can see from the playbook, that role requires access to the
engine host or VM but not to each managed host.
This is required to fetch the hosts list from the engine and use its
power management capabilities, credentials and so on.
No host details are required for playbook execution.
> "
> A startup mode is also available:
> in the startup mode the role will bring up all the IPMI configured
> hosts and it>> will unset the global maintenance mode if on an hosted-engine
> environment.>> The startup mode will be executed only if the 'startup'
tag is
> applied; shutdown mode is the default.>> The startup mode requires the engine
to be already up.
> "
> Does the last sentence refer to a non-hosted-engine environment?
No, the engine host should be manually powered on if physical, or at
least one HE host (2 for the hyperconverged case) should be powered
on.
Exiting global maintenance mode is up to the user as well.
> Otherwise I don't understand "will unset the global maintenance mode
> if on an hosted-engine environment."
You can also manually power on the engine VM with
"hosted-engine --vm-start" on a specific host while still in global
maintenance mode.
> Also, with IPMI do you mean the power mgmt feature in general (in my
> case I have iLO and not ipmilan), or what?
yes, power management in general, sorry for the confusion.
> Where does it get the facts about hosts in a hosted engine
> environment, as the engine is forcibly down if the hosted engine
> hosts are powered down?
That's why the engine should be up.
>
> Thanks in advance for your time
> Gianluca
>
_________________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/76S4YUHUDLK...
Links:
1. https://networkupstools.org/
2. https://github.com/Heretic-oVirt/ansible/blob/master/hvp/roles/common/glo...