Automatically migrate VM between hosts in the same cluster

Hi list!

I'm new to oVirt. Right now I have configured a cluster with two hosts and a VM. I can migrate the VM between the two hosts without any problem, but what I need is for the VM to migrate automatically if a host goes down. Migration only happens if I put a host into "Maintenance", but that is not (only!) what I need...

Can someone help me to configure oVirt (3.5) to automatically check the hosts and migrate the VM on host failure?

Thanks a lot!

Best regards,

Luca Bertoncello

On 17/09/15 14:25, Luca Bertoncello wrote:
Hi list!
I'm new to oVirt. Right now I have configured a cluster with two hosts and a VM. I can migrate the VM between the two hosts without any problem, but what I need is for the VM to migrate automatically if a host goes down.
Migration only happens if I put a host into "Maintenance", but that is not (only!) what I need...
Can someone help me to configure oVirt (3.5) to automatically check the hosts and migrate the VM on host failure?
Thanks a lot!
Best regards,
Luca Bertoncello
You can't live migrate on a host failure - as the host has gone down and all the running VMs on it have as well! It would require clairvoyance to enable live migration in that situation.

However, you can enable HA on your VMs. If the host they are running on fails, they will be restarted automatically on another host. NB: this *requires* power management, so the failed host can be fenced.

Alex
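For the record, the HA flag Alex mentions is the "Highly Available" checkbox in the VM edit dialog; it can also be set through the engine API. A minimal sketch with the v3 Python SDK (ovirt-engine-sdk-python), where the engine URL, credentials and VM name are placeholders:

    # Sketch: enable HA for a VM via the oVirt 3.5 (v3) Python SDK.
    # Engine URL, credentials and VM name below are placeholders.
    from ovirtsdk.api import API
    from ovirtsdk.xml import params

    api = API(url='https://engine.example.com/ovirt-engine/api',
              username='admin@internal', password='secret', insecure=True)
    vm = api.vms.get(name='myvm')
    vm.set_high_availability(params.HighAvailability(enabled=True, priority=50))
    vm.update()  # persist the change on the engine
    api.disconnect()

Remember that without working power management the engine cannot fence a dead host, so the HA restart will not happen automatically.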

Hello Alex
You can't live migrate on a host failure - as the host has gone down and all the running VMs on it have as well! It would require clairvoyance to enable live migration in that situation.
Is it possible to enable that? How?
However, you can enable HA on your VMs. If the host they are running on fails, they will be restarted automatically on another host. NB: this *requires* power management, so the failed host can be fenced.
Well, we don't have power management right now... We have Dell hardware, but this PC (right now I'm just experimenting, so I'm not using "real" servers) doesn't have any PM... Maybe there is an emulator or another solution so I can see how it works?

On the servers we have DRAC 6, but oVirt offers just drac5 or drac7... Does it not work with DRAC 6?

Thank you very much

Best regards,

Luca Bertoncello

On 17/09/15 15:44, Luca Bertoncello wrote:
Hello Alex
You can't live migrate on a host failure - as the host has gone down and all the running VMs on it have as well! It would require clairvoyance to enable live migration in that situation. Is it possible to enable that? How?
No, it's physically impossible. If your host running VM "foo" has unexpectedly gone down (e.g. the PSU failed), VM "foo" has also gone down. Therefore it is not possible to live migrate VM "foo", since it is no longer running!

Even if it was a network failure and the host was still up, how would you live migrate a VM from a host you can't even talk to?

The only way you could do it would be if you somehow magically knew far enough in advance that the host was about to fail (!), giving you enough time to migrate the machines off. But how would you ever know that "machine quux.bar.net is going to fail in 7 minutes"?
However, you can enable HA on your VMs. If the host they are running on fails, they will be restarted automatically on another host. NB: this *requires* power management, so the failed host can be fenced.
Well, we don't have power management right now... We have Dell hardware, but this PC (right now I'm just experimenting, so I'm not using "real" servers) doesn't have any PM... Maybe there is an emulator or another solution so I can see how it works?
You can use "Manual Fencing" but I don't even know if that is enabled in Ovirt. However it would need your intervention (Ie you are told that a host has failed, and asked, if it is not powered off, to go and power it off by hand. Once you have confirmed that is done, the VMs will start on the other host. The reason you have to fence the failed machine is to make sure you don't end up with two of the same VM running, which would completely corrupt the image on the shared storage. When using plain old libvirt, I have more than once accidentally started the same VM on two hosts, and the VM was utterly unrecoverable afterwards.
On the servers we have DRAC 6, but oVirt offers just drac5 or drac7... Does it not work with DRAC 6?
I don't know. Maybe just try both, and run the test on the PM config tab?

Otherwise, how about picking up a second-hand APC network-switched PDU off eBay? That is actually one of the best ways to enable fencing.

Cheers

Alex
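If the UI test is inconclusive, the fence agents that oVirt calls can also be exercised directly. A hedged sketch probing a DRAC from Python, where the address and credentials are placeholders, the fence-agents package is assumed to be installed, and the fallback relies on DRAC 6 controllers usually answering plain IPMI-over-LAN:

    # Sketch: probe a DRAC with the same fence agents oVirt would use.
    # Address and credentials below are placeholders.
    import subprocess

    def fence_status(agent, addr, user, password):
        """Ask a fence agent for the power status of the managed host."""
        proc = subprocess.run(
            [agent, '-a', addr, '-l', user, '-p', password, '-o', 'status'],
            capture_output=True, text=True)
        return proc.returncode, (proc.stdout + proc.stderr).strip()

    # Try the drac5 agent first, then fall back to generic IPMI-over-LAN.
    for agent in ('fence_drac5', 'fence_ipmilan'):
        rc, out = fence_status(agent, '192.0.2.10', 'root', 'calvin')
        print(agent, '->', rc, out)  # rc == 0 means the agent could talk to it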

On Thu, Sep 17, 2015 at 6:00 PM, Alex Crow <acrow@integrafin.co.uk> wrote:
On 17/09/15 15:44, Luca Bertoncello wrote:
Hello Alex
You can't live migrate on a host failure - as the host has gone down and all the running VMs on it have as well! It would require clairvoyance to enable live migration in that situation.
Is it possible to enable that? How?
No, it's physically impossible. If your host running VM "foo" has unexpectedly gone down (e.g. the PSU failed), VM "foo" has also gone down. Therefore it is not possible to live migrate VM "foo", since it is no longer running!
- If the PSU failed, your UPS could alert you. If you have one...
- If the machine is going down in an ordinary flow, surely it can be done.
Even if it was a network failure and the host was still up, how would you live migrate a VM from a host you can't even talk to?
It could be suspended to disk (local) - if the disk is available. Then the decision whether it is to be resumed from the local disk or not (as it might be HA'ed and running elsewhere) needs to be taken later, of course.
The only way you could do it would be if you somehow magically knew far enough in advance that the host was about to fail (!), giving you enough time to migrate the machines off. But how would you ever know that "machine quux.bar.net is going to fail in 7 minutes"?
I completely agree there are situations in which you can't foresee the failure. But in many, you can. In those cases, it makes sense for the host to self-initiate a 'move to maintenance'. The policy of what to do when 'self-moving to maintenance' could be pre-fetched from the engine.
Y.

I don't really think this is practical:
- If the PSU failed, your UPS could alert you. If you have one...
If you have only one PSU in a host, a UPS is not going to stop you losing all the VMs on that host. OK, if you had N+1 PSUs, you might be able to monitor for this (IPMI/LOM/DRAC, etc.) and use the API to put the host into maintenance. Also, a lot of people rely on low-cost white-box servers and decide that it's OK if a single PSU in a host dies, as, well, we have HA to restart things on other hosts. If they have N+1 PSUs in the hosts, do they really have to migrate everything off? Swings and roundabouts, really.

I'm also not sure I've seen any practical DC setups where a UPS can monitor the load of every single attached physical machine and figure out that one of its redundant PSUs has failed - I'd love to know if there are, as that would be really cool.
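The detection half of that is straightforward, for what it's worth; a sketch that polls PSU health through the local BMC, assuming ipmitool is installed and the BMC is reachable:

    # Sketch: read the power-supply sensors from the local BMC.
    import subprocess

    output = subprocess.check_output(
        ['ipmitool', 'sdr', 'type', 'Power Supply'], text=True)
    print(output)
    # Healthy supplies typically report "Presence detected"; a dead or
    # unplugged one shows e.g. "Failure detected" or "Power Supply AC lost",
    # at which point a monitoring job could ask the engine to drain the host
    # (see the maintenance script sketched later in the thread).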
- If the machine is going down in an ordinary flow, surely it can be done.
Isn't that what "Maintenance mode" is for?
Even if it was a network failure and the host was still up, how would you live migrate a VM from a host you can't even talk to?
It could be suspended to disk (local) - if the disk is available. Then the decision whether it is to be resumed from the local disk or not (as it might be HA'ed and running elsewhere) needs to be taken later, of course.
Yes, but that's not even remotely possible with oVirt right now. I was trying to be practical, as the OP has only just started using oVirt, and I think it might be a bit much to ask him to start coding up what he'd like.
The only way you could do it would be if you somehow magically knew far enough in advance that the host was about to fail (!), giving you enough time to migrate the machines off. But how would you ever know that "machine quux.bar.net is going to fail in 7 minutes"?
I completely agree there are situations in which you can't foresee the failure. But in many, you can. In those cases, it makes sense for the host to self-initiate a 'move to maintenance'. The policy of what to do when 'self-moving to maintenance' could be pre-fetched from the engine.
Y.
Hmm, I would love that to be true. But I've seen so many so-called "corner cases" that I now think the failure area in a datacenter is a fractal with infinite corners. Yes, you could monitor SMART on local drives, pick up uncorrected ECC errors, use "sensors" to check for sagging voltages or high temps, but I don't think you can ever hope to catch everything, and you could end up triggering a migration "storm". I've had more than enough of "Enterprise Spec" switches suddenly going nuts and spamming corrupt MACs all over the LAN to know you can't ever account for everything.

I think it's better to adopt the model of redundancy in software and services, so no one even notices if a VM host goes away; there's always something else to take up the slack. Just like the origins of the Internet: the network should be dumb and the applications should cope with it! Any infrastructure that can't cope with the loss of a few VMs for a few minutes probably needs a refresh.

Cheers

Alex

There are PDUs where you can monitor power draw per port, and that would kind of tell you if a PSU failed, as the load would be 0.

Hi all,

thank you very much for your answers.
So:

1) Of course we have UPS (more than one, in our server room), and of course they will notify the hosts if they are on battery.

2) My question was: "what can I do, so that in case of a kernel panic or similar, the VM will be migrated (live or not) to another host?"

3) I'd like to have a shutdown script on the host that puts the host into Maintenance and waits until that is done, so that I can just shut down or reboot it without any other action. Is that possible? It would help to manage power failures too, assuming that the other hosts have better UPS coverage (which is possible...).

Thanks a lot

Best regards,

Luca Bertoncello

On 18/09/15 07:30, Luca Bertoncello wrote:
Hi all,
thank you very much for your answers.
So:
1) Of course, we have UPS. More than one, in our server room, and of course they will send a notification to the hosts if they are on battery.
Good.
2) My question was: what can I do so that, in case of a kernel panic or similar, the VM will be migrated (live or not) to another host?
You would make the VMs HA and acquire a fencing solution.
3) I'd like to have a shutdown script on the host that puts the host into Maintenance and waits until it's done, so that I can just shut down or reboot it without any other action. Is that possible? It would help to manage a power failure, too, assuming that the other hosts have a better UPS (which is possible).
You could probably use the REST API on the Ovirt Engine for that. But it might be better to have a highly available machine (VM or not) running something like Nagios or Icinga, which would perform the monitoring of your hosts and connect to the REST API to perform maintenance and shutdown. You might also consider a UPS service like NUT (unless you're already using it). Cheers Alex
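As a rough sketch of that suggestion (everything below is a placeholder: engine URL, credentials, host UUID; the deactivate action and the "maintenance" state are taken from the 3.5-era REST API, so check them against your engine):

#!/usr/bin/env python
# Sketch only: move an oVirt host into Maintenance over the REST API
# and wait until the engine reports it is actually there.
import time
import requests

ENGINE = "https://engine.example.com/api"            # placeholder
AUTH = ("admin@internal", "secret")                   # placeholder
HOST = "hosts/00000000-0000-0000-0000-000000000000"   # placeholder UUID

def deactivate_and_wait(timeout=600):
    # POST <host>/deactivate asks the engine to drain the host;
    # running VMs are live-migrated to the other cluster members.
    r = requests.post("%s/%s/deactivate" % (ENGINE, HOST),
                      data="<action/>",
                      headers={"Content-Type": "application/xml"},
                      auth=AUTH, verify=False)
    r.raise_for_status()
    # Poll the host resource until its status says "maintenance".
    deadline = time.time() + timeout
    while time.time() < deadline:
        r = requests.get("%s/%s" % (ENGINE, HOST), auth=AUTH, verify=False)
        r.raise_for_status()
        if "<state>maintenance</state>" in r.text:
            return True
        time.sleep(5)
    return False

if __name__ == "__main__":
    print("in maintenance" if deactivate_and_wait() else "timed out")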
Thanks a lot
Mit freundlichen Grüßen
Luca Bertoncello
*From:* users-bounces@ovirt.org *On Behalf Of* matthew lagoe *Sent:* Thursday, September 17, 2015 9:56 PM *To:* 'Alex Crow'; 'Yaniv Kaul' *Cc:* users@ovirt.org *Subject:* Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster
There are PDUs on which you can monitor the power draw per port; that would more or less tell you if a PSU failed, as the load on that port would drop to 0.
*From:* users-bounces@ovirt.org *On Behalf Of* Alex Crow *Sent:* Thursday, September 17, 2015 12:31 PM *To:* Yaniv Kaul <ykaul@redhat.com> *Cc:* users@ovirt.org *Subject:* Re: [ovirt-users] Automatically migrate VM between hosts in the same cluster
I don't really think this is practical:
- If the PSU failed, your UPS could alert you. If you have one...
If you have only one PSU in a host, a UPS is not going to stop you losing all the VMs on that host. OK, if you had N+1 PSUs, you may be able to monitor for this (IPMI/LOM/DRAC etc.) and use the API to put a host into maintenance. Also, a lot of people rely on low-cost white-box servers and decide that it's OK if a single PSU in a host dies, as, well, we have HA to start on other hosts. If they have N+1 PSUs in the hosts, do they really have to migrate everything off? Swings and roundabouts, really.
I'm also not sure I've seen any practical DC setups where a UPS can monitor the load for every single attached physical machine and figure out that one of the redundant PSUs in it has failed - I'd love to know if there are as that would be really cool.
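That said, where a PDU does expose per-outlet readings over SNMP, the check matthew describes is scriptable; a rough sketch with pysnmp, assuming a placeholder vendor OID for outlet wattage (every PDU vendor ships its own MIB):

#!/usr/bin/env python
# Sketch only: read one outlet's power draw from an SNMP-capable PDU
# and treat a near-zero reading as a possibly dead PSU on that feed.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

PDU = "pdu.example.com"                        # placeholder hostname
OUTLET_WATTS_OID = "1.3.6.1.4.1.99999.1.1.5"   # placeholder vendor OID

def outlet_watts():
    error, status, index, binds = next(getCmd(
        SnmpEngine(), CommunityData("public"),
        UdpTransportTarget((PDU, 161)), ContextData(),
        ObjectType(ObjectIdentity(OUTLET_WATTS_OID))))
    if error or status:
        raise RuntimeError(str(error or status.prettyPrint()))
    return int(binds[0][1])

if __name__ == "__main__":
    watts = outlet_watts()
    if watts < 5:  # arbitrary "no load" threshold
        print("outlet draws %dW - PSU on this feed may be dead" % watts)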
- If the machine is going down in an ordinary flow, surely it can be done.
Isn't that what "Maintenance mode" is for?
Even if it was a network failure and the host was still up, how would you live migrate a VM from a host you can't even talk to?
It could be suspended to disk (local) - if the disk is available.
Then the decision whether it is to be resumed from local disk or not (as it might be HA'ed and running elsewhere) needs to be taken later, of course.
Yes, but that's not even remotely possible with Ovirt right now. I was trying to be practical as the OP has only just started using Ovirt and I think it might be a bit much to ask him to start coding up what he'd like.
The only way you could do it would be if you somehow magically knew far enough in advance that the host was about to fail (!) and that gave you enough time to migrate the machines off. But how would you ever know that "machine quux.bar.net is going to fail in 7 minutes"?
I completely agree there are situations in which you can't foresee the failure.
But in many, you can. In those cases, it makes sense for the host to self-initiate 'move to maintenance' mode. The policy of what to do when 'self-moving-to-maintenance-mode' could be pre-fetched from the engine.
Y.
Hmm, I would love that to be true. But I've seen so many so-called "corner cases" that I now think the failure area in a datacenter is a fractal with infinite corners. Yes, you could monitor SMART on local drives, pick up uncorrected ECC errors, use "sensors" to check for sagging voltages or high temps, but I don't think you can ever hope to catch everything, and you could end up triggering a migration "storm". I've had more than enough of "Enterprise Spec" switches suddenly going nuts and spamming corrupt MACs all over the LAN to know you can't ever account for everything.
I think it's better to adopt the model of redundancy in software and services, so no-one even notices if a VM host goes away, there's always something else to take up the slack. Just like the origins of the Internet - the network should be dumb and the applications should cope with it! Any infrastructure that can't cope with the loss of a few VMs for a few minutes probably needs a refresh.
Cheers
Alex

Hi Alex
2) My question was: "what can I do so that, in case of a kernel panic or similar, the VM will be migrated (live or not) to another host?"
You would make the VMs HA and acquire a fencing solution.
What do you mean now? Have two VMs and build a cluster? This is not what we want... If possible, I'd like to have several hosts as a cluster, with migration of the VMs between the nodes... I think oVirt already does that, since I had to create clusters. For me a cluster is not just "several nodes with the same CPU", but also something with load balancing or high availability...
3) I'd like to have a shutdown script on the host that puts the host into Maintenance and waits until it's done, so that I can just shut down or reboot it without any other action. Is that possible? It would help to manage a power failure, too, assuming that the other hosts have a better UPS (which is possible).
You could probably use the REST API on the Ovirt Engine for that. But it might be better to have a highly available machine (VM or not) running something like Nagios or Icinga, which would perform the monitoring of your hosts and connect to the REST API to perform maintenance and shutdown. You might also consider a UPS service like NUT (unless you're already using it).
Well, I already use NUT and we have Icinga monitoring the hosts. But I can't understand what you mean and (more importantly!) how I can do it... I checked the REST API and the CLI. I can write a little script to put a host into Maintenance or activate it again; that's not the problem. The problem is to have something start this script automatically... Any suggestion? Icinga will not be the solution, since it checks the hosts every 5 minutes or so, but we need this script started within seconds... Thanks

Mit freundlichen Grüßen

Luca Bertoncello
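One possibility, since NUT is already running: NUT's upssched can execute a script within seconds of a power event. A rough sketch of a Python CMDSCRIPT, assuming upssched.conf entries along the lines of "AT ONBATT myups@localhost EXECUTE go-maintenance" and "AT ONLINE myups@localhost EXECUTE go-active" (UPS name, engine URL and host UUID are placeholders):

#!/usr/bin/env python
# Sketch only: a NUT CMDSCRIPT. upssched calls it with the command
# name from upssched.conf as the first argument; on battery we drain
# the host, back on line we re-activate it.
import sys
import requests

ENGINE = "https://engine.example.com/api"            # placeholder
AUTH = ("admin@internal", "secret")                   # placeholder
HOST = "hosts/00000000-0000-0000-0000-000000000000"   # placeholder UUID

def host_action(name):
    r = requests.post("%s/%s/%s" % (ENGINE, HOST, name),
                      data="<action/>",
                      headers={"Content-Type": "application/xml"},
                      auth=AUTH, verify=False)
    r.raise_for_status()

if __name__ == "__main__":
    event = sys.argv[1] if len(sys.argv) > 1 else ""
    if event == "go-maintenance":
        host_action("deactivate")   # UPS on battery: drain the host
    elif event == "go-active":
        host_action("activate")     # power is back: rejoin the cluster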

On Fri, Sep 18, 2015 at 8:59 AM, Luca Bertoncello <L.Bertoncello@queo-group.com> wrote:
Hi Alex
2) My question was: "what can I do so that, in case of a kernel panic or similar, the VM will be migrated (live or not) to another host?"
You would make the VMs HA and acquire a fencing solution.
What do you mean now? Have two VMs and build a cluster? This is not what we want... If possible, I'd like to have several hosts as a cluster, with migration of the VMs between the nodes... I think oVirt already does that, since I had to create clusters. For me a cluster is not just "several nodes with the same CPU", but also something with load balancing or high availability...
You should also RTFM sooner or later.... ;-) http://www.ovirt.org/OVirt_Administration_Guide

In particular for the HA-related concepts: http://www.ovirt.org/OVirt_Administration_Guide#Virtual_Machine_High_Availab... and http://www.ovirt.org/OVirt_Administration_Guide#Host_Resilience

HIH, Gianluca
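For what it's worth, the "Highly Available" flag those pages describe can also be set through the REST API; a small sketch with placeholder IDs (the high_availability element follows the 3.5-era VM resource, so verify against your engine):

#!/usr/bin/env python
# Sketch only: mark a VM as highly available, the same setting as the
# checkbox in the VM's High Availability tab.
import requests

ENGINE = "https://engine.example.com/api"          # placeholder
AUTH = ("admin@internal", "secret")                 # placeholder
VM = "vms/11111111-1111-1111-1111-111111111111"     # placeholder UUID

body = ("<vm><high_availability><enabled>true</enabled>"
        "<priority>100</priority></high_availability></vm>")
r = requests.put("%s/%s" % (ENGINE, VM), data=body,
                 headers={"Content-Type": "application/xml"},
                 auth=AUTH, verify=False)
r.raise_for_status()
print("HA enabled for %s" % VM)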

Hello Gianluca,
You should also RTFM sooner or later.... ;-) http://www.ovirt.org/OVirt_Administration_Guide
In particular for the HA-related concepts: http://www.ovirt.org/OVirt_Administration_Guide#Virtual_Machine_High_Availab... and http://www.ovirt.org/OVirt_Administration_Guide#Host_Resilience
Really, I don't think this documentation is good... I read those pages many times and tried to understand how the program works, but I still have many doubts... In particular, the links you sent about HA: of course I read them and I configured the VMs with these settings, but really, I can't understand how I can set up several hosts as a cluster allowing automatic migration of the VMs when a host dies... Regards

Mit freundlichen Grüßen

Luca Bertoncello

On Fri, Sep 18, 2015 at 10:22 AM, Luca Bertoncello <L.Bertoncello@queo-group.com> wrote:
Hello Gianluca,
You should also RTFM sooner or later.... ;-) http://www.ovirt.org/OVirt_Administration_Guide
In particular for the HA-related concepts:
http://www.ovirt.org/OVirt_Administration_Guide#Virtual_Machine_High_Availab...
and http://www.ovirt.org/OVirt_Administration_Guide#Host_Resilience
Really, I don't think this documentation is good... I read those pages many times and tried to understand how the program works, but I still have many doubts...
The documentation pages are contributed, so you are encouraged to ask for a login and modify them once you have clarified the related aspects, so that the learning curve can be shorter for others. See the bottom of the page: "Log in / create account" <http://www.ovirt.org/index.php?title=Special:UserLogin&returnto=OVirt+Administration+Guide>
In particular, the links you sent about HA: of course I read them and I configured the VMs with these settings, but really, I can't understand how I can set up several hosts as a cluster allowing automatic migration of the VMs when a host dies...
Regards
Really? On the main page you also have:

High Availability Considerations

A highly available host requires a power management device and its fencing parameters configured. In addition, for a virtual machine to be highly available when its host becomes non-operational, it needs to be started on another available host in the cluster. To enable the migration of highly available virtual machines:

- Power management must be configured for the hosts running the highly available virtual machines.
- The host running the highly available virtual machine must be part of a cluster which has other available hosts.
- The destination host must be running.
- The source and destination host must have access to the data domain on which the virtual machine resides.
- The source and destination host must have access to the same virtual networks and VLANs.
- There must be enough CPUs on the destination host that are not in use to support the virtual machine's requirements.
- There must be enough RAM on the destination host that is not in use to support the virtual machine's requirements.

What in particular is not clear and needs better explanation?

Gianluca
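The first bullet, power management, can also be configured over the REST API once a fencing device is available. A sketch with placeholder agent details; as a side note on the earlier Drac6 question, iDRAC6 boards can often be driven with the generic ipmilan agent plus the lanplus option, though that is worth verifying on the actual hardware (the power_management element follows the 3.5-era host resource):

#!/usr/bin/env python
# Sketch only: attach a fencing agent to a host so it can be fenced
# and its HA VMs restarted elsewhere. All values are placeholders.
import requests

ENGINE = "https://engine.example.com/api"            # placeholder
AUTH = ("admin@internal", "secret")                   # placeholder
HOST = "hosts/00000000-0000-0000-0000-000000000000"   # placeholder UUID

body = ("<host><power_management type='ipmilan'>"
        "<enabled>true</enabled>"
        "<address>10.0.0.50</address>"        # BMC/iDRAC IP (placeholder)
        "<username>root</username>"
        "<password>calvin</password>"
        "<options><option name='lanplus' value='1'/></options>"
        "</power_management></host>")
r = requests.put("%s/%s" % (ENGINE, HOST), data=body,
                 headers={"Content-Type": "application/xml"},
                 auth=AUTH, verify=False)
r.raise_for_status()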

What in particular is not clear and needs better explanation?
Well, right now my problem is to understand how I can "simulate" this power management. Maybe in the future, if we decide that oVirt is the right solution for us, we will buy some power management system (APC or similar). But for now, for the experiments, I can't ask my boss to pay >500€ for a device that maybe we will not use... By the way: I find it really funny that I can't define a cluster with automatic migration of the VMs without power management. OK, I know what can happen if the host is not really dead, but at least allowing that, with warnings and so on, would be nice... Regards

Mit freundlichen Grüßen

Luca Bertoncello

Well, right now my problem is to understand how I can "simulate" this power management. Maybe in the future, if we decide that oVirt is the right solution for us, we will buy some power management system (APC or similar). But for now, for the experiments, I can't ask my boss to pay >500€ for a device that maybe we will not use...
Given the outlay you will pay for your server kit regardless of using Ovirt or not, EUR 500 is nothing. As I said, what about eBay? I've just looked and there's a 24-outlet one for GBP 100, and an 8-port one for $44, both Buy It Now! And that's just a UK search.
By the way: I find it really funny that I can't define a cluster with automatic migration of the VMs without power management. OK, I know what can happen if the host is not really dead, but at least allowing that, with warnings and so on, would be nice...
Live migration works fine without power management. If a host gets too busy, VMs will migrate away from it.

Alex

On 18/09/15 07:59, Luca Bertoncello wrote:
Hi Alex
2) My question was: "what can I do so that, in case of a kernel panic or similar, the VM will be migrated (live or not) to another host?"
You would make the VMs HA and acquire a fencing solution. What do you mean now? Have two VMs and build a cluster? This is not what we want... If possible, I'd like to have several hosts as a cluster, with migration of the VMs between the nodes... I think oVirt already does that, since I had to create clusters. For me a cluster is not just "several nodes with the same CPU", but also something with load balancing or high availability...
No, use the built-in HA in Ovirt, which requires a fencing solution. This means that if a host goes down, the VMs on it will autostart on another host in the cluster. Yes, it's a cluster of *hosts*; in fact, in the Ovirt interface you can see "clusters" in the tree view! We use the built-in Ovirt HA for >200 VMs over 6 hosts and it works just fine. We lost a host fairly recently, and only a couple of people (out of over 300) who use the services of a single VM noticed. Ovirt also supports balancing the VM load across hosts; that is part of the cluster policy. You can also set affinity and anti-affinity of VMs. And you can have an idle host migrate off its VMs and power down to save on your electricity bill; when the load on the other hosts reaches a limit, the host will be powered back on and VMs will start migrating to it.
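A sketch of the power-saving part via the REST API; the scheduling_policy element and its thresholds attributes are from memory of the 3.5-era schema, so treat them as assumptions and check the engine's API description (IDs are placeholders):

#!/usr/bin/env python
# Sketch only: switch a cluster to the power_saving balancing policy,
# which migrates VMs off under-utilised hosts.
import requests

ENGINE = "https://engine.example.com/api"                  # placeholder
AUTH = ("admin@internal", "secret")                         # placeholder
CLUSTER = "clusters/22222222-2222-2222-2222-222222222222"   # placeholder

body = ("<cluster><scheduling_policy><policy>power_saving</policy>"
        "<thresholds low='20' high='80' duration='120'/>"
        "</scheduling_policy></cluster>")
r = requests.put("%s/%s" % (ENGINE, CLUSTER), data=body,
                 headers={"Content-Type": "application/xml"},
                 auth=AUTH, verify=False)
r.raise_for_status()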
3) I'd like to have a shutdown script on the host that puts the host into Maintenance and waits until it's done, so that I can just shut down or reboot it without any other action. Is that possible? It would help to manage a power failure, too, assuming that the other hosts have a better UPS (which is possible). You could probably use the REST API on the Ovirt Engine for that. But it might be better to have a highly available machine (VM or not) running something like Nagios or Icinga, which would perform the monitoring of your hosts and connect to the REST API to perform maintenance and shutdown. You might also consider a UPS service like NUT (unless you're already using it). Well, I already use NUT and we have Icinga monitoring the hosts. But I can't understand what you mean and (more importantly!) how I can do it...
I checked the REST API and the CLI. I can write a little script to put a host into Maintenance or activate it again; that's not the problem. The problem is to have something start this script automatically...
Any suggestion? Icinga will not be the solution, since it checks the hosts every 5 minutes or so, but we need this script started within seconds...
Thanks
Mit freundlichen Grüßen
Luca Bertoncello
The rest is out of scope for Ovirt. Icinga was just a suggestion; you don't have to use it, and you can change the check interval to whatever you want. I was just saying that you should be wary of running your checks/scripts from a host if what you are trying to trigger on is that host going down or having issues. And if 5 minutes is too long, maybe you need a bigger UPS? We have 1 hour of battery runtime on ours at the moment (I think it's a 160 kVA central UPS, offhand). Alex
participants (5)
- Alex Crow
- Gianluca Cecchi
- Luca Bertoncello
- matthew lagoe
- Yaniv Kaul