This matches roughly what we were thinking, thank you!

To answer your questions:

We do not have power management configured because it caused a cascading failure early in our deployment. The host was not fenced, and "Confirm host has been rebooted" was never used. The VMs were powered on via virsh (this shouldn't have happened).

Our thought is that the way they were powered on is most likely why they were corrupted.
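For anyone who runs into the same situation: before powering anything on, it is worth checking what libvirt itself thinks is running on the host and comparing that with the engine's view. A minimal sketch, assuming libvirt-python is installed on the hypervisor (equivalent to `virsh -r list --all`, and read-only, so it cannot disturb running guests):

#!/usr/bin/env python
# Sketch: list all libvirt domains on this host, read-only.
# Run on the hypervisor itself; compare the output against the
# engine's idea of which VMs this host is running.
import libvirt

# Read-only connection; on oVirt hosts this usually needs no
# credentials, same as `virsh -r`.
conn = libvirt.openReadOnly('qemu:///system')
for dom in conn.listAllDomains(0):
    state = 'running' if dom.isActive() else 'shut off'
    print('%s: %s' % (dom.name(), state))
conn.close()

Any VM that libvirt reports as running while the engine shows it as Unknown or Down means the two views have diverged, and starting a second copy elsewhere is what risks corrupting the shared disk.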
Logan

On September 20, 2017 at 12:03 PM Michal Skrivanek <michal.skrivanek@redhat.com> wrote:

> On 20 Sep 2017, at 18:06, Logan Kuhn <support@jac-properties.com> wrote:
>
>> We had an incident where a VM host's disk filled up. The VMs all went Unknown in the web console but were fully functional if you logged in or used their services.
>
> Hi,
> yes, that can happen: the VMs' storage is on the NAS, so the guests keep running even though the host itself is non-functional, since the management agent and all the other local processes depend on the host's local resources.
>
>> We couldn't migrate them, so we powered them down on that host, powered them back up, and let oVirt choose the host for them, same as always.
>
> That's a mistake. The host should be fenced in that case; you likely do not have power management configured, do you? Even when you do not have a fencing device available, it should have been resolved manually by rebooting the host (after fixing the disk problem), or, in case of permanent damage (e.g. the server needs to be replaced, that takes a week, and you need to run those VMs elsewhere in the meantime), it should have been powered off and the VM states reset with the "Confirm host has been rebooted" manual action.
>
> Normally you should then be able to run those VMs while the status of the host is still Not Responding - was that not the case? How exactly did you get into the situation where you were able to power up the VMs?
>
>> However, the disk images on a few of them were corrupted: once we fixed the host with the full disk, it still thought it should be running the VMs, which promptly corrupted the disks. The error in the logs seems to be this:
>
> This can only happen for VMs flagged as HA - is that the case?
>
> Thanks,
> michal
>
>> 2017-09-19 21:59:11,058 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler3) [36c806f6] VM '70cf75c7-0fc2-4bbe-958e-7d0095f70960'(testhub) is running in db and not running on VDS 'ef6dc2a3-af6e-4e00-aa40-493b31263417'(vm-int7)
>>
>> We upgraded from 4.0.6 to 4.1.6 earlier in the day. I don't really think it's anything more than coincidence, but it's worrying enough to bring to the community.
>>
>> Regards,
>> Logan
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
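P.S. To double-check the HA question across all VMs at once, the HA flag can be pulled from the engine API. A minimal sketch using the Python SDK (ovirtsdk4); the engine URL and credentials below are placeholders:

# Sketch: list every VM whose high-availability flag is enabled.
# Assumes ovirtsdk4 is installed; URL and credentials are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='...',
    insecure=True,  # or pass ca_file= to verify the engine's TLS cert
)
for vm in connection.system_service().vms_service().list():
    if vm.high_availability is not None and vm.high_availability.enabled:
        print(vm.name)
connection.close()

These are the VMs the engine will try to restart on its own, which is exactly what makes an unfenced, not-responding host risky for them.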