On Wed, Jan 24, 2018 at 10:18 AM, Christopher Cox <ccox(a)endlessnow.com> wrote:
> Would restarting vdsm on the node in question help fix this? Again, all the
> VMs are up on the node. Prior attempts to fix this problem have left the
> node in a state where I can't issue the "has been rebooted" command to it;
> it's confused.
> So... the node is up. All VMs are up. I can't issue "has been rebooted" to
> the node, all VMs show Unknown and not responding, but they are up.
> Changing the status in the oVirt db to 0 works for a second and then it goes
> immediately back to 8 (which is why I'm wondering if I should restart vdsm
> on the node).
It's not recommended to change the DB manually.
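For reference, the kind of manual change being described looks roughly like the sketch below. This is only an illustration of what was tried, not a recommendation; the DB name, credentials, and the assumption that vm_dynamic is keyed by vm_guid are mine, and the GUID is a placeholder. Even when the UPDATE goes through, the engine's monitoring cycle rewrites vm_dynamic from what it gets back from VDSM, which is presumably why the value snaps straight back to 8:

    # Illustrative sketch only -- editing the engine DB by hand is not recommended.
    # Assumes the default DB name "engine" and that vm_dynamic is keyed by
    # vm_guid; adjust the connection details for your setup.
    import psycopg2

    conn = psycopg2.connect(dbname="engine", user="engine",
                            password="...", host="localhost")
    with conn, conn.cursor() as cur:
        # 0 is the value being set in the thread; 8 is the stuck status that
        # the engine's monitoring keeps writing back.
        cur.execute(
            "UPDATE vm_dynamic SET status = %s WHERE vm_guid = %s",
            (0, "00000000-0000-0000-0000-000000000000"),  # placeholder GUID
        )
    conn.close()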
> Oddly enough, we're running all of this in production. So, watching it all
> go down isn't the best option for us.
> Any advice is welcome.
We would need to see the node/engine logs. Have you found any errors in
vdsm.log (on the nodes) or engine.log? Could you please share the error?
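If it helps to collect those quickly, a rough sketch along these lines pulls the most recent ERROR/WARN lines from the usual default locations (/var/log/vdsm/vdsm.log on the hosts, /var/log/ovirt-engine/engine.log on the engine); the paths and the crude string match are assumptions to adjust for your install:

    # Rough sketch: print the last ERROR/WARN lines from the default oVirt log
    # locations. Run it on the node and on the engine machine; whichever file
    # is missing on a given box is simply skipped.
    import os

    LOG_FILES = [
        "/var/log/vdsm/vdsm.log",            # on the hypervisor node
        "/var/log/ovirt-engine/engine.log",  # on the engine host
    ]

    for path in LOG_FILES:
        if not os.path.exists(path):
            continue
        with open(path) as f:
            hits = [line.rstrip() for line in f
                    if "ERROR" in line or "WARN" in line]
        print("== %s ==" % path)
        # Only the tail, so the output stays small enough to paste in a reply.
        for line in hits[-20:]:
            print(line)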
It's probably time to think about upgrading your environment from 3.6.
> On 01/23/2018 03:58 PM, Christopher Cox wrote:
> >
> > Like the subject says... I tried to clear the status from the vm_dynamic
> > table for a VM, but it just goes back to 8.
> >
> > Any hints on how to get things back to a known state?
> >
> > I tried marking the node in maint, but it can't move the "Unknown" VMs, so
> > that doesn't work. I tried rebooting a VM; that doesn't work.
> >
> > The state of the VMs is up... and I think they are running on the node they
> > say they are running on; we just have the Unknown problem with VMs on that
> > one node. So... can't move them, and rebooting VMs doesn't fix it...
> >
> > Any trick to restoring state so that oVirt is OK???
> >
> > (what a mess)
--
Cheers
Douglas