Would restarting vdsm on the node in question help fix this? Again, all the VMs
are up on the node. Prior attempts to fix this problem have left the node in a
state where I can't issue the "has been rebooted" command to it; it's confused.
So... node is up. All VMs are up. Can't issue "has been rebooted" to the node,
and all VMs show Unknown and not responding, but they are up.
Changing the status in the ovirt db to 0 works for a second and then it goes
immediately back to 8 (which is why I'm wondering if I should restart vdsm on
the node).
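
For reference, this is roughly what I've been poking at. A rough sketch only:
I'm assuming the default "engine" database name, the usual vm_static/vm_dynamic
columns, and the vdsmd service name; 'myvm' and the guid are placeholders.

  # on the engine host: look up the stuck VM's guid and current status
  sudo -u postgres psql engine -c \
    "SELECT d.vm_guid, d.status
       FROM vm_dynamic d JOIN vm_static s ON s.vm_guid = d.vm_guid
      WHERE s.vm_name = 'myvm';"

  # flip the status back to 0 -- this "works" for a second and then
  # snaps right back to 8
  sudo -u postgres psql engine -c \
    "UPDATE vm_dynamic SET status = 0 WHERE vm_guid = '<guid from above>';"

  # on the node itself: what I'm considering next -- first see which VMs
  # vdsm thinks are running (assuming a vdsm new enough to ship
  # vdsm-client), then bounce the service
  vdsm-client Host getVMList
  systemctl restart vdsmd

The UPDATE is obviously a hack; I only spell it out so it's clear what
"changing the status in the ovirt db" means above.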
Oddly enough, we're running all of this in production. So, watching it all go
down isn't the best option for us.
Any advice is welcome.
On 01/23/2018 03:58 PM, Christopher Cox wrote:
Like the subject says... I tried to clear the status in the vm_dynamic table
for a VM, but it just goes back to 8.
Any hints on how to get things back to a known state?
I tried marking the node in maint, but it can't move the "Unknown" VMs, so
that doesn't work. I tried rebooting a VM; that doesn't work either.
The state of the VMs is up... and I think they are running on the node they say
they are running on; we just have the Unknown problem with the VMs on that one
node. So... can't move them, and rebooting a VM doesn't fix it...
Any trick to restoring state so that oVirt is ok???
(what a mess)