Since we upgraded to the latest oVirt Node running 7.2, we're seeing that
nodes become unavailable after a while. A node runs fine, with a couple of
VMs on it, until it becomes non-responsive. At that point it doesn't
even respond to ICMP. It comes back by itself after a while, but oVirt
fences the machine before that and restarts its VMs elsewhere.
The engine reports this message:
VDSM host09 command failed: Message timeout which can be caused by
Is anyone else experiencing these issues with the ixgbe driver? I'm running
Intel X540-AT2 cards.
Met vriendelijke groeten / With kind regards,
On Tue, Jun 21, 2016 at 10:31 AM, Sven Kieske <s.kieske(a)mittwald.de> wrote:
> On 21/06/16 09:19, Yedidyah Bar David wrote:
>> Hi all,
>> oVirt 4.0 should be released real soon now, and also Fedora 24.
>> Currently, there are several different issues with oVirt on fedora 23,
>> both on engine side and on hosts.
>> We currently intend to release 4.0 without official support for
>> fedora, and hope to manage to stabilize things enough after Fedora 24
>> is out, so that oVirt 4.0.1 will support it.
>> Comments are welcome.
> You should announce this on the users list.
> In the past there were some users actually using Fedora for deployments.
OK, moving the discussion from devel@ to users@.
I realized that I still have a "create VM pool" process in the Tasks pane dating back to May 20.
How can I check whether there is a stuck job still trying to run it? And if nothing is actually going on, how can I clear this from the event logs?