[ovirt-devel] design flaw in ovirt
danken at redhat.com
Fri Jun 20 12:19:21 UTC 2014
On Fri, Jun 20, 2014 at 09:53:24AM +0000, Sven Kieske wrote:
> Am 20.06.2014 11:34, schrieb Dan Kenigsberg:
> > I do not quite understand the problem you describe.
> See below, I hope this clears some things up.
> > Does the problem go away if you set your network to "non-required"? If
> > your VM's app does not strictly require uninterrupted networking, just
> > set the network to non-required. Flipping the default of requiredness
> > can be considered only for 4.0 ("required" has been our default from
> > the start; other users' scripts may depend on that).
> I know you can't simply switch the default behaviour, and yes, at least
> according to the documentation, setting the network to "non-required"
> should mitigate the issue.
> However, the default is to set every network as "required" without
> indicating this to the user in the first place (or the consequences
> of this unadvertised behaviour).
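For reference, the requiredness flag can be changed per cluster network through the engine's REST API rather than waiting for a default change. The sketch below builds the PUT payload and URL for such an update; the endpoint shape and the `<required>` element follow the oVirt 3.x REST API as I understand it, and the base URL, cluster ID, network ID, and credentials are all placeholders, not values from this thread.

```python
# Sketch: mark a cluster network as non-required via the oVirt REST API.
# The endpoint /api/clusters/{cluster}/networks/{network} and the
# <required> element are assumptions based on the oVirt 3.x REST API docs;
# all IDs and credentials below are placeholders.

def build_required_payload(required: bool) -> str:
    """Return the XML body that updates a cluster network's requiredness."""
    return "<network><required>{}</required></network>".format(
        "true" if required else "false"
    )

def build_update_url(base: str, cluster_id: str, network_id: str) -> str:
    """Return the PUT target for a specific cluster network."""
    return "{}/api/clusters/{}/networks/{}".format(base, cluster_id, network_id)

if __name__ == "__main__":
    body = build_required_payload(False)
    url = build_update_url("https://engine.example.com", "CLUSTER-ID", "NET-ID")
    print(url)
    print(body)
    # Actually sending the request is left to the caller, e.g. with
    # python-requests (hypothetical placeholder credentials):
    # requests.put(url, data=body,
    #              auth=("admin@internal", "password"),
    #              headers={"Content-Type": "application/xml"})
```

With this, a VM whose workload tolerates brief network loss keeps running on the host instead of being treated as a required-network failure.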
> > Could you attach Vdsm and Engine logs to the bug? How was the host
> > fenced?
> the host was not fenced, the VMs were fenced.
> here is a link to the documentation which should explain what I mean:
Are you referring to the paragraph: "When a required network becomes
non-operational, the virtual machines running on the network are fenced
and migrated to another host. This is beneficial if you have machines
running mission critical workloads."?
> this is about a single host in a cluster - ovirt can't even fence
> single hosts in a single cluster yet, see my other bug report for this:
> I could provide logs if they are really necessary, but I doubt they are.
> This is documented behaviour, but it is poorly designed, as described
> in the BZ.
Apparently, I am not familiar enough with Engine's fencing logic; logs
may help me understand the issue, so for me they are necessary in this
case. In particular, I'd like to see with my own eyes whether the VMs
were explicitly destroyed by Engine. Migrating VMs to an operational
destination makes a lot of sense. Destroying a running VM in an attempt
to recover from a host networking issue is extraordinary (and as such,
requires extraordinary evidence).
> The fencing mechanism is really buggy / not helpful, see
I vote for "buggy/imperfect". I am aware of mission-critical VMs that
are kept highly available thanks to it.
> also this (not really related) bug: