[ovirt-devel] Default virt lock manager and Sanlock for Hosted Engine HA

Nir Soffer nsoffer at redhat.com
Tue Mar 24 07:49:31 UTC 2015


----- Original Message -----
> From: "Sandro Bonazzola" <sbonazzo at redhat.com>
> To: "Daniel P. Berrange" <berrange at redhat.com>
> Cc: "Oved Ourfalli" <ovedo at redhat.com>, devel at ovirt.org, "Michal Skrivanek" <mskrivan at redhat.com>
> Sent: Tuesday, March 24, 2015 9:10:26 AM
> Subject: Re: [ovirt-devel] Default virt lock manager and Sanlock for Hosted Engine HA
> 
> > On 23/03/2015 23:37, Daniel P. Berrange wrote:
> > On Mon, Mar 23, 2015 at 12:00:12PM +0100, Sandro Bonazzola wrote:
> >> Hi,
> >> while working on a patch for hosted engine[1], an objection was raised
> >> about limiting the use of sanlock as the lock manager to the Hosted
> >> Engine VM only.
> >>
> >> "the HA should dynamically set libvirt if possible, or we need to also
> >> revert this part if disabled.
> >> Also, if vdsm ever requires configuring this file we will have a race,
> >> which is not healthy.
> >> It seems that you can define a domain and per-domain settings, and
> >> assign your VM to that domain instead of affecting the entire host."
> >>
> >> But looking at the Fedora Virt Lock Manager feature[2], it looks like
> >> we should configure libvirt to use sanlock by default, and not only for
> >> Hosted Engine HA.
> >>
> >> If we decide to keep the default lockd manager instead of sanlock, I
> >> would like to ask the virt team what the best way is to enable sanlock
> >> only for the hosted engine VM.
> >> I've seen that I can change the libvirt XML using a vdsm hook and pass
> >> command line options to qemu[3], but I can't see a qemu-kvm option for
> >> that.
> >>
> >> There was also an ongoing discussion on the storage team's side about
> >> locking per disk / per VM, so maybe this is not the best way to get
> >> this done after all.
> >>
> >> What do you think / suggest?
> > 
> > Libvirt has a pluggable lock manager engine, of which there are two impls,
> > one using virtlockd and one using sanlock. Libvirt does not currently
> > enable any lock manager plugin by default, but our intention is that a
> > future release will in fact enable virtlockd by default.
> > 
> > The reason for this is that virtlockd is a more broadly usable and
> > flexible solution; in particular, we can make it essentially zero config
> > for admins. While libvirt did attempt to provide a zero config mode for
> > the sanlock plugin, by creating lease files on shared filesystems, the
> > sanlock maintainers explicitly recommend against this usage scenario.
> > It also has significant I/O performance and storage implications, and
> > so is not really satisfactory for deployment without knowledgeable
> > admins or a mgmt app such as VDSM to actively manage its usage.
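For context, the plugin selection described above is a one-line setting in libvirt's qemu driver configuration. A minimal sketch, assuming the standard /etc/libvirt/qemu.conf location:

```ini
# /etc/libvirt/qemu.conf -- select the lock manager plugin.
# "lockd" uses virtlockd (the near-zero-config option described above);
# "sanlock" uses the sanlock plugin, which additionally reads its own
# settings from /etc/libvirt/qemu-sanlock.conf.
lock_manager = "sanlock"
```

libvirtd (and virtlockd or sanlock, as appropriate) must be restarted for the setting to take effect.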
> > 
> > As such, our recommendation is that sanlock should be enabled only when
> > explicitly required on the host by the mgmt application being deployed,
> > i.e. only when oVirt/VDSM is enabled on the host.
> 
> Thanks for your input Daniel!
> So we're going to try to limit sanlock usage to the hosted engine VM only.
> VDSM team, any chance I can tell VDSM to create the Hosted Engine VM using
> the sanlock lock manager via the create verb in vdscli?
> Any hint on what to write in the libvirt XML config to tell it to use the
> sanlock lock manager?
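On the last question: with the sanlock plugin enabled, a per-VM lease is expressed in the domain XML as a <lease> device under <devices>. A hedged sketch (the element names follow libvirt's domain XML format; the lockspace, key, and path values are hypothetical examples):

```xml
<devices>
  <!-- Sanlock lease held for this VM only; values are illustrative. -->
  <lease>
    <lockspace>hosted-engine</lockspace>              <!-- sanlock lockspace name -->
    <key>he-vm</key>                                  <!-- unique resource key -->
    <target path='/var/lib/hosted-engine/ha.lease' offset='0'/>
  </lease>
</devices>
```

The lease file at the target path must already be initialized as a sanlock resource before the VM can start.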

I don't think we support this. If you run VMs with oVirt, they will use
sanlock.

We don't have any plan to support other configurations.

Can you explain why you want to use another lock manager for the other
VMs?

Nir


