Hi Nir,
yes indeed, we use the high-availability setup from oVirt for the
Glance/Cinder VM, hosted on highly available Gluster storage. For the DB
we use an SSD-backed Percona cluster. The VM itself connects to the DB
cluster via haproxy, so we should have full high availability.
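In case it helps, a minimal haproxy TCP frontend for a three-node Percona
cluster could look like the sketch below; the hostnames and the check user
are placeholders, not our actual configuration:

    listen percona-cluster
        bind 127.0.0.1:3306
        mode tcp
        balance leastconn
        # MySQL-level health check; needs a 'haproxy_check' user
        # created on the Percona nodes
        option mysql-check user haproxy_check
        server db1 db1.example.com:3306 check
        server db2 db2.example.com:3306 check
        server db3 db3.example.com:3306 check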
The only problem with the VM is the first time you start the oVirt
cluster, since you cannot start any VM using Ceph volumes before the
Glance/Cinder VM is up. That is easy to solve, though: even if you
autostart all the machines, they will come up in the correct order.
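By the way, the HA flag and the restart priority that drives this start
order can also be set through the oVirt Python SDK (ovirt-engine-sdk4).
This is only a sketch; the engine URL, credentials and VM name are made up:

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    # connect to the engine (URL and credentials are placeholders)
    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    # look up the storage VM by name (placeholder name)
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=glance-cinder')[0]

    # mark it highly available with a high priority, so the engine
    # brings it up before the VMs that depend on it
    vms_service.vm_service(vm.id).update(
        types.Vm(
            high_availability=types.HighAvailability(
                enabled=True,
                priority=100,
            ),
        ),
    )
    connection.close()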
Cheers,
Alessandro
On 27/06/16 11:24, Nir Soffer wrote:
On Mon, Jun 27, 2016 at 12:02 PM, Alessandro De Salvo
<Alessandro.DeSalvo(a)roma1.infn.it> wrote:
> Hi,
> the cinder container has been broken for a while, ever since kollaglue
> changed the installation method upstream, AFAIK.
> Also, it seems that even the latest oVirt 4.0 pulls down the "kilo" version
> of OpenStack, so you will need to install your own if you need a more
> recent one.
> We are using a VM managed by oVirt itself for Keystone/Glance/Cinder with
> our Ceph cluster, and it works quite well with the Mitaka version, which is
> the latest one. The DB is hosted outside the VM, so even if we lose the VM
> we don't lose its state, besides the performance benefits. The installation
> does not use containers; the services are installed directly via
> Puppet/Foreman.
> So far we are happily using Ceph in this way. The only drawback of this
> setup is that if the VM is not up we cannot start machines with Ceph
> volumes attached, but running machines survive without problems even if
> the Cinder VM is down.
Thanks for the info Alessandro!
This seems like the best way to run Cinder/Ceph: keep these VMs on other
storage, so the Cinder VM does not depend on the storage it manages.
If you use highly available VMs, oVirt will make sure they are up all the
time, and will migrate them to other hosts when needed.
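For anyone reproducing such a setup, the Ceph side of cinder.conf is
typically just a few lines; the pool, user and secret below are examples
only, not taken from Alessandro's setup:

    [DEFAULT]
    enabled_backends = ceph

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    # UUID of the libvirt secret holding the cinder user's Ceph key
    rbd_secret_uuid = <uuid-of-libvirt-secret>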
Nir