[ovirt-users] Protecting the storage of the self hosted engine

Daniel Helgenberger daniel.helgenberger at m-box.de
Wed Nov 5 17:57:35 UTC 2014



On 05.11.2014 15:30, wodel youchi wrote:
> Hi, I am new to oVirt.
Hello and welcome!

>
> I want to know the best way to protect the storage of the hosted-engine?
IMHO: reliable hardware and contingency plans.

>
> In Ovirt3.5, only NFS and iSCSI are supported for the engine VM, so this means
> that the NFS server or the iSCSI volume become the weak link.
First we need to define 'weak link'. IMHO this can be the network 
and/or storage hardware like controllers and spindles (or SSDs). As 
the latter tends to be reliable, I think you indeed mean the data link 
layer as the 'weak link'?

Gluster can be the same weak link, for instance, as it needs a network 
layer too. If you use iSCSI together with a storage appliance, perhaps 
one with redundant controllers, the setup is quite reliable and the 
engine storage is protected, provided you use iSCSI multipath (and the 
paths run over genuinely separate hardware switches).
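
For example, a rough sketch of the host side (the portal addresses are 
placeholders; ideally the two portals sit behind separate switches):

   # Discover targets on both portals
   iscsiadm -m discovery -t sendtargets -p 10.0.1.10:3260
   iscsiadm -m discovery -t sendtargets -p 10.0.2.10:3260
   # Log in to all discovered targets
   iscsiadm -m node --loginall=all
   # The LUN should now show up with two active paths
   multipath -ll

If 'multipath -ll' lists only one path per LUN, the second portal is 
not buying you anything.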

You could call NFS a weak link; but even there, quite reliable setups 
are available which support failover, replicated storage and IPMP.

>
> I've read two articles, one using GlusterFS+NFS and CTDB for high availability
> of the engine storage: oVirt 3.4, Glusterized
I have to warn you at this point. This setup seems quite tempting, 
even using localhost addresses with gluster's built-in NFS. It has 
been tried before (by myself, among others), but it is far from stable.

You would need at least replica 3 gluster volumes to avoid split 
brains, which seem to happen quite often. I am including Martin here; 
we talked about this at the oVirt workshop in Düsseldorf, and maybe he 
can provide a better explanation.
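
For reference, creating such a volume looks roughly like this (host 
names and brick paths are placeholders):

   # Three bricks on three hosts; replica 3 keeps a majority,
   # so a single failed brick cannot cause a split brain
   gluster volume create engine replica 3 \
       host1:/gluster/engine/brick \
       host2:/gluster/engine/brick \
       host3:/gluster/engine/brick
   gluster volume start engine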

AFAIK, gluster will be supported as engine storage in the future, but 
this is not the case right now. Of course, you are welcome to try!

That said, and since you are new to oVirt: the main thing you need to 
protect is not the engine storage, but your production data domains.

The VMs will run fine and will continue to run even while the engine 
is down or unavailable. In case of a real disaster, you will be able 
to import these storage domains, along with their VMs, into a new 
engine.

Personally, I have my engine backed up with engine-backup [1] and put 
the result on separate storage. From that data you can recreate the 
whole engine.
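
Something along these lines (file names and the copy target are 
placeholders):

   # Full backup of the engine database and configuration
   engine-backup --mode=backup \
       --file=engine-backup-$(date +%F).tar.bz2 \
       --log=engine-backup.log
   # Keep the result away from the engine's own storage
   scp engine-backup-*.tar.bz2 backup-host:/srv/backups/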

> <http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/>,
>
> And another using GlusterFS+NFS and keepalived: How to workaround through the maze
> and reach the goal of the new amazing oVirt Hosted Engine with 3.4.0 Beta
> <http://www.andrewklau.com/ovirt-hosted-engine-with-3-4-0-nightly/>
>
>
> Is there a better way to achieve this goal?
>
> Thanks
>
HTH

[1] http://www.ovirt.org/Ovirt-engine-backup

-- 
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN


www.m-box.de  www.monkeymen.tv

Managing directors: Martin Retschitzegger / Michaela Göllner
Commercial register: Amtsgericht Charlottenburg / HRB 112767


