Re: [Users] Glusterfs HA doubts


Sun Jan 20 21:10:45 EST 2013


rs, combined into a gluster volume, and used as a POSIX data center.

I don't think anyone would disagree about that being a bad idea. In the
RHEV documentation, we try to make it clear that one of the weaknesses of
using storage that is local to the hypervisors is that when something
happens, you lose two pieces of infrastructure in one shot rather than
one. I think that for the availability you are talking about, you'll want
at least three nodes.
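For example (a sketch only; the hostnames and brick paths here are made
up for illustration), a three-node replicated volume would be created
along these lines:

```
# Run on node1; node2 and node3 are illustrative hostnames.
gluster peer probe node2
gluster peer probe node3

# One brick per node, replicated three ways, then start the volume.
gluster volume create vms replica 3 \
    node1:/bricks/vms node2:/bricks/vms node3:/bricks/vms
gluster volume start vms
```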

I've cc'd John Walker, the Gluster Community Guy (whom I met recently at
LCA, g'day John!); he can probably say something more specific.

Tim Hildred, RHCE
Content Author II - Engineering Content Services, Red Hat, Inc.
Brisbane, Australia
Email: thildred at redhat.com
Internal: 8588287
Mobile: +61 4 666 25242
IRC: thildred

----- Original Message -----
> From: "Adrian Gibanel" <adrian.gibanel at btactic.com>
> To: "users" <users at ovirt.org>
> Sent: Friday, January 25, 2013 9:35:42 PM
> Subject: [Users] Glusterfs HA doubts
>
> In oVirt 3.1, GlusterFS support was added. It was an easy way to
> replicate your virtual machine storage without too much hassle.
> There are two main howtos:
> * http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-using-either-nfs-or-posix-native-file-system-engine
>   (Robert Middleswarth)
> * http://blog.jebpages.com/archives/ovirt-3-1-glusterized/
>   (Jason Brooks)
>
> 1) What about performance?
> I've done some tests with rsync backups (even using the suggested
> --inplace rsync switch), which involve lots of small files. These
> backups were done onto locally mounted glusterfs volumes. Instead of
> lasting about 2 hours, they lasted something like 15 hours.
>
> Is there maybe something that only hurts small files, while
> performance with big files is OK?
>
> 2) How do you know the current status?
> With DRBD you know by checking a proc file, if I remember well. I
> also remember that GlusterFS doesn't have an equivalent, and there's
> no obvious way to know whether all the files are synced.
>
> If you have tried it, how do you know whether both sets of virtual
> disk images are synced?
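For reference, DRBD's sync state lives in /proc/drbd, while GlusterFS
(from 3.3 onwards, if I remember right) exposes self-heal status through
the CLI; roughly like this (the volume name "vms" is just an example):

```
# DRBD: connection and sync state is in a proc file.
cat /proc/drbd

# GlusterFS: list files still pending self-heal on a replicated
# volume; an empty list on every brick suggests the replicas agree.
gluster volume heal vms info

# Entries the self-heal daemon could not reconcile, if your version
# supports this sub-command.
gluster volume heal vms info split-brain
```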
>
> 3) Mount DNS resolution
> If you check Jason Brooks' howto, you will see that it uses a
> hostname to refer to the NFS mount. If you want HA, you need your
> storage to stay mounted, and if the server1 host is down it doesn't
> help that the NFS mount point associated with the storage is
> server1:/vms/ and not server2:/vms/. Checking Middleswarth's howto,
> I think he does the same thing.
>
> Let me explain a bit more so that it's clear. My example setup is
> the one where you have two host machines: a set of virtual machines
> runs on one, and the other doesn't have any virtual machine running.
> Where is the virtual machine storage located? On the glusterfs
> volume.
>
> So the first machine mounts the glusterfs volume as NFS (as an
> example). If it uses its own hostname for the NFS mount, then when
> it goes down, the second host isn't going to be able to mount the
> volume when the virtual machines are restarted in HA mode.
>
> If it instead uses the second host's hostname for the NFS mount,
> then if the second host goes down, the virtual machines cannot
> access their virtual disks.
>
> A workaround I have thought of for this situation is to use
> /etc/hosts on both machines, so that:
>   whatever.domain.com
> resolves on each host to that host's own IP.
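Concretely (the addresses are made up for illustration), the /etc/hosts
entries would look like:

```
# On server1 (whose IP here is 192.168.0.1):
192.168.0.1   whatever.domain.com

# On server2 (whose IP here is 192.168.0.2):
192.168.0.2   whatever.domain.com
```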
>
> I think glusterfs has a way of mounting the share with "-t
> glusterfs" that can somehow avoid these hostname problems, but I
> haven't read much about it, so I'm not sure.
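For reference, a native mount looks like this (the paths are examples,
and the backup-server option name is from the glusterfs 3.3/3.4 era;
check the mount.glusterfs documentation for your version):

```
# The mount server is only contacted to fetch the volume file; after
# that the client talks to all bricks directly, so its hostname does
# not matter for ongoing I/O.
mount -t glusterfs server1:/vms /mnt/vms

# Optional fallback if server1 is unreachable at mount time.
mount -t glusterfs -o backupvolfile-server=server2 server1:/vms /mnt/vms
```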
>
> 4) So my doubts basically are:
>
>   * Has anyone set up a two-host glusterfs HA oVirt cluster where
>     storage is a replicated glusterfs volume shared and stored by
>     both hosts?
>   * Does HA work when one of the hosts goes down?
>   * Or does it complain about the hostname, as I suspect?
>   * Any tips to ensure the best performance?
>
> Thank you.
>
> --
> Adrián Gibanel
> I.T. Manager
>
> +34 675 683 301
> www.btactic.com
>
> Before printing this message, think about the environment. The
> environment is everyone's business.
>
> NOTICE:
> The content of this message and its attachments is confidential. If
> you are not the intended recipient, please be aware that using,
> disclosing and/or copying it without the corresponding authorisation
> is prohibited. If you have received this message in error, please
> notify the sender immediately and destroy the message.
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>