[ovirt-users] Ovirt GlusterFS assistance

Sahina Bose sabose at redhat.com
Mon Mar 23 10:02:33 UTC 2015


What is the type of volume that you've created? Is it a replicate volume?

# gluster volume info - should give you this information
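For example, on a replicated volume the output would look something
like this (the volume and brick names below are just placeholders, not
taken from your setup):

# gluster volume info vmstore
Volume Name: vmstore
Type: Replicate
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: server1:/export/brick1/vmstore
Brick2: server2:/export/brick1/vmstore
Brick3: server3:/export/brick1/vmstore

If Type says Distribute rather than Replicate, the data is only spread
across the bricks, not mirrored, and losing one server will take the
storage domain down.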

If you're replicating the volume across 3 nodes, even when one of the 
servers goes down, your storage domain should still be UP.
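You can also make the mount independent of a single server by listing
the other nodes as backup volfile servers when you create the storage
domain. A rough sketch, with placeholder host and volume names:

In the New Domain dialog in the oVirt web UI:
  Path:          server1:/vmstore
  Mount Options: backup-volfile-servers=server2:server3

Or, the equivalent manual fuse mount for testing:

# mount -t glusterfs -o backup-volfile-servers=server2:server3 \
    server1:/vmstore /mnt/vmstore

The server in the path is only used to fetch the volume layout; once
the volume is mounted the client talks to all bricks directly, and
with the backup servers listed the domain can also come up while
server1 is down.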

thanks
sahina
On 03/23/2015 02:10 PM, Jonathan Mathews wrote:
> Hi, I am trying to set up an Ovirt + Glusterfs virtualization 
> environment. I have followed examples on setting up Ovirt and they 
> have helped me so far, but they do not get me all the way to the end 
> point I am looking for.
> The web sites are:
> http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/
> http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
> http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/09/2012-lpc-virt-storage-virt-kvm-rao.pdf
>
> I am running 3 HP MicroServers and 2 HP DL360 G5s.
> The 3 MicroServers are my glusterfs storage and have been provisioned 
> for virt storage.
> The 2 DL360s are my processing machines.
>
> Now my 3 gluster hosts are in one cluster, the volume is up, and it 
> has been provisioned for virt storage. The problem is that my mount 
> point is directed at one server, so when that server goes down, the 
> storage domain goes down with it. I am not sure whether there is a 
> way of mounting it by a "volume identity", so that when a server goes 
> down the storage domain stays up.
>
> As for my 2 processing hosts, I have them in one cluster, but I have 
> not gotten anywhere with this, as I want the virtual machines to use 
> the gluster volume as storage but use the processing hosts' hardware 
> for processing power.
>
> I would appreciate any assistance.
>
