[ovirt-users] How to add a Gluster storage domain on hyper-converged?
Sahina Bose
sabose at redhat.com
Thu Dec 24 01:05:57 EST 2015
On 12/24/2015 11:26 AM, Sahina Bose wrote:
>
>
> On 12/24/2015 08:24 AM, Will Dennis wrote:
>>
>> Hi all,
>>
>> I have a three-node hyper-converged oVirt datacenter running; now I
>> need to add my first storage domain. I had prepped for this before
>> installing oVirt by creating two distributed-replicate Gluster volumes
>> with replica 3 (one for the hosted engine, one for VM storage):
>>
>> [root@ovirt-node-01 ~]# gluster volume info | grep -e "Name" -e "Type" -e "Number"
>>
>> Volume Name: engine
>> Type: Distributed-Replicate
>> Number of Bricks: 2 x 3 = 6
>>
>> Volume Name: vmdata
>> Type: Distributed-Replicate
>> Number of Bricks: 2 x 3 = 6
>>
>
> Do you have 2 bricks on each node for the engine volume?
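> You can confirm the brick layout with, e.g.:
>
>    gluster volume info engine | grep Brick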
>
>
>> Now I’d like to use the “vmdata” volume for my storage domain. When
>> in webadmin I select “New Domain” I get a dialog that lets me select
>> GlusterFS as the storage type, but then requires a “Use host:”
>> setting, and a path. Is it possible for me to select one of my oVirt
>> hosts (they all have the ‘vmdata’ volume), and then use
>> “localhost:/vmdata” for the path? Or will this not work?
>>
> Use host -> use any of the hosts.
> Path -> <host1>:/vmdata
> Enter mount options -> backup-volfile-servers=<host2>:<host3>
>
>
> In 3.6, I think backup-volfile-servers is automatically appended to the
> mount options, but providing it here explicitly will also work.
> The backup-volfile-servers option keeps the gluster volume accessible
> when <host1>, the host used to mount the volume, goes down and one of
> the other hosts needs to remount it.
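>
> For reference, you can test the same mount by hand from any of the
> hosts (the mount point below is just an example, not what oVirt uses):
>
>    mount -t glusterfs -o backup-volfile-servers=<host2>:<host3> \
>        <host1>:/vmdata /mnt/vmdata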
>
>
>
>> I know this isn’t officially supported yet, but if I can get it to
>> work somehow, that’d be great :) It’s a non-production (PoC) setup,
>> so the cost of failure should be low... That said, I don’t want to
>> trash my rig and have to redo the whole thing all over ;)
>>
Which version of glusterfs are you using?
If you're trying this out for a PoC, we recommend that you enable
sharding on the gluster volumes (available from glusterfs 3.7.6 -
http://gluster.readthedocs.org/en/release-3.7.0/Features/shard/). With
sharding, only the shards that changed need to be healed rather than the
whole image file, so when self-heal runs on the gluster volume the
process does not hog the CPU and completes much faster.
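For example, on each volume (512MB is the shard block size commonly
recommended for VM image workloads - check the sharding docs linked
above; note that sharding only applies to files created after it is
enabled):

   gluster volume set vmdata features.shard on
   gluster volume set vmdata features.shard-block-size 512MB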
>> Thanks,
>>
>> Will
>>