[ovirt-users] How to create a new Gluster volume

knarra knarra at redhat.com
Fri Jul 7 08:15:45 UTC 2017


On 07/06/2017 04:38 PM, Gianluca Cecchi wrote:
> On Thu, Jul 6, 2017 at 11:51 AM, Gianluca Cecchi 
> <gianluca.cecchi at gmail.com> wrote:
>
>     Hello,
>     I'm trying to create a new volume. I'm on 4.1.2 and I'm 
>     following these instructions:
>     http://www.ovirt.org/documentation/admin-guide/chap-Working_with_Gluster_Storage/
>
>     When I click the "add brick" button, I don't see anything in the 
>     "Brick Directory" dropdown field, and I cannot manually input a 
>     directory name.
>
>     On the 3 nodes I already have a formatted and mounted filesystem:
>
>     [root at ovirt01 ~]# df -h /gluster/brick3/
>     Filesystem                  Size  Used Avail Use% Mounted on
>     /dev/mapper/gluster-export   50G   33M   50G   1% /gluster/brick3
>     [root at ovirt01 ~]#
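For context, a minimal sketch of how a brick filesystem like the one 
above is typically prepared on each node. The VG/LV names are inferred 
from the device name in the df output, and the exact commands are an 
assumption, not what was actually run:

    # assumed layout: one spare disk /dev/sdb, VG "gluster", LV "export"
    pvcreate /dev/sdb
    vgcreate gluster /dev/sdb
    lvcreate -L 50G -n export gluster
    mkfs.xfs -i size=512 /dev/gluster/export
    mkdir -p /gluster/brick3
    mount /dev/gluster/export /gluster/brick3
    echo '/dev/gluster/export /gluster/brick3 xfs defaults 0 0' >> /etc/fstab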
>
>     The guide says:
>
>     7. Click the Add Bricks button to select bricks to add to the
>     volume. Bricks must be created externally on the Gluster Storage
>     nodes.
>
>     What does "created externally" mean?
>     From the OS point of view, the next step would be volume creation, 
>     but that is exactly what I would like to do from the GUI...
>
>     Thanks,
>     Gianluca
>
>
> It seems I have to de-select the checkbox "Show available bricks from 
> host", and then I can manually enter the directory of the bricks.
I see that the bricks are mounted under /gluster/brick3, and that is the 
reason nothing shows up in the "Brick Directory" drop-down field. If the 
bricks were mounted under /gluster_bricks, they would have been detected 
automatically. There is an RFE open to detect bricks that are created 
manually.
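For example, a hedged sketch of moving an existing brick mount under 
/gluster_bricks so that the drop-down can pick it up (device and source 
path are taken from the df output earlier in this thread, the target 
directory name is illustrative):

    umount /gluster/brick3
    mkdir -p /gluster_bricks/brick3
    mount /dev/mapper/gluster-export /gluster_bricks/brick3
    # and update the corresponding /etc/fstab entry, e.g.
    # /dev/mapper/gluster-export /gluster_bricks/brick3 xfs defaults 0 0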
>
> BTW: I see that after creating a volume optimized for oVirt in the web 
> admin GUI of 4.1.2, its options differ slightly from those of a 
> pre-existing volume created in 4.0.5 during the initial setup with gdeploy.
>
> NOTE: during the 4.0.5 setup I had Gluster 3.7 installed, while now I have 
> Gluster 3.10 (manually updated from the CentOS Storage SIG).
>
> Running "gluster volume info" and then diffing the output for the 2 
> volumes I have:
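(A sketch of one way to produce such a diff; the volume names below are 
placeholders, not taken from the thread:)

    gluster volume info new_volume > new.txt
    gluster volume info old_volume > old.txt
    diff new.txt old.txt    # "<" lines are the new volume, ">" the old one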
>
> new volume ==   <
> old volume  ==    >
>
> < cluster.shd-max-threads: 8
> ---
> > cluster.shd-max-threads: 6
> 13a13,14
> > features.shard-block-size: 512MB
> 16c17
> < network.remote-dio: enable
> ---
> > network.remote-dio: off
> 23a25
> > performance.readdir-ahead: on
> 25c27
> < server.allow-insecure: on
> ---
> > performance.strict-o-direct: on
>
> Do I have to change anything for the newly created one?
No, you do not need to change anything for the new volume. But if you 
plan to enable o-direct on the volume, then you will have to turn off 
remote-dio.
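For reference, a sketch of the option change described above, only 
needed if you decide to enable o-direct; the volume name is a 
placeholder:

    gluster volume set new_volume network.remote-dio off
    gluster volume set new_volume performance.strict-o-direct on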


