[ovirt-users] Multiple Data Storage Domains

Gary Pedretty gary at ravnalaska.net
Sun Nov 20 04:55:05 UTC 2016


Solved:

Changing the second storage domain to a GlusterFS Distributed-Replicate volume with sharding turned on works great. Thanks for the solution.
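
For anyone hitting the same problem, creating such a volume from the gluster CLI looks roughly like the following. This is only a sketch: the hostnames, brick paths, and volume name below are placeholders rather than the exact ones used on this cluster, and the option values should be checked against your own oVirt and Gluster versions.

    gluster volume create data2 replica 3 \
        host1:/kvm2/gluster/data2/brick \
        host2:/kvm2/gluster/data2/brick \
        host3:/kvm2/gluster/data2/brick
    # Adding bricks in further multiples of three turns this into a
    # distributed-replicate layout.
    # Apply the virt option group shipped with gluster, if available;
    # otherwise set the equivalent options individually.
    gluster volume set data2 group virt
    # Enable sharding and give vdsm/kvm (uid/gid 36) ownership of the images.
    gluster volume set data2 features.shard on
    gluster volume set data2 storage.owner-uid 36
    gluster volume set data2 storage.owner-gid 36
    gluster volume start data2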


Gary


------------------------------------------------------------------------
Gary Pedretty                                        gary at ravnalaska.net
Systems Manager                                          www.flyravn.com
Ravn Alaska                           /\                    907-450-7251
5245 Airport Industrial Road         /  \/\             907-450-7238 fax
Fairbanks, Alaska  99709        /\  /    \ \ Second greatest commandment
Serving All of Alaska          /  \/  /\  \ \/\   “Love your neighbor as
Really loving the record green-up date! Summer!!    yourself” Matt 22:39
------------------------------------------------------------------------




> On Nov 7, 2016, at 2:10 AM, Sahina Bose <sabose at redhat.com> wrote:
> 
> 
> 
> On Mon, Nov 7, 2016 at 3:27 PM, Gary Pedretty <gary at ravnalaska.net> wrote:
> [root at fai-kvm-1-gfs admin]# gluster volume status data2
> Status of volume: data2
> Gluster process                                               TCP Port  RDMA Port  Online  Pid
> ----------------------------------------------------------------------------------------------
> Brick fai-kvm-1-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49156     0          Y       3484
> Brick fai-kvm-2-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49156     0          Y       34791
> Brick fai-kvm-3-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49156     0          Y       177340
> Brick fai-kvm-4-vmn.ravnalaska.net:/kvm2/gluster/data2/brick  49152     0          Y       146038
> NFS Server on localhost                                       2049      0          Y       40844
> Self-heal Daemon on localhost                                 N/A       N/A        Y       40865
> NFS Server on fai-kvm-2-gfs.ravnalaska.net                    2049      0          Y       99905
> Self-heal Daemon on fai-kvm-2-gfs.ravnalaska.net              N/A       N/A        Y       99915
> NFS Server on fai-kvm-4-gfs.ravnalaska.net                    2049      0          Y       176305
> Self-heal Daemon on fai-kvm-4-gfs.ravnalaska.net              N/A       N/A        Y       176326
> NFS Server on fai-kvm-3-gfs.ravnalaska.net                    2049      0          Y       226271
> Self-heal Daemon on fai-kvm-3-gfs.ravnalaska.net              N/A       N/A        Y       226287
> 
> Task Status of Volume data2
> ------------------------------------------------------------------------------
> There are no active volume tasks
> 
> 
> [root at fai-kvm-1-gfs admin]# gluster volume info data2
> 
> Volume Name: data2
> Type: Striped-Replicate
> Volume ID: 20f85c9a-541b-4df4-9dba-44c5179bbfb0
> Status: Started
> Number of Bricks: 1 x 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: fai-kvm-1-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Brick2: fai-kvm-2-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Brick3: fai-kvm-3-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Brick4: fai-kvm-4-vmn.ravnalaska.net:/kvm2/gluster/data2/brick
> Options Reconfigured:
> performance.readdir-ahead: on
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> storage.owner-uid: 36
> storage.owner-gid: 36
> 
> 
> See attached file for the mount log.
> 
> 
> Striped-Replicate is no longer supported in GlusterFS upstream; you should be using a Distribute-Replicate volume with sharding enabled instead. Also, when using a gluster volume as a storage domain, replica 3 is recommended.
> 
> From the mount logs, there is no indication as to why the volume is unmounted frequently. Could you try again with a replica 3 volume that has sharding enabled?
>  
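To double-check the new layout after recreating the domain, something along these lines works on recent gluster releases (the data2 volume name is reused here purely as an example):

    gluster volume info data2 | grep -E 'Type|Number of Bricks|features.shard'
    gluster volume get data2 features.shard

A replica 3 setup shows up as "Type: Replicate" (or "Type: Distributed-Replicate" when there are several replica sets) with "Number of Bricks: N x 3", and sharding as "features.shard: on" under the reconfigured options.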
> 
> Gary
> 
> 
> ------------------------------------------------------------------------
> Gary Pedretty                                        gary at ravnalaska.net
> Systems Manager                                          www.flyravn.com
> Ravn Alaska                           /\                    907-450-7251
> 5245 Airport Industrial Road         /  \/\             907-450-7238 fax
> Fairbanks, Alaska  99709        /\  /    \ \ Second greatest commandment
> Serving All of Alaska          /  \/  /\  \ \/\   “Love your neighbor as
> Really loving the record green-up date! Summer!!    yourself” Matt 22:39
> ------------------------------------------------------------------------
> 
> 
>> On Nov 6, 2016, at 9:50 PM, Sahina Bose <sabose at redhat.com> wrote:
>> 
>> However, your volume configuration seems suspect: "stripe 2 replica 2". Can you provide the gluster volume info for your second storage domain's gluster volume? The mount logs of the volume (under /var/log/glusterfs/rhev-datacenter..<volname>.log) from the host where the volume is being mounted will also help.
>>  
> 
> 
> 
