[ovirt-users] size of VM image files varies on different gluster bricks

Alastair Neil ajneil.tech at gmail.com
Wed May 27 21:28:23 UTC 2015


Hi

I have a hosted-engine cluster running 3.5.2 on f20.  I have 6 nodes
running CentOS 6.6 and three storage nodes, also running CentOS 6.6, with
gluster 3.6.3.

My primary data store is a replica 3 gluster volume.  I noticed that the
size of some image files differs wildly on one server's brick.  The disks are
all thin provisioned.  The bricks are thin-provisioned LVM volumes with XFS
file systems.  The only difference between the systems is that the problem
node is newer, a Dell R530 with an MD1400, whereas the other two are Dell
R510s, each with an MD1200.  The storage arrays all have the same 4TB disks.
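
For context, this is roughly how I have been checking whether an image file
is still sparse on a given brick (just a rough sketch; the path below is a
placeholder, substitute the actual image file under the brick):

# placeholder path: point this at the image file under the brick's images/<image-id>/ directory
IMG=/export/brick5/ovirt-data/<domain-id>/images/<image-id>/<volume-id>

du -sh "$IMG"                   # space actually allocated on the brick
du -sh --apparent-size "$IMG"   # logical (sparse) size, i.e. what the guest sees
stat -c 'size=%s  blocks=%b  blocksize=%B' "$IMG"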

For example, for a disk that the oVirt console reports as having a virtual
size of 500G and an actual size of 103G, I see:


> [root@gluster0 479d2197-de09-4012-8183-43c6baa7e65b]# cd ../d0d58fb9-ecaa-446f-bc42-dd681a16aee2/
> [root@gluster0 d0d58fb9-ecaa-446f-bc42-dd681a16aee2]# du -sh *
> 106G c1b70bf0-c750-4177-8485-7b981e1f21a3
> 1.0M c1b70bf0-c750-4177-8485-7b981e1f21a3.lease
> 4.0K c1b70bf0-c750-4177-8485-7b981e1f21a3.meta
> [root@gluster1 d0d58fb9-ecaa-446f-bc42-dd681a16aee2]# pwd
> /export/brick5/ovirt-data/54d9ee82-0974-4a72-98a5-328d2e4007f1/images/d0d58fb9-ecaa-446f-bc42-dd681a16aee2
> [root@gluster1 d0d58fb9-ecaa-446f-bc42-dd681a16aee2]# du -sh *
> 103G c1b70bf0-c750-4177-8485-7b981e1f21a3
> 1.0M c1b70bf0-c750-4177-8485-7b981e1f21a3.lease
> 4.0K c1b70bf0-c750-4177-8485-7b981e1f21a3.meta
> [root@gluster-2 d0d58fb9-ecaa-446f-bc42-dd681a16aee2]# pwd
> /export/brick5/ovirt-data/54d9ee82-0974-4a72-98a5-328d2e4007f1/images/d0d58fb9-ecaa-446f-bc42-dd681a16aee2
> [root@gluster-2 d0d58fb9-ecaa-446f-bc42-dd681a16aee2]# du -sh *
> 501G c1b70bf0-c750-4177-8485-7b981e1f21a3
> 1.0M c1b70bf0-c750-4177-8485-7b981e1f21a3.lease
> 4.0K c1b70bf0-c750-4177-8485-7b981e1f21a3.meta
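
If it helps narrow this down, these are the checks I can run on the storage
nodes (again only a sketch; as far as I understand, non-zero trusted.afr.*
changelog xattrs on the brick copy would indicate a self-heal is still
pending):

# dump gluster's xattrs on the brick copy of the image on gluster-2
getfattr -d -m . -e hex \
  /export/brick5/ovirt-data/54d9ee82-0974-4a72-98a5-328d2e4007f1/images/d0d58fb9-ecaa-446f-bc42-dd681a16aee2/c1b70bf0-c750-4177-8485-7b981e1f21a3

# list entries the self-heal daemon still reports as needing heal
gluster volume heal data info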


I'd appreciate any suggestions about troubleshooting and resolving this.  Here
is the volume info:

> Volume Name: data
> Type: Replicate
> Volume ID: 5c6ff46d-1159-4c7e-8b16-5ffeb15cbaf9
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster-2:/export/brick5/ovirt-data
> Brick2: gluster1:/export/brick5/ovirt-data
> Brick3: gluster0:/export/brick5/ovirt-data
> Options Reconfigured:
> performance.least-prio-threads: 4
> performance.low-prio-threads: 16
> performance.normal-prio-threads: 24
> performance.high-prio-threads: 24
> performance.io-thread-count: 32
> diagnostics.count-fop-hits: off
> diagnostics.latency-measurement: off
> auth.allow: *
> nfs.rpc-auth-allow: *
> network.remote-dio: on
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> storage.owner-gid: 36
> storage.owner-uid: 36
> cluster.eager-lock: enable
> cluster.min-free-disk: 5%
> cluster.rebalance-stats: on
> cluster.background-self-heal-count: 16
> cluster.readdir-optimize: on
> cluster.metadata-self-heal: on
> cluster.data-self-heal: on
> cluster.entry-self-heal: on
> cluster.self-heal-daemon: on
> cluster.heal-timeout: 500
> cluster.self-heal-window-size: 8
> cluster.data-self-heal-algorithm: diff
> cluster.quorum-type: auto
> cluster.self-heal-readdir-size: 64KB
> network.ping-timeout: 20
> performance.open-behind: disable
> cluster.server-quorum-ratio: 51%
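
In case the difference turns out to be at the LVM or XFS layer rather than in
gluster, these are the allocation checks I can run on the problem node's brick
(a sketch; the LV/VG names reported will be whatever the brick's thin volume
is actually called on that node):

# thin-pool and thin-volume usage for the brick (names vary per node)
lvs -a -o lv_name,vg_name,attr,lv_size,data_percent,metadata_percent

# extent map of the image file on the XFS brick, to see where space is really allocated
xfs_bmap -v \
  /export/brick5/ovirt-data/54d9ee82-0974-4a72-98a5-328d2e4007f1/images/d0d58fb9-ecaa-446f-bc42-dd681a16aee2/c1b70bf0-c750-4177-8485-7b981e1f21a3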