The recommended approach would be to create a new storage domain with a 64 MB
shard size and migrate all the disks from the 4 MB storage domain to it.
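A minimal sketch of that approach, assuming the new storage domain is backed
by a gluster volume named "vms-new" (the name is a placeholder):

  # set the shard size on the new volume before any data is written to it
  gluster volume set vms-new features.shard-block-size 64MB
  # verify the setting took effect
  gluster volume get vms-new features.shard-block-size

The disks can then be moved to the new storage domain from the oVirt UI,
after which the old 4 MB domain can be retired.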
On Mon, Sep 18, 2017 at 12:01 PM, Ravishankar N <ravishankar@redhat.com> wrote:
Possibly. I don't think changing shard size on the fly is
supported,
especially when there are files on the volume that are sharded with a
different size.
-Ravi
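To see what a volume is currently configured with, something like this should
work (the volume name "vms" is a placeholder):

  gluster volume get vms features.shard-block-size

Note that this only reports the current option value; files written before
the option was changed keep the shard size they were created with.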
On 09/18/2017 11:40 AM, Alex K wrote:
The heal status shows that no files are pending heal (also shown in the GUI).
When checking the bricks on the file system, I see that what differs
between the servers is the .shard folder of the volume. One server reports
835 GB while the other reports 1.1 TB.
I recall having changed the shard size at some point from 4 MB to 64 MB.
Could this be the cause?
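A quick way to compare the data bricks' .shard directories on the two servers
(the brick path is a placeholder):

  # run on each replica server and compare the output
  du -sh /gluster/vms/brick/.shard
  du -sh --apparent-size /gluster/vms/brick/.shard

If the apparent sizes match while the block usage differs, the gap comes from
how sparse regions and preallocation are accounted for rather than from
missing data.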
Thanx,
Alex
On Mon, Sep 18, 2017 at 8:14 AM, Ravishankar N <ravishankar@redhat.com> wrote:
>
> On 09/18/2017 10:08 AM, Alex K wrote:
>
> Hi Ravishankar,
>
> I am not referring to the arbiter brick (which is showing 0% usage). I am
> referring to the other 2 bricks, which are replicas and should hold the
> exact same data. Checking the status of the other bricks in oVirt (the
> bricks used for the iso and export domains), I see that they all report the
> same usage, except on the "vms" volume used for storing VMs.
>
>
> Ah, okay. Some of the things that can cause a variation in disk usage
> (a rough sketch of how to check the first two follows below):
> - Pending self-heals in gluster: check that `gluster volume heal <volname>
> info` doesn't show any entries, and that there is nothing under the
> `.glusterfs/landfill` folder of the bricks.
> - XFS speculative preallocation.
> - Possibly some bug in self-healing of sparse files by gluster (although
> we fixed the known bugs in this area a long time back).
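>
> A rough sketch of the first two checks (the volume name "vms" and brick
> path are placeholders):
>
>   # pending self-heals
>   gluster volume heal vms info
>   # leftover entries in the brick's landfill directory
>   ls -la /gluster/vms/brick/.glusterfs/landfill
>
> XFS speculative preallocation can be bounded with the allocsize mount
> option (e.g. allocsize=64m) if it turns out to be the cause.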
>
> Regards
> Ravi
>
>
> Thanx,
> Alex
>
> On Sep 18, 2017 07:00, "Ravishankar N" <ravishankar@redhat.com> wrote:
>
>>
>>
>> On 09/17/2017 08:41 PM, Alex K wrote:
>>
>> Hi all,
>>
>> I have a replica 3 volume with 1 arbiter.
>> When checking the gluster volume bricks, they are reported as using
>> different space, as per the attachment. How come they use different space?
>> One would expect them to use exactly the same space since they are replicas.
>>
>> The 3rd brick (arbiter) only holds metadata, so it would not consume as
>> much space as the other 2 data bricks. So what you are seeing is expected
>> behaviour.
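>>
>> Which brick is the arbiter can be confirmed from the volume layout (the
>> volume name is a placeholder); for an arbiter volume the brick count is
>> shown as "1 x (2 + 1) = 3" and the third brick listed is the arbiter:
>>
>>   gluster volume info vms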
>> Regards,
>> Ravi
>>
>> Thanx,
>> Alex
>>
>>
>>
>>
>>
>
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users