Possibly. I don't think changing shard size on the fly is supported,
especially when there are files on the volume that are sharded with a
different size.
-Ravi
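[Editor's note: a quick way to confirm the shard size actually in effect, and the size an individual file was written with, is sketched below. The volume name `vms` and the brick path are assumptions, not taken from the thread.]

```shell
# Current shard size configured on the volume (volume name "vms" assumed):
gluster volume get vms features.shard-block-size

# Each sharded file records the shard size it was written with in an xattr
# on the brick copy of the file (brick path here is only an example):
getfattr -n trusted.glusterfs.shard.block-size -e hex \
    /gluster/brick/vms/path/to/file
```

Files written before the option was changed keep their original shard size, which is why mixing sizes on one volume is awkward.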
On 09/18/2017 11:40 AM, Alex K wrote:
The heal status shows that no files are pending healing (this is also
shown in the GUI).
When checking the bricks on the file system, I see that what differs
between the servers is the .shard folder of the volume: one server
reports 835 GB while the other reports 1.1 TB.
I recall having changed the shard size at some point from 4 MB to 64 MB.
Could this be the cause?
Thanx,
Alex
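[Editor's note: the per-brick comparison described above can be reproduced with something like the following, run on each server; the brick mount path is an assumption.]

```shell
# Total size of the shard store on this brick:
du -sh /gluster/brick/vms/.shard

# Number of shard files; comparing the counts across servers narrows down
# whether the difference comes from extra files or from larger files:
find /gluster/brick/vms/.shard -type f | wc -l
```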
On Mon, Sep 18, 2017 at 8:14 AM, Ravishankar N <ravishankar@redhat.com> wrote:
On 09/18/2017 10:08 AM, Alex K wrote:
> Hi Ravishankar,
>
> I am not referring to the arbiter volume (which is showing 0%
> usage). I am referring to the other 2 volumes, which are replicas
> and should have exactly the same data. Checking the status of the
> other bricks in oVirt (the bricks used for the iso and export
> domains), I see that they all report the same usage of data,
> except for the "vms" volume used for storing VMs.
Ah, okay. Some of the things that can cause a variation in disk
usage:
- Pending self-heals in gluster (check that `gluster volume heal
<volname> info` shows no entries, and whether there is anything
under the `.glusterfs/landfill` folder of the bricks).
- XFS speculative preallocation
- Possibly some bug in self-healing of sparse files by gluster
(although we fixed known bugs a long time back in this area).
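[Editor's note: on the sparse-file point, a file's apparent size and its allocated size can diverge, which is one way two replicas end up reporting different disk usage for identical data. A minimal illustration with plain GNU/Linux tools, no gluster involved:]

```shell
# Create a 100 MB sparse file: the apparent size is 100 MB, but almost no
# blocks are allocated until data is actually written into it.
truncate -s 100M sparse.img

apparent=$(stat -c %s sparse.img)             # apparent size in bytes
actual=$(( $(stat -c %b sparse.img) * 512 ))  # allocated 512-byte blocks

echo "apparent: $apparent bytes, allocated: $actual bytes"
```

If a replica heals such a file without preserving sparseness (or XFS keeps speculative preallocation around), its brick reports far more usage than the other copy even though the contents match.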
Regards
Ravi
>
> Thanx,
> Alex
>
> On Sep 18, 2017 07:00, "Ravishankar N" <ravishankar@redhat.com> wrote:
>
>
>
> On 09/17/2017 08:41 PM, Alex K wrote:
>> Hi all,
>>
>> I have a replica 3 volume with 1 arbiter.
>> When checking the gluster volume bricks, they are reported as
>> using different amounts of space, as per the attached. How come
>> do they use different space? One would expect them to use exactly
>> the same space since they are replicas.
>>
> The 3rd brick (the arbiter) only holds metadata, so it would
> not consume as much space as the other 2 data bricks. So what
> you are seeing is expected behaviour.
> Regards,
> Ravi
>> Thanx,
>> Alex
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>