<div dir="ltr">Recommended would be creating a new storage domain with shard size as 64 MB and migrating all the disks from 4MB storagedomain</div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Sep 18, 2017 at 12:01 PM, Ravishankar N <span dir="ltr"><<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
> Possibly. I don't think changing the shard size on the fly is supported, especially when there are files on the volume that were already sharded with a different size.
>
> -Ravi
>
> On 09/18/2017 11:40 AM, Alex K wrote:
>
<div dir="ltr">
<div>
<div>
<div>
<div>The heal status is showing that no pending files need
healing (also shown at GUI). <br>
When checking the bricks on the file system I see that
what is different between the server is the .shard
folder of the volume. One server reports 835GB while the
other 1.1 TB. <br>
</div>
I recall to have changed the shard size at some point from
4 MB to 64MB. <br>
</div>
Could this be the cause?<br>
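>>
>> A rough way to compare what the .shard trees actually hold on each server (the brick path below is only a placeholder for the real brick directory):
>>
>>     du -sh /gluster/vms/brick/.shard                   # on-disk usage
>>     du -sh --apparent-size /gluster/vms/brick/.shard   # logical (apparent) size
>>     find /gluster/vms/brick/.shard -type f | wc -l     # number of shard files
>>
>> If the apparent sizes and file counts match on both servers while the on-disk usage differs, the contents are the same and only the allocation (sparseness) of the shards differs.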
>>
>> Thanx,
>> Alex
<div class="gmail_extra"><br>
<div class="gmail_quote">On Mon, Sep 18, 2017 at 8:14 AM,
Ravishankar N <span dir="ltr"><<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"><span> <br>
<div class="m_-650125452528033745m_-4884260715937754621moz-cite-prefix">On
09/18/2017 10:08 AM, Alex K wrote:<br>
</div>
<blockquote type="cite">
<div dir="auto">Hi Ravishankar,
<div dir="auto"><br>
</div>
<div dir="auto">I am not referring to the arbiter
volume(which is showing 0% usage). I am referring
to the other 2 volumes which are replicas and
should have the exact same data. Checking the
status of other bricks in ovirt (bricks used from
iso and export domain) I see that they all report
same usage of data on the data volumes, except the
"vms" volume used for storing vms.</div>
</div>
</blockquote>
>>>
>>> Ah, okay. Some of the things that can cause a variation in disk usage:
>>> - Pending self-heals in gluster: check that `gluster volume heal <volname> info` shows no entries, and whether there is anything under the `.glusterfs/landfill` folder of the bricks (a quick form of this check is sketched after this list).
>>> - XFS speculative preallocation.
>>> - Possibly some bug in self-healing of sparse files by gluster (although we fixed the known bugs in this area a long time back).
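>>>
>>> For example (the brick path is only a placeholder for the real brick directory):
>>>
>>>     gluster volume heal vms info                     # should report 0 entries for every brick
>>>     ls -lA /gluster/vms/brick/.glusterfs/landfill    # should be empty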
>>>
>>> Regards,
>>> Ravi
>>>
>>>> Thanx,
>>>> Alex
>>>>
>>>> On Sep 18, 2017 07:00, "Ravishankar N" <ravishankar@redhat.com> wrote:
>>>>
>>>>> On 09/17/2017 08:41 PM, Alex K wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I have a replica 3 volume with 1 arbiter.
>>>>>> When checking the gluster volume bricks, they are reported as using different amounts of space, as per the attachment. How come they use different space? One would expect them to use exactly the same space, since they are replicas.
>>>>>
>>>>> The 3rd brick (the arbiter) only holds metadata, so it will not consume as much space as the other 2 data bricks. What you are seeing is expected behaviour.
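>>>>>
>>>>> For example, picking any image file that exists on all three bricks (the paths below are placeholders), the two data bricks report the full file size while the arbiter brick holds a zero-byte file carrying only the metadata:
>>>>>
>>>>>     stat -c '%n: %s bytes' /gluster/vms/brick/<some-image>     # full size on a data brick
>>>>>     stat -c '%n: %s bytes' /gluster/vms/arbiter/<some-image>   # 0 bytes on the arbiter brick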
>>>>> Regards,
>>>>> Ravi
>>>>>
>>>>>> Thanx,
>>>>>> Alex