Each shard is a separate file whose size equals the value of "features.shard-block-size".
So when a brick/node is down, only those shards of the VM's disk that were modified in the meantime will be synced when the brick is back up.
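
For example (a rough sketch -- "myvol" stands in for your volume name and the brick path is illustrative), you can check the shard settings and watch heal progress from the CLI:

    # Check whether sharding is on, and at what block size
    gluster volume get myvol features.shard
    gluster volume get myvol features.shard-block-size

    # Change the shard block size; note this only affects files
    # created after the change, not existing images
    gluster volume set myvol features.shard-block-size 64MB

    # After the brick is back up, list entries still pending heal
    gluster volume heal myvol info

    # The shards themselves live under the hidden .shard directory
    # on each brick, named <gfid>.<n>
    ls /gluster/brick1/.shard | head
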
Does that answer your question?

-Krutika

On Wed, Mar 27, 2019 at 7:48 PM Sahina Bose <sabose@redhat.com> wrote:
On Wed, Mar 27, 2019 at 7:40 PM Indivar Nair <indivar.nair@techterra.in> wrote:
>
> Hi Strahil,
>
> Ok. Looks like sharding should make the resyncs faster.
>
> I searched for more info on it, but couldn't find much.
> I believe it will still have to compare each shard to determine whether there are any changes that need to be replicated.
> Am I right?

+Krutika Dhananjay
>
> Regards,
>
> Indivar Nair
>
>
>
> On Wed, Mar 27, 2019 at 4:34 PM Strahil <hunter86_bg@yahoo.com> wrote:
>>
>> By default oVirt uses 'sharding', which splits the files into logical chunks. This greatly reduces healing time: a VM's disk is rarely rewritten in full, so only the shards that differ will be healed.
>>
>> Maybe you should change the default shard size.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Mar 27, 2019 08:24, Indivar Nair <indivar.nair@techterra.in> wrote:
>>
>> Hi All,
>>
>> We are planning a 2 + 1 arbitrated mirrored Gluster setup.
>> We would have around 50 - 60 VMs, with an average 500GB disk size.
>>
>> Now, in case one of the Gluster nodes goes completely out of sync, roughly how long would it take to resync, in your experience?
>> Will it impact the working of VMs in any way?
>> Is there anything we should take care of in advance to prepare for such a situation?
>>
>> Regards,
>>
>>
>> Indivar Nair
>>
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-leave@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/WZW5RRVHFRMAIBUZDUSTXTIF4Z4WW5Y5/