Nic,
I didn’t see what version of gluster you were running. There was a leak that caused
similar behavior for me in early 6.x versions, but it was fixed in 6.6 (I think; you’d
have to find it in the bugzilla to be sure) and I haven’t seen it in a while. I'm not sure
it’s exactly your symptom (mine would pause after running for a while, not immediately), but
it might be worth checking on.
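If it helps, the version is quick to check on any gluster node, e.g.:

    gluster --version
    rpm -q glusterfs-server   # assuming an RPM-based install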
-Darrell
On Mar 28, 2020, at 12:26 PM, Nir Soffer <nsoffer(a)redhat.com> wrote:
On Sat, Mar 28, 2020 at 1:59 PM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
>
> On March 28, 2020 11:03:54 AM GMT+02:00, Gianluca Cecchi <gianluca.cecchi(a)gmail.com> wrote:
>> On Sat, Mar 28, 2020 at 8:39 AM Strahil Nikolov <hunter86_bg(a)yahoo.com>
>> wrote:
>>
>>> On March 28, 2020 3:21:45 AM GMT+02:00, Gianluca Cecchi <
>>> gianluca.cecchi(a)gmail.com> wrote:
>>>
>>>
>> [snip]
>>
>>>> Actually it only happened with empty disks (thin provisioned) and
>>>> sudden high I/O during the initial phase of the OS install; it didn't
>>>> happen then during normal operation (even with 600 MB/s of throughput).
>>>
>>
>> [snip]
>>
>>
>>> Hi Gianluca,
>>>
>>> Is it happening to machines with preallocated disks or on machines
>> with
>>> thin disks ?
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>
>> Thin provisioned. But as I have to create many VMs with 120 GB of disk
>> size, of which probably only a part will be allocated over time, it would
>> be unfeasible to make them all preallocated. I learned that thin is not
>> good for block-based storage domains and heavy I/O, but I would hope that
>> the same is not true for file-based storage domains...
>> Thanks,
>> Gianluca
>
> This is normal - gluster cannot allocate the needed shards fast enough (due to high
> I/O), so qemu pauses the VM until storage is available again.
I don't know glusterfs internals, but I think this is very unlikely.
For block storage thin provisioning in vdsm, vdsm is responsible for allocating
more space, but vdsm is not in the datapath; it monitors the allocation and
allocates more space when free space reaches a limit. It has no way to block I/O
until more space is available. Gluster is in the datapath and can block I/O
until it can process it.
Can you explain what the source of this theory is?
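For reference, the watermark behavior described above is tunable in the [irs]
section of /etc/vdsm/vdsm.conf; the values below are the defaults as I recall
them, so verify against your vdsm version:

    [irs]
    # extend a thin LV when its used space crosses this percent of its size
    volume_utilization_percent = 50
    # how much to extend by on each step, in MB
    volume_utilization_chunk_mb = 1024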
> You can think of VDO (with deduplication) as a PV for the thin LVM; this way
> you can preallocate your VMs while saving space (deduplication, zero-block
> elimination and even compression).
> Of course, VDO will reduce performance (unless you have a battery-backed write
> cache and compression is disabled), but the benefits will be a lot greater.
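> For example, a minimal sketch of that layout (device name and sizes are
> placeholders - adjust for your environment):
>
>     # VDO volume with dedup on and compression off
>     vdo create --name=vdo1 --device=/dev/sdb --vdoLogicalSize=10T --compression=disabled
>     # LVM on top, with preallocated LVs for the VM disks
>     pvcreate /dev/mapper/vdo1
>     vgcreate vg_vms /dev/mapper/vdo1
>     lvcreate -L 120G -n vm01_disk vg_vms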
>
> Another approach is to increase the shard size - gluster will create fewer
> shards, but on-disk allocation will be higher.
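> Something like the following (the volume name is a placeholder, and note that
> the new shard size only applies to files created after the change):
>
>     gluster volume set myvol features.shard-block-size 512MB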
>
> Best Regards,
> Strahil Nikolov