On March 28, 2020 11:03:54 AM GMT+02:00, Gianluca Cecchi <gianluca.cecchi(a)gmail.com> wrote:
On Sat, Mar 28, 2020 at 8:39 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
> On March 28, 2020 3:21:45 AM GMT+02:00, Gianluca Cecchi <gianluca.cecchi(a)gmail.com> wrote:
>
>
[snip]
> >Actually it only happened with empty disks (thin provisioned) and sudden
> >high I/O during the initial phase of the OS install; it didn't happen
> >then during normal operation (even with 600MB/s of throughput).
>
[snip]
> Hi Gianluca,
>
> Is it happening to machines with preallocated disks or on machines with
> thin disks?
>
> Best Regards,
> Strahil Nikolov
>
Thin provisioned. But as I have to create many VMs with 120 GB disks, of
which probably only a part will be allocated over time, it would be
unfeasible to make them all preallocated. I learned that thin is not good
for block-based storage domains and heavy I/O, but I would hope that it is
not the same with file-based storage domains...
Thanks,
Gianluca
This is normal - under sudden high I/O Gluster cannot allocate the needed
shards fast enough, so QEMU pauses the VM until storage is available again.
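(If you want to confirm that this is what is happening, the pause reason can
be checked from libvirt on the host, for example:

  virsh domstate <VM_NAME> --reason

which in this scenario typically reports the domain as "paused (I/O error)"
while Gluster catches up. <VM_NAME> is just a placeholder, of course.)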
You can think about VDO (with deduplication) as a PV for the thin LVM - this
way you can preallocate your VMs while still saving space (deduplication,
zero-block elimination and even compression).
Of course, VDO will reduce performance (unless you have a battery-backed
write cache and compression is disabled), but the benefits will more than
make up for it.
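As a rough, untested sketch (the device, VG/LV names and sizes below are just
placeholders - adjust them to your setup), the layout could look like:

  # VDO device with deduplication on and compression off (to limit the performance hit)
  vdo create --name=vdo_gluster --device=/dev/sdb --vdoLogicalSize=10T --compression=disabled
  # thin LVM on top of the VDO device
  pvcreate /dev/mapper/vdo_gluster
  vgcreate vg_gluster /dev/mapper/vdo_gluster
  lvcreate -L 9T -T vg_gluster/thinpool
  lvcreate -V 5T -T vg_gluster/thinpool -n lv_brick
  mkfs.xfs /dev/vg_gluster/lv_brick

The VM disks can then be preallocated at the oVirt level, while VDO's
zero-block elimination keeps the unwritten space from consuming physical
storage.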
Another approach is to increase the shard size - Gluster will then create
fewer shards, but each allocation on disk will be larger.
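Something like this (untested; 256MB is only an example value, and as far as
I know the new size applies only to files created after the change, so it is
best done before the VM disks exist):

  gluster volume get <VOLNAME> features.shard-block-size
  gluster volume set <VOLNAME> features.shard-block-size 256MB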
Best Regards,
Strahil Nikolov