On Thu, Feb 2, 2017 at 1:34 PM, Gianluca Cecchi
<gianluca.cecchi(a)gmail.com> wrote:
On Thu, Feb 2, 2017 at 12:09 PM, Yaniv Kaul <ykaul(a)redhat.com>
wrote:
>>> I decided to switch to preallocated disks for further tests, to confirm
>>> this. So I created a snapshot and then a clone of the VM, changing the
>>> allocation policy of the disk to preallocated.
>>> So far so good.
>>>
>>> Feb 2, 2017 10:40:23 AM VM ol65preallocated creation has been completed.
>>> Feb 2, 2017 10:24:15 AM VM ol65preallocated creation was initiated by
>>> admin@internal-authz.
>>> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65'
>>> has been completed.
>>> Feb 2, 2017 10:22:31 AM Snapshot 'for cloning' creation for VM 'ol65'
>>> was initiated by admin@internal-authz.
>>>
>>> So the throughput seems OK for this storage type (the LUNs are on RAID5
>>> made with SATA disks): 16 minutes to write 90 GB is about 96 MBytes/s,
>>> which is what I expected.
>>
>> What is your expectation? Is it FC or iSCSI? How many paths? What is the
>> I/O scheduler in the VM? Is it using virtio-blk or virtio-SCSI?
>> Y.
>
> Peak bandwidth of no more than 140 MBytes/s, based on the storage
> capabilities, but I am not trying to run a crude performance test; I need
> stability.
> Each host has a mezzanine dual-port HBA (4 Gbit); each port is connected
> to a different FC switch, and the multipath device has 2 active paths
> (one per HBA port).
> I confirm that with the preallocated disk of the cloned VM I indeed no
> longer see the previous problems.
> The same loop executed about 66 times in a 10-minute interval without any
> problem registered on the hosts.
> No messages at all in /var/log/messages on either host.
> My storage domain was not compromised.
> The question about thin provisioning on SAN LUNs (i.e. with LVM-based
> disks) remains important, though.
> In my opinion, I shouldn't have to care about the kind of I/O done inside
> a VM, and in any case it shouldn't interfere with my storage domain,
> bringing down my hosts/VMs completely.
> In theory, an application inside a VM could generate something similar to
> my loop and thus cause the same problems.
> Sure, I can then notify the person responsible for the VM about his/her
> workload, but it should not compromise my virtual infrastructure.
> I could have an RDBMS inside a VM and a user that creates a big datafile,
> which would imply many extend operations if the disk is thin
> provisioned....
>
> What about the [irs] values? Where are they located, in vdsm.conf?
Yes, but you should not modify them in vdsm.conf.
> What are the defaults for volume_utilization_percent and
> volume_utilization_chunk_mb?
> Did they change from 3.6 to 4.0 to 4.1?
No, the defaults did not change in the last 3.5 years.
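(For the record, if I read vdsm's defaults correctly, they are
volume_utilization_percent = 50 and volume_utilization_chunk_mb = 1024,
meaning a thin LV is extended by 1024 MiB whenever its free space drops
below 512 MiB.)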
In 4.0 we introduced dropin support, and this is the recommended way
to perform configuration changes.
To change these values, you create a file such as
/etc/vdsm/vdsm.conf.d/50_my.conf
The exact name of the file does not matter: vdsm reads all files in the
vdsm.conf.d directory, sorts them by name (this is why you should use
the 50_ prefix), and applies the changes to the configuration.
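Conceptually the merge works like this little Python sketch (an
illustration only, not vdsm's actual code):

    import configparser
    import glob

    # Start from the base configuration file.
    cfg = configparser.ConfigParser()
    cfg.read('/etc/vdsm/vdsm.conf')

    # Layer the dropins on top, in sorted order; options read from a
    # later file override the same options read from an earlier one.
    for path in sorted(glob.glob('/etc/vdsm/vdsm.conf.d/*.conf')):
        cfg.read(path)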
In the dropin file you put the sections and options you need, for example:
[irs]
volume_utilization_percent = 25
volume_utilization_chunk_mb = 2048
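These two options control the watermark logic for thin provisioned disks
on block storage: vdsm extends the LV by volume_utilization_chunk_mb
whenever its free space drops below (100 - volume_utilization_percent)
percent of one chunk. Roughly like this sketch (an illustration only, not
vdsm's actual code; the names are made up):

    # Decide whether a thin LV should be extended (toy version).
    def needs_extension(physical_mb, allocated_mb,
                        utilization_percent=25, chunk_mb=2048):
        # Space still unused in the thin LV.
        free_mb = physical_mb - allocated_mb
        # Extend when free space drops below
        # (100 - utilization_percent)% of one chunk.
        threshold_mb = (100 - utilization_percent) * chunk_mb // 100
        return free_mb < threshold_mb

With the values above, the LV grows by 2048 MiB as soon as less than
1536 MiB is free, which should make it less likely that a fast writer
inside the VM outruns the extension.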
> What should I do after changing them to make them active?
Restart vdsm
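For example, on a systemd based host:

    systemctl restart vdsmd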
Using this method, you can provision the same file to all hosts
using standard provisioning tools.
It is not recommended to modify /etc/vdsm/vdsm.conf. If you do, you
will have to manually merge changes from vdsm.conf.rpmnew after
upgrading vdsm.
Nir