Disk Allocation policy changes after a snapshot

Hi, I have a Linux VM with 2 disks: one for the OS, which is sparse, and the other for a database, which is preallocated. When I take a snapshot of the VM, both disks change to the sparse policy, but the disks in the snapshot are 1 sparse and 1 preallocated. Before the snapshot the VM was running fine; now it crashes when data is written to the database. When I delete the snapshot, the disks go back to 1 sparse and 1 preallocated. Has anyone else seen this happen? oVirt is 4.3.2.1-1.el7 and it is running on a hosted engine.

Many thanks,
Kevin

Hey Kevin,

By design, when creating a snapshot, the new volume is created with the 'sparse' allocation policy. I suggest you open a bug, since this operation should not crash the VM. Add this description, and please attach all relevant logs and any other relevant information about your environment.

Regards,
Evelina

On Tue, Aug 13, 2019 at 4:10 PM Kevin Doyle <kevin.doyle@manchester.ac.uk> wrote: [snip]
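As background on what the 'sparse' policy means in practice: a sparse volume advertises its full virtual size but only allocates storage as data is actually written. A minimal sketch of the same idea at the file level (illustration only; oVirt block storage uses thin LVs rather than sparse files, and none of this is oVirt code):

```python
import os
import tempfile

# Create an empty file and give it a large apparent size without writing data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.truncate(path, 1 << 30)      # apparent size: 1 GiB, but nothing is written

st = os.stat(path)
apparent = st.st_size           # the size the file claims to have
allocated = st.st_blocks * 512  # bytes actually allocated on disk

# The sparse file claims 1 GiB but has allocated almost nothing yet;
# real allocation only happens when data is written into it.
assert apparent == 1 << 30
assert allocated < apparent
os.unlink(path)
```

On block storage the flip side of this is the problem discussed in this thread: if writes arrive faster than the thin volume underneath can be extended, the guest sees an I/O error or gets paused, which matches the crash-under-database-writes symptom described above.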

On Thu, Aug 15, 2019 at 10:35 AM Evelina Shames <eshames@redhat.com> wrote:
[snip]
Hi, some clarification needed: what kind of storage are you using? If block-based (iSCSI or FC-SAN), I verified problems with sparse-allocated disks and databases (Oracle, in my case) during high I/O on datafiles. So, as you did, I used preallocated disks for database data.

For fine-tuning of the automatic LVM extensions in the case of block-based storage domains, see also this 2017 thread: https://lists.ovirt.org/archives/list/users@ovirt.org/thread/S3LXEJV3V4CIOTQ...

I'm not currently using it, though, with recent versions of oVirt, so I have no "fresh" information about its efficiency, which depends on the amount of I/O load. HTH anyway, Gianluca

On Thu, Aug 15, 2019 at 8:30 PM Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
[snip]
Hi, some clarification needed: what kind of storage are you using? If block-based (iSCSI or FC-SAN), I verified problems with sparse-allocated disks and databases (Oracle, in my case) during high I/O on datafiles.
What kind of problems did you have? Do we have a bug for this?
So, as you did, I used preallocated disks for database data.
For best performance, we always recommend preallocated disks. I think you will get the best results with a direct LUN for applications that need the best performance.
For fine-tuning of the automatic LVM extensions in the case of block-based storage domains, see also this 2017 thread:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/S3LXEJV3V4CIOTQ... I'm not currently using it, though, with recent versions of oVirt, so I have no "fresh" information about its efficiency, which depends on the amount of I/O load. HTH anyway, Gianluca

On Fri, Aug 16, 2019 at 12:37 AM Nir Soffer <nsoffer@redhat.com> wrote: [snip]
Hi,
some clarification needed: what kind of storage are you using? If block-based (iSCSI or FC-SAN), I verified problems with sparse-allocated disks and databases (Oracle, in my case) during high I/O on datafiles.
What kind of problems did you have? Do we have a bug for this?
The problems are the ones I described in the thread referred to in my previous answer: https://lists.ovirt.org/archives/list/users@ovirt.org/thread/S3LXEJV3V4CIOTQ... The VM was paused due to a storage I/O problem. Tuning the VDSM-related parameters volume_utilization_percent and volume_utilization_chunk_mb mitigated the probability of occurrence, but didn't solve it definitively. I didn't open a Bugzilla for that.
So, as you did, I used preallocated disks for database data.
For best performance, we always recommend preallocated disks. I think you will get the best results with a direct LUN for applications that need the best performance.
And I did it, as you recommended, despite the non-optimized storage allocation, and it solved the VM-paused error. This was also the reason why I didn't open a Bugzilla: I supposed the answer would have been to use preallocated disks, and that thin provisioning was not to be used in this scenario. Thanks, Gianluca
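For readers wondering how the two [irs] parameters interact: my reading of the vdsm.conf comments is that, together, they set the minimal free space left in a thin block volume before VDSM extends it. A hedged sketch of that rule (the formula is an assumption based on those comments, not code from VDSM; 50 and 1024 are the documented defaults):

```python
def free_space_threshold_mb(utilization_percent: int, chunk_mb: int) -> float:
    """Free space (MB) below which VDSM is assumed to request an LV extension."""
    return chunk_mb * (100 - utilization_percent) / 100

# Defaults: extend when less than 512 MB of the volume is left free.
assert free_space_threshold_mb(50, 1024) == 512
# Kevin's values: extend when less than 1536 MB is free -- i.e. much earlier
# and in bigger steps, so heavy writes are less likely to outrun the
# extension and pause the VM.
assert free_space_threshold_mb(25, 2048) == 1536
```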

Hi, I agree with everyone about using preallocated disks for the DB, and that is the setup I have. The issue I am raising is that when you create a snapshot of a preallocated disk, it changes to a sparse disk, and because of this change in policy the VM crashes under heavy writes. I modified vdsm.conf and added, under the [irs] section, volume_utilization_percent = 25 and volume_utilization_chunk_mb = 2048. This has cured the VM crashing under intense writes after I have created a snapshot. My setup is using iSCSI disks.
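For reference, the change Kevin describes would look like this in VDSM's configuration (a sketch: /etc/vdsm/vdsm.conf is the standard location, the values are the ones he quotes, and vdsmd needs a restart on the host to pick them up):

```ini
# /etc/vdsm/vdsm.conf
[irs]
# Extend thin block volumes earlier (documented default: 50) ...
volume_utilization_percent = 25
# ... and in larger 2 GiB chunks (documented default: 1024)
volume_utilization_chunk_mb = 2048
```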
participants (4)
- Evelina Shames
- Gianluca Cecchi
- Kevin Doyle
- Nir Soffer