On Tue, Jun 14, 2016 at 11:23 PM, Fernando Frediani
<fernando.frediani@upx.com.br> wrote:
> Hi Nir,
> Thanks for the clarification.
> Answering your questions: the intent was to use a POSIX-like filesystem
> similar to VMFS5 (GFS2, OCFS2, or another), where you have no choice in how
> the block storage is presented to multiple servers. Yes, I had heard about
> GFS2 scalability issues in the past, but thought they were gone nowadays;
> it seems not.
> I had the impression that qcow2 images have both thin provisioning and
> snapshot capabilities.
Yes, using file-based storage you have both snapshots and thin provisioning;
this is the most reliable way to get thin provisioning in ovirt.
But then you pay the file system overhead, whereas in block storage the qemu
image uses the lv directly.
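For example, the qcow2 capabilities can be seen directly with qemu-img (the
file name and size here are only illustrative; ovirt drives snapshots through
its own flows, this just shows the underlying mechanism):

    # Create a thin-provisioned qcow2 image - space is allocated on write,
    # not up front
    $ qemu-img create -f qcow2 vm-disk.qcow2 50G

    # Take an internal snapshot named 'clean-install'
    $ qemu-img snapshot -c clean-install vm-disk.qcow2

    # Compare the virtual size with the actual bytes used on disk
    $ qemu-img info vm-disk.qcow2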
With block storage we use multipath, so if you have multiple nics and
networks, you get better reliability and performance.
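You can check the path redundancy on a host with the standard multipath
tools:

    # List the multipath topology - each LUN should show multiple active
    # paths, one per nic/network
    $ multipath -ll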
> Regarding LVM, I don't like the idea of having VMs buried inside an LVM
> volume, nor the idea of troubleshooting LVM volumes when necessary. Dealing
> with qcow2 images for every VM separately makes several tasks much easier.
> I would say that people coming from VMware would prefer to deal with a VMDK
> rather than an RDM LUN. On the other hand, I have nothing to say against
> LVM performance.
LVM has its own issues with many lvs on the same vg - we recommend using at
most 350 lvs per vg. If you need more, you need to use another vg.
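A quick way to check how close you are to that limit, using the standard lvm
reporting commands:

    # Number of lvs in each vg
    $ vgs -o vg_name,lv_count

    # Or count them from the lv listing
    $ lvs --noheadings -o vg_name | sort | uniq -c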
The best would be to try both and use the storage that works best for your
particular use case.
Nir