<p dir="ltr"><br>
On Jun 14, 2016 5:37 PM, "Fernando Frediani" <<a href="mailto:fernando.frediani@upx.com.br">fernando.frediani@upx.com.br</a>> wrote:<br>
><br>
> Hi Nir,<br>
><br>
> I wouldn't say that LVM performance is significantly better than a well-built filesystem. In VMware, a VMDK running on top of VMFS5 shows no significant gain over an RDM, or vice versa. I've always preferred to keep machines on a filesystem for ease of management, and in some cases with hundreds of them on a single filesystem I never faced performance issues. The bottleneck normally comes down to the storage architecture (storage controller, RAID config, etc.).<br>
><br>
> The multipath is certainly a plus that helps in certain cases.<br>
></p>
<p dir="ltr">Extended scalability (200 node clusters) and no bottlenecks around scsi3 pr are another couple of pluses.</p>
<p dir="ltr">> I guess the answer to my original question is clear. If I want to use block storage shared among different hosts there is no choice in oVirt other than LVM.<br>
> In a particular case I have a storage shared via a kind of internal SAS backplane to all servers. The only alternative to that would be dedicate a server to own the storage and export it as NFS, but in that case there would be some looses in the terms of hardware an reliability.</p>
<p dir="ltr">If your SAS storage is exposed to multiple hosts and presents the same wwid to all clients you can set it up as fc, as long as multipath can detect it. DDAS like dell md3000 works great this way.</p>
<p dir="ltr">><br>
> Thanks<br>
> Fernando<br>
><br>
><br>
> On Tue, Jun 14, 2016 at 11:23 PM, Fernando Frediani <<a href="mailto:fernando.frediani@upx.com.br">fernando.frediani@upx.com.br</a>> wrote:<br>
>>><br>
>>> Hi Nir,<br>
>>> Thanks for clarification.<br>
>>><br>
>>> Answering your questions: The intent was to use a POSIX-like filesystem<br>
>>> similar to VMFS5 (GFS2, OCFS2, or another) where you have no choice in how<br>
>>> the block storage is presented to multiple servers. Yes, I had heard about<br>
>>> GFS2 scalability issues in the past, but thought they were gone nowadays;<br>
>>> it seems not.<br>
>>><br>
>>> I had the impression that qcow2 images have both thin-provisioning and<br>
>>> snapshot capabilities.<br>
>><br>
>> Yes, using file-based storage you get both snapshots and thin provisioning;<br>
>> this is the most reliable way to get thin provisioning in oVirt.<br>
>><br>
>> But then you pay the filesystem overhead, whereas on block storage the qemu<br>
>> image uses the LV directly.<br>
>><br>
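</p>
<p dir="ltr">As a quick illustration of the thin provisioning mentioned above, a sketch using qemu-img (the file name and size are arbitrary examples):</p>
<pre>
#!/usr/bin/env python3
# Sketch: create a thin-provisioned qcow2 image and compare its virtual size
# with the space it actually consumes on the filesystem.
import json
import subprocess

image = "example-disk.qcow2"  # arbitrary example file name

# Create a 50G qcow2 image; only metadata is allocated up front.
subprocess.run(["qemu-img", "create", "-f", "qcow2", image, "50G"], check=True)

# qemu-img info reports both the virtual size and the bytes actually allocated.
info = json.loads(
    subprocess.run(
        ["qemu-img", "info", "--output=json", image],
        capture_output=True, text=True, check=True,
    ).stdout
)
print("virtual size:", info["virtual-size"], "bytes")
print("actual size: ", info["actual-size"], "bytes")
</pre>
<p dir="ltr">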
>> With block storage we use multipath, so if you have multiple NICs and networks,<br>
>> you get better reliability and performance.<br>
>><br>
>>> Regarding LVM, I don't like the idea of having VMs buried inside an LVM<br>
>>> volume, nor of troubleshooting LVM volumes when necessary. Dealing with a<br>
>>> qcow2 image per VM makes several tasks much easier. I would say that people<br>
>>> coming from VMware would rather deal with a VMDK than with an RDM LUN. On<br>
>>> the other hand, I have nothing to say against LVM performance.<br>
>><br>
>> LVM has its own issues with many LVs on the same VG - we recommend using up<br>
>> to 350 LVs per VG. If you need more, you need to use another VG.<br>
>><br>
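</p>
<p dir="ltr">For reference, a small sketch to keep an eye on the LV count per VG; 350 is just the recommendation above, and the vgs report fields are standard LVM ones:</p>
<pre>
#!/usr/bin/env python3
# Sketch: list volume groups and flag any that reach the ~350 LVs per VG
# recommendation discussed above. Needs enough privileges to run `vgs`.
import subprocess

LIMIT = 350  # recommended maximum LVs per VG

out = subprocess.run(
    ["vgs", "--noheadings", "-o", "vg_name,lv_count"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    fields = line.split()
    if len(fields) != 2:
        continue
    vg, lv_count = fields[0], int(fields[1])
    note = "  (consider another VG)" if lv_count >= LIMIT else ""
    print(f"{vg}: {lv_count} LVs{note}")
</pre>
<p dir="ltr">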
>> The best would be to try both and use the best storage for the particular<br>
>> use case.<br>
>><br>
>> Nir<br>
><br>
><br>
> _______________________________________________<br>
> Users mailing list<br>
> <a href="mailto:Users@ovirt.org">Users@ovirt.org</a><br>
> <a href="http://lists.ovirt.org/mailman/listinfo/users">http://lists.ovirt.org/mailman/listinfo/users</a><br>
</p>