[ovirt-users] VMWare VSAN like setup with oVirt

Anantha Raghava raghav at exzatechconsulting.com
Tue Jan 31 08:32:14 UTC 2017


Thanks, Nicolas, for the quick reply. Will attempt this and revert in case I
get stuck midway.

-- 

Thanks & Regards,


Anantha Raghava

eXzaTech Consulting And Services Pvt. Ltd.



On Tuesday 31 January 2017 01:59 PM, Nicolas Ecarnot wrote:
> On 31/01/2017 at 09:15, Anantha Raghava wrote:
>> Hi,
>>
>> We are trying to create a setup that uses the internal disks of the
>> hosts / nodes, yet provides high availability, replication and
>> failover using oVirt. The setup we are trying to build is close to
>> VMware VSAN, which allows for all of the above using just the internal
>> disks of the ESXi servers.
>>
>> Can we achieve something similar with oVirt with Gluster?
>
> Absolutely. One of our oVirt setups is built this way.
> Three hosts are set up as GlusterFS servers (replica-3), as well as
> oVirt nodes.
> We chose to add a fourth host as a standalone engine, but you can
> choose to run the engine in a VM instead (a hyperconverged setup).
>
> I have no experience with a similar setup using an arbitrary number of
> nodes, nor do I know whether that is achievable (some kind of network
> RAID-10)... (?)
>

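For reference, a minimal sketch of the Gluster side of the setup Nicolas
describes above, assuming three hosts named host1, host2 and host3, each
with a brick directory at /gluster/bricks/vmstore (all hostnames and paths
here are illustrative, not taken from the thread):

    # From host1, add the other two hosts to the trusted storage pool
    gluster peer probe host2
    gluster peer probe host3

    # Create and start a replica-3 volume spanning the three hosts'
    # internal disks; every file is kept in full on all three bricks
    gluster volume create vmstore replica 3 \
        host1:/gluster/bricks/vmstore \
        host2:/gluster/bricks/vmstore \
        host3:/gluster/bricks/vmstore
    gluster volume start vmstore

The resulting volume can then be attached in oVirt as a GlusterFS storage
domain; the oVirt documentation covers tuning the volume options for VM
storage.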