[ovirt-users] Resilient Storage for Ovirt

Yaniv Kaul ykaul at redhat.com
Mon Apr 2 10:59:51 UTC 2018


On Sat, Mar 24, 2018 at 3:55 AM, Vincent Royer <vincent at epicenergy.ca>
wrote:

> Hi,
>
> I have a 2-node cluster with Hosted Engine attached to a storage domain
> (NFS share) served by WS2016.  I run about a dozen VMs.
>
> I need to improve availability / resilience of the storage domain, and
> also the I/O performance.
>
> Anytime we need to reboot the Windows Server, it's a nightmare for the
> cluster; we have to put it all into maintenance and take it down.  When the
> storage server crashes (has happened once) or Windows decides to install an
> update and reboot (has happened once), the storage domain obviously goes
> down and sometimes the hosts have a difficult time reconnecting.
>
> I can afford a second bare-metal server and am looking for input on the
> best way to provide a highly available storage domain.  Ideally I'd like to
> be able to reboot either storage server without disrupting oVirt. Should I
> be looking at clustering with Windows Server, or moving to a different OS?
>
> I currently run the storage in RAID10 (spinning disks) and have the option
> of adding CacheCade to the array with SSD.  Would that help I/O for small
> random R/W?
>
> What are the suggested options for this scenario?
>

The easiest suggestion would be to move away from NFS. While NFS can be
made highly available (using pNFS and friends), it's not that easy (nor
intuitive from oVirt).
iSCSI or FC are much better suited for the task, with multipathing and
iSCSI bonding (a poor choice of terminology here).
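
If you go the iSCSI route, the storage domain can be added through the
oVirt Python SDK (ovirtsdk4) as well as the UI. A minimal sketch follows;
the engine URL, credentials, names, portal address and LUN ID are all
placeholders for your environment, and adding more logical units /
portals is what gives you multiple paths:

    # Minimal sketch: attach an iSCSI data storage domain via the oVirt
    # Python SDK (ovirtsdk4). All names, addresses, credentials and the
    # LUN ID below are placeholders.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )

    sds_service = connection.system_service().storage_domains_service()

    sds_service.add(
        types.StorageDomain(
            name='iscsi_data',
            type=types.StorageDomainType.DATA,
            data_center=types.DataCenter(name='Default'),
            host=types.Host(name='host1'),  # host that performs the attach
            storage=types.HostStorage(
                type=types.StorageType.ISCSI,
                volume_group=types.VolumeGroup(
                    logical_units=[
                        types.LogicalUnit(
                            id='REPLACE-WITH-LUN-ID',   # LUN WWID reported by the target
                            address='192.168.1.10',     # first portal; add more for multipath
                            port=3260,
                            target='iqn.2018-04.com.example:storage',
                        ),
                    ],
                ),
            ),
        ),
    )

    connection.close()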

You would need to use bonding (this time network bonding) and a
highly available NFS server (most likely with a floating IP between the
nodes) to succeed.
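
The idea with the floating IP is that the hosts always mount one virtual
address, and the cluster software moves that address to the surviving
node. A quick sanity check while you reboot one storage node is a small
probe against the VIP; a minimal sketch (the address is a placeholder,
and this only tests TCP reachability of the NFS port, not the export
itself):

    # Minimal sketch: watch whether the floating NFS IP stays reachable
    # during a failover test. The address below is a placeholder for a
    # hypothetical two-node HA NFS setup.
    import socket
    import time

    FLOATING_IP = '192.168.1.50'   # virtual IP that moves between the NFS nodes
    NFS_PORT = 2049                # NFS over TCP

    def nfs_reachable(host, port=NFS_PORT, timeout=2.0):
        """Return True if a TCP connection to the NFS port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == '__main__':
        # Run this while rebooting one storage node: apart from a short
        # failover window, the floating IP should keep answering.
        while True:
            status = 'UP' if nfs_reachable(FLOATING_IP) else 'DOWN'
            print(time.strftime('%H:%M:%S'), FLOATING_IP, status)
            time.sleep(5)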
Y.


>
> Thanks
>