Hi Mikhail,
Thank you for your suggestion.
Have you had any performance issues with FreeNAS? A few blogs mention that FreeNAS can have
performance problems, though it is not clear why.
A clean CentOS box with NFS also sounds fine. What do you do when you need snapshots of the data?
LVM snapshots, something like the sketch below?
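(A minimal sketch of what I have in mind; the volume group, logical volume and mount point
names are only examples:)

  # create a point-in-time snapshot of the LV backing the NFS export
  lvcreate --size 10G --snapshot --name lv_nfs_snap /dev/vg_data/lv_nfs
  # mount it read-only to browse or back it up
  mount -o ro /dev/vg_data/lv_nfs_snap /mnt/snap
  # clean up when finished
  umount /mnt/snap
  lvremove /dev/vg_data/lv_nfs_snap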
Alex
On December 23, 2016 5:08:42 PM EET, "Краснобаев Михаил" <milo1(a)ya.ru>
wrote:
Hi,
it mainly depends on the budget. I can give you some advice from my own
experience:
SMB-class systems from QNAP or any other vendor don't cope well with the load
that oVirt generates (simultaneous access), because they are usually
built on slow drives.
Using 15K drives helps a bit. I have a CentOS machine that is used only
as file storage (NFS, 4x15K drives in RAID 5); a rough sketch of the export follows below.
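(The path and subnet are only examples, adjust to your environment; oVirt expects the exported
directory to be owned by vdsm:kvm, i.e. 36:36.)

  # /etc/exports on the CentOS box
  /srv/ovirt-storage  192.168.1.0/24(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)

  chown 36:36 /srv/ovirt-storage
  exportfs -ra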
I would suggest trying to build a FreeNAS machine on NL-SAS drives + an
SSD cache. In my opinion it would be the most cost-efficient option (see the ZFS sketch below).
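(Device names are purely illustrative; FreeNAS does this through its GUI, the commands only
show the idea.)

  # six NL-SAS drives in RAIDZ2 plus one SSD as a read cache
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 cache ada0
  zfs set compression=lz4 tank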
Best regards,
Mikhail
23.12.2016, 15:53, "rightkicktech.gmail.com" <rightkicktech(a)gmail.com>:
Hi all,
I am thinking of setting up an environment with oVirt and centralized
storage using a NAS that supports NFS and iSCSI.
The setup will be used to host approximately 20 VMs. The VMs will be running
critical services, not test workloads. I have looked at several options from QNAP,
iXsystems (FreeNAS Mini), ...
What NAS would you recommend for this setup?
Thanx,
Alex
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--
Best regards, Краснобаев Михаил.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.