[ovirt-users] ovirt and glusterfs setup
Donny D
donny at cloudspin.me
Wed Feb 18 15:21:08 EST 2015
I did not have a good experience putting both Gluster and virt on the same node. I was doing hosted engine with replication across two nodes, and one day it went into split-brain hell... I was never able to track down why. However, I do have a Gluster setup with distribute and replica on its own couple of nodes, and it has given me zero problems in the last 60 days. It seems to me that Gluster and virt need to stay separate for now. Both are great products and both work as described, just not on the same node at the same time.
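For reference, that standalone volume is just a plain distributed-replicated
setup. Roughly (the hostnames, volume name and brick paths below are
placeholders for my real ones):

  # from one node, add the other peer to the trusted pool
  gluster peer probe gluster2.example.com

  # two bricks per node with replica 2: data is distributed over two
  # replica pairs, and each pair spans both nodes
  gluster volume create vmstore replica 2 \
      gluster1.example.com:/bricks/b1/vmstore gluster2.example.com:/bricks/b1/vmstore \
      gluster1.example.com:/bricks/b2/vmstore gluster2.example.com:/bricks/b2/vmstore
  gluster volume start vmstore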
-------- Original message --------
From: George Skorup <george at mwcomm.com>
Date: 02/18/2015 12:50 PM (GMT-07:00)
To: users at ovirt.org
Subject: Re: [ovirt-users] ovirt and glusterfs setup
Bill,
I have done exactly what you're looking to do. I was trying to mimic
vSAN as well. They had VSA for a while, which had acceptable
licensing costs, but that was replaced with vSAN, which is
ridiculously expensive for an extra-small business.
I have a four-node cluster with 1TB of storage each. Gluster is
configured with replica 4, so basically I have 1TB of usable
storage, which is fine for my needs. VM migration, Gluster
replication, hosted engine and all that works fine. Performance is
generally fine, even with only dual LACP-bonded 1GbE NICs in each
node. I can do what I want with networking to fit our NOC and office
network environment.
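If it helps, the bonds are nothing special, just 802.3ad (LACP) bonds on
each host. A rough sketch of the ifcfg files (interface names and the
address are placeholders; on an oVirt host the engine normally writes
these for you when you set up the host networks):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  TYPE=Bond
  BONDING_MASTER=yes
  BONDING_OPTS="mode=802.3ad miimon=100"
  BOOTPROTO=none
  IPADDR=192.168.1.11
  PREFIX=24
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-em1 (and the same for em2)
  DEVICE=em1
  TYPE=Ethernet
  MASTER=bond0
  SLAVE=yes
  BOOTPROTO=none
  ONBOOT=yes

The switch ports need a matching LACP port-channel, of course.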
I have been playing with this for about three weeks. Over the
weekend, I had a handful of VMs running, including the hosted
engine. They were pretty much sitting idle doing nothing. I came in
Monday and found everything offline. No power outages,
network/switch didn't fail or reboot, none of the hardware reset.
What I found was that glusterd went nuts on two of the
nodes (I have no idea why). Gluster was spitting out logs like
crazy, /var filled up, then RAM and swap were depleted. Two gluster
processes/hosts offline out of the four meant quorum broke and
everything came to a halt. I was unable to recover the gluster logs
since they had to be deleted to free up space in /var.
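For what it's worth, that halt is consistent with Gluster's quorum
enforcement: with only two of the four peers left, the remaining bricks
stop accepting writes (or glusterd takes them down) rather than risk
split-brain. The behaviour is controlled by a few volume options, roughly
like this (the volume name is a placeholder and defaults vary between
Gluster versions):

  # client-side quorum: a replica set only accepts writes while a
  # majority of its bricks are reachable
  gluster volume set vmstore cluster.quorum-type auto

  # server-side quorum: glusterd stops local bricks when it can no
  # longer see more than half of the trusted pool
  gluster volume set vmstore cluster.server-quorum-type server
  gluster volume set all cluster.server-quorum-ratio 51%

  # worth checking after an incident like this
  gluster peer status
  gluster volume status vmstore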
I was able to get everything fixed and back online in about 2-1/2
hours, so this is impossible to put into production. Storage is the
weakest link and most likely to fail, and in this case it did.
I do not recommend this configuration at all. A dedicated machine
for the engine, dedicated hypervisors, and dedicated storage nodes or
a SAN are needed for anything beyond experimentation.
I have zero knowledge of VMware's vSAN, other than basic concepts,
so I cannot say whether the results would be similar.
On 2/18/2015 5:32 AM, Bill Dossett wrote:
Hi,
I'm in at the deep end…
Looking for some advice on whether this is possible, and what
version I should try it with if so.
We are a heavily VMware-oriented company… but I am trying to get
RHEV/oVirt in the door. Honestly I would prefer oVirt, but if they
insist on enterprise support I'll go to Red Hat.
So I have had a play with VMware VSAN and know how it works. I am
trying to more or less replicate this setup using open source. VSAN
uses 3 or more ESXi nodes, with local hard disks as the storage
medium for a virtual SAN. The SAN performance is quite high because
you put at least one SSD in each node.
What is nice is that you don't have the NAS element here. VMs are on
the local storage, which is partially SSD, so performance is quite
good.
I went to a Red Hat presentation on Red Hat Storage and GlusterFS,
and basically it acts as a big software-defined NAS which does some
pretty cool things, but that's not exactly what I need.
I would like to build oVirt on top of CentOS OSes… that have local
storage in them that is distributed and redundant in the event of a
node failure.
And I probably need to try and build this in a lab under Fusion on
my MacBook Pro to begin with anyway (that bit I'm fairly confident
with), and if I get that working I can probably get some older kit
to try it out for real as a PoC for a few people.
So, I've set up oVirt before, so that should be OK. I haven't set up
Gluster; are there any documents that would help me down this road
and make sure I start out using the best version?
Any advice or pointers would be gratefully
received.
Thanks
Bill Dossett
Systems Architect
Tech Central – Global Engineering Services
T +44 (0)1923 279353
M +44 (0)777 590 8612
bill.dossett at pb.com
pitneybowes.com
Pitney Bowes
6 Hercules Way | Leavesden | WD25 7GS | UK
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users