[ovirt-users] 4-node oVirt with replica-3 gluster
Goorkate, B.J.
b.j.goorkate at umcutrecht.nl
Wed Sep 28 07:12:09 UTC 2016
Hi,
I currently have a couple of VMs with little disk I/O, so I will
put them on the 4th node.
I can even use the 4th node to deploy a brick if one of the replica-3
nodes fails.
Thanks!
Regards,
Bertjan
On Wed, Sep 28, 2016 at 11:50:21AM +0530, Sahina Bose wrote:
>
>
> On Tue, Sep 27, 2016 at 8:59 PM, Goorkate, B.J. <b.j.goorkate at umcutrecht.nl>
> wrote:
>
> Hi Sahina,
>
> First: sorry for my delayed response. I wasn't able to respond earlier.
>
> I already planned on adding the 4th node as a gluster client, so thank you
> for confirming that this works.
>
> What made me doubt is that VMs with a lot of storage I/O running on the 4th
> node have to write to all three replica-3 gluster nodes over the storage
> network, while a VM on one of the replica-3 nodes only has to replicate to
> the two other nodes, and so generates less network traffic.
>
> Does this make sense?
>
> And if it does: can that be an issue?
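>
> For a rough feel (assuming the gluster FUSE client on the hypervisor does
> the replication):
>
>     VM on a brick node:  1 local write + 2 remote writes -> ~2x the data on the wire
>     VM on the 4th node:  0 local + 3 remote writes       -> ~3x the data on the wire
>
> i.e. roughly 50% more replication traffic per write from the brick-less node.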
>
>
> IIUC, the 4th node that you add to the cluster serves only compute and adds
> no storage (brick) capacity. In that case, yes, all reads and writes go over
> the network - this is like a standard oVirt deployment where storage is
> accessed over the network (non-hyperconverged).
> While theoretically this looks like an issue, it may not be, as there are
> multiple factors affecting performance. You will need to measure the impact
> on guest performance when VMs run on this node and see if it is acceptable
> to you. One thing you could do is schedule VMs that do not have stringent
> performance requirements on the 4th node.
>
> There are also improvements planned in upcoming gluster releases (compound
> FOPs, libgfapi access) that should push I/O performance further, so whatever
> you see now should only get better.
>
>
>
> Regards,
>
> Bertjan
>
> On Fri, Sep 23, 2016 at 04:47:25PM +0530, Sahina Bose wrote:
> >
> >
> > On Fri, Sep 23, 2016 at 4:14 PM, Davide Ferrari <davide at billymob.com> wrote:
> >
> > I'm struggling with the same problem (I say struggling because I'm still
> > having stability issues in what I consider should be a stable cluster),
> > but you can (roughly as sketched below):
> > - create a replica 3 engine gluster volume
> > - create replica 2 data, iso and export volumes
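> >
> > A rough sketch of the volume creation (hostnames and brick paths are made
> > up; adjust to your layout):
> >
> >     gluster volume create engine replica 3 \
> >         host1:/gluster/engine/brick host2:/gluster/engine/brick \
> >         host3:/gluster/engine/brick
> >     gluster volume start engine
> >     # data/iso/export the same way, with the replica count and bricks
> >     # you settle on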
> >
> >
> > What are the stability issues you're facing? The data volume, if used as a
> > data storage domain, should be a replica 3 volume as well.
> >
> >
> >
> > Deploy the hosted-engine on the first host (using the engine volume) from
> > the CLI, then log in to the oVirt admin portal, enable gluster support,
> > install *and deploy* host2 and host3 from the GUI (where the engine bricks
> > are), and then install host4 without deploying. This should get you the 4
> > hosts online, but the engine will run only on the first 3.
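> >
> > For illustration, the CLI part is roughly (package name and answers depend
> > on your oVirt version):
> >
> >     # on host1, after the engine volume is created and started
> >     yum install ovirt-hosted-engine-setup
> >     hosted-engine --deploy
> >     # when asked for the storage, point it at the gluster engine volume,
> >     # e.g. host1:/engine
> >
> > host2-host4 are then added from the Administration Portal as described
> > above.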
> >
> >
> > Right. You can add the 4th node to the cluster without it contributing any
> > bricks to the volume, in which case VMs will run on this node but will
> > access their data from the other 3 nodes.
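> >
> > You can verify the layout afterwards with something like (volume name is
> > just an example):
> >
> >     gluster volume info data
> >     # should still report "Number of Bricks: 1 x 3 = 3", with bricks on the
> >     # first three nodes only, even though the cluster has 4 hosts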
> >
> >
> >
> > 2016-09-23 11:14 GMT+02:00 Goorkate, B.J. <b.j.goorkate at umcutrecht.nl>:
> >
> > Dear all,
> >
> > I've tried to find a way to add a 4th oVirt node to my existing
> > 3-node setup with replica-3 gluster storage, but have found no usable
> > solution yet.
> >
> > From what I read, it's not wise to create a replica-4 gluster volume
> > because of the bandwidth overhead.
> >
> > Is there a safe way to do this and still have 4 equal oVirt nodes?
> >
> > Thanks in advance!
> >
> > Regards,
> >
> > Bertjan
> >
> > --------------------------------------------------------------------------------
> >
> > This message may contain confidential information and is intended
> > exclusively for the addressee. If you receive this message unintentionally,
> > please do not use the contents but notify the sender immediately by return
> > e-mail. University Medical Center Utrecht is a legal person by public law
> > and is registered at the Chamber of Commerce for Midden-Nederland under
> > no. 30244197.
> >
> > Please consider the environment before printing this e-mail.
> > _______________________________________________
> > Users mailing list
> > Users at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
> >
> >
> >
> > --
> > Davide Ferrari
> > Senior Systems Engineer
> >
> >
> >
> >
>
>