<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 27, 2016 at 8:59 PM, Goorkate, B.J. <span dir="ltr"><<a href="mailto:b.j.goorkate@umcutrecht.nl" target="_blank">b.j.goorkate@umcutrecht.nl</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Sahina,<br>
<br>
First: sorry for my delayed response. I wasn't able to respond earlier.<br>
<br>
I already planned on adding the 4th node as a gluster client, so thank you for<br>
confirming that this works.<br>
<br>
Why I was in doubt: VMs with a lot of storage I/O on the 4th node have to<br>
replicate their writes to 3 other hosts (the replica-3 gluster nodes) over the<br>
storage network, while a VM on one of the replica-3 gluster nodes only has to<br>
replicate to two other nodes, so it generates less network traffic.<br>
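For example, a 100 MB write from a VM on the 4th node puts roughly 300 MB on<br>
the storage network (three remote copies), while the same write from a VM on a<br>
replica-3 node puts roughly 200 MB on the wire (two remote copies, one local).<br>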
<br>
Does this make sense?<br>
<br>
And if it does: can that be an issue?<br></blockquote><div><br></div><div>IIUC, the 4th node that you add to the cluster serves only compute and adds no storage (brick) capacity. In that case, yes, all reads and writes go over the network - this is like a standard oVirt deployment where storage is accessed over the network (non-hyperconverged).<br></div><div>While theoretically this looks like an issue, it may not be, as there are multiple factors affecting performance. You will need to measure the impact on guest performance when VMs run on this node and see if it is acceptable to you. One thing you could do is schedule VMs that do not have stringent performance requirements on the 4th node.<br><br></div><div>There are also improvements planned in upcoming gluster releases (compound FOPs, libgfapi access) that should further improve I/O performance, so whatever you see now should only get better.<br></div><div><br></div>
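<div>One way to measure it is to run the same fio job inside a test VM on the 4th node and on one of the replica-3 nodes and compare the results (a minimal sketch; the job name, block size, and file size are arbitrary, and it assumes fio is installed in the guest):<br><br>fio --name=writetest --rw=randwrite --bs=4k --size=1G --ioengine=libaio --direct=1<br><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">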
<br>
Regards,<br>
<br>
Bertjan<br>
<div><div><br>
On Fri, Sep 23, 2016 at 04:47:25PM +0530, Sahina Bose wrote:<br>
><br>
><br>
> On Fri, Sep 23, 2016 at 4:14 PM, Davide Ferrari <<a href="mailto:davide@billymob.com" target="_blank">davide@billymob.com</a>> wrote:<br>
><br>
> I'm struggling with the same problem (I say struggling because I'm still<br>
> having stability issues in what I consider a stable cluster), but you can:<br>
> - create a replica 3 engine gluster volume<br>
> - create replica 2 data, iso and export volumes (sketched below)<br>
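> For example (a sketch, assuming the bricks live under /gluster/VOLNAME/brick<br>
> on hosts host1-host3):<br>
> gluster volume create engine replica 3 host1:/gluster/engine/brick host2:/gluster/engine/brick host3:/gluster/engine/brick<br>
> gluster volume create data replica 2 host1:/gluster/data/brick host2:/gluster/data/brick<br>
> gluster volume start engine<br>
> gluster volume start data<br>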
><br>
><br>
> What are the stability issues you're facing? The data volume, if used as a data<br>
> storage domain, should be a replica 3 volume as well.<br>
><br>
><br>
><br>
> Deploy the hosted-engine on the first host (with the engine volume) from the<br>
> CLI, then log in to the oVirt admin UI, enable gluster support, install *and<br>
> deploy* host2 and host3 from the GUI (where the engine bricks are), and then<br>
> install host4 without deploying. This should get you the 4 hosts online, but<br>
> the engine will run only on the first 3.<br>
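> The CLI step is just the interactive installer (a sketch; it prompts for the<br>
> storage path, e.g. host1:/engine, during setup):<br>
> hosted-engine --deploy<br>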
><br>
><br>
> Right. You can add the 4th node to the cluster without any bricks on the<br>
> volume, in which case VMs will run on this node but will access their data<br>
> from the other 3 nodes.<br>
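> You can verify this from any gluster node (a sketch, assuming the data volume<br>
> is named "data"):<br>
> gluster volume info data             # bricks listed only on host1-host3<br>
> gluster volume status data clients   # the 4th node appears as a client only<br>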
><br>
><br>
><br>
> 2016-09-23 11:14 GMT+02:00 Goorkate, B.J. <<a href="mailto:b.j.goorkate@umcutrecht.nl" target="_blank">b.j.goorkate@umcutrecht.nl</a>>:<br>
><br>
> Dear all,<br>
><br>
> I've tried to find a way to add a 4th oVirt node to my existing<br>
> 3-node setup with replica-3 gluster storage, but have found no usable<br>
> solution yet.<br>
><br>
> From what I read, it's not wise to create replica-4 gluster<br>
> storage, because of the bandwidth overhead.<br>
><br>
> Is there a safe way to do this and still have 4 equal oVirt nodes?<br>
><br>
> Thanks in advance!<br>
><br>
> Regards,<br>
><br>
> Bertjan<br>
><br>
><br>
><br>
><br>
><br>
> --<br>
> Davide Ferrari<br>
> Senior Systems Engineer<br>
><br>
><br>
><br>
><br>
</div></div></blockquote></div><br></div></div>