<div dir="ltr"><div><div><div><div>Hi Andrew,<br><br></div>Yes..both on the same node...but i have 4 nodes of this type in the same cluster....So it should work or not ?? <br><br></div>1. 4 physical nodes with 12 bricks each(distributed replicated)...<br>
</div>2. The same all 4 nodes use for the compute purpose also...<br><br></div>Do i still require the VIP...or not ?? because i tested even the mount point node goes down...the VM will not pause and not affect...<br></div>
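One option I am wondering about instead of a VIP is gluster's
backup-volfile-servers mount option, since the mount address is only used
to fetch the volume layout. A minimal sketch (assuming a volume named vol1
and my node IPs; I believe the same option can go into the oVirt storage
domain's "Mount Options" field):

    # fetch the volfile from 10.10.10.1; fall back to .2/.3/.4 if it is down
    mount -t glusterfs \
      -o backup-volfile-servers=10.10.10.2:10.10.10.3:10.10.10.4 \
      10.10.10.1:/vol1 /mnt/vol1

Would that be enough, or is the VIP still the safer route?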
<div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Jul 4, 2014 at 1:18 PM, Andrew Lau <span dir="ltr"><<a href="mailto:andrew@andrewklau.com" target="_blank">andrew@andrewklau.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Or just localhost as your computer and storage are on the same box.<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
On Fri, Jul 4, 2014 at 2:48 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
> Hi Andrew,
>
> Thanks for the update. That means HA cannot work without a VIP in
> gluster, so it is better to use glusterfs with a VIP to take over the IP
> in case of a storage node failure...
>
>
> On Fri, Jul 4, 2014 at 12:35 PM, Andrew Lau <andrew@andrewklau.com> wrote:
>>
>> Don't forget to take quorum into consideration; that's something
>> people often forget.
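>>
>> A rough sketch of the usual quorum settings (assuming a volume named
>> vol1):
>>
>>   gluster volume set vol1 cluster.quorum-type auto
>>   gluster volume set vol1 cluster.server-quorum-type server
>>   gluster volume set all cluster.server-quorum-ratio 51%
>>
>> With server quorum on, bricks on a node that falls out of quorum get
>> shut down, so you trade availability for protection against split-brain.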
>>
>> The reason you're seeing the current behaviour is that gluster only uses
>> the initial IP address to fetch the volume details. After that it connects
>> directly to ONE of the servers, so in your 2-storage-server case there's
>> a 50% chance it won't go into the paused state.
>>
>> For the VIP, you could consider CTDB or keepalived, or even just use
>> localhost (as your storage and compute are all on the same machine).
>> For CTDB, check out
>> http://community.redhat.com/blog/2014/05/ovirt-3-4-glusterized/
>>
>> I have a BZ open regarding gluster VMs going into a paused state and not
>> being resumable, so it's something you should also consider. In my case, a
>> switch dies, the gluster volume goes away, and the VMs go into a paused
>> state but can't be resumed. Losing one server out of a cluster is a
>> different story, though.
>> https://bugzilla.redhat.com/show_bug.cgi?id=1058300
>>
>> HTH
>>
>> On Fri, Jul 4, 2014 at 11:48 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
>> > Hi,
>> >
>> > Thanks. Can you suggest a good how-to/article for glusterfs with
>> > ovirt?
>> >
>> > One strange thing: if I run both (compute & storage) on the same
>> > node, what's described in the quote below does not happen...
>> >
>> > ---------------------
>> > Right now, if 10.10.10.2 goes away, all your gluster mounts go away
>> > and your VMs get paused because the hypervisors can’t access the
>> > storage. Your gluster storage is still fine, but ovirt can’t talk to
>> > it because 10.10.10.2 isn’t there.
>> > ---------------------
>> >
>> > Even when 10.10.10.2 goes down, I can still access the gluster mounts
>> > and no VM pauses. I can access the VMs via ssh with no connection
>> > failure. The connection drops only when the SPM goes down and another
>> > node is elected as SPM (all the running VMs pause in that condition).
>> >
>> >
>> > On Fri, Jul 4, 2014 at 4:12 AM, Darrell Budic <darrell.budic@zenfire.com> wrote:
>> >>
>> >> You need to set up a virtual IP to use as the mount point; most people
>> >> use keepalived to provide a virtual IP via VRRP for this. Set up
>> >> something like 10.10.10.10 and use that for your mounts.
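>> >>
>> >> A minimal keepalived.conf sketch (interface name, router id, and
>> >> priority are placeholders to adjust per node):
>> >>
>> >>   vrrp_instance gluster_vip {
>> >>       state BACKUP
>> >>       interface eth0
>> >>       virtual_router_id 51
>> >>       priority 100            # give one node a higher value, e.g. 150
>> >>       virtual_ipaddress {
>> >>           10.10.10.10/24
>> >>       }
>> >>   }
>> >>
>> >> VRRP keeps 10.10.10.10 on the highest-priority node that's alive, and
>> >> you mount 10.10.10.10:/vol1 instead of any one server's address.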
>> >>
>> >> Right now, if 10.10.10.2 goes away, all your gluster mounts go away
>> >> and your VMs get paused because the hypervisors can’t access the
>> >> storage. Your gluster storage is still fine, but ovirt can’t talk to
>> >> it because 10.10.10.2 isn’t there.
>> >>
>> >> If the SPM goes down, the other hypervisor hosts will elect a new one
>> >> (under control of the ovirt engine).
>> >>
>> >> Same scenario if storage & compute are on the same server: you still
>> >> need a VIP address for the storage portion to serve as the mount point
>> >> so it’s not dependent on any one server.
>> >>
>> >> -Darrell
>> >>
>> >> On Jul 3, 2014, at 1:14 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
>> >>
>> >> Hi,
>> >>
>> >> I have some HA-related concerns about glusterfs with Ovirt. Let's say
>> >> I have 4 storage nodes with gluster bricks as below:
>> >>
>> >> 1. 10.10.10.1 to 10.10.10.4 with 2 bricks each, in a distributed
>> >> replicated architecture (a volume-creation sketch follows the note
>> >> below).
>> >> 2. Now attach this gluster storage to ovirt-engine with the following
>> >> mount point: 10.10.10.2:/vol1
>> >> 3. In my cluster I have 3 hypervisor hosts (10.10.10.5 to 10.10.10.7);
>> >> the SPM is on 10.10.10.5.
>> >> 4. What happens if 10.10.10.2 goes down? Can the hypervisor hosts
>> >> still access the storage?
>> >> 5. What happens if the SPM goes down?
>> >>
>> >> Note: What happens for points 4 & 5 if storage and compute are both
>> >> running on the same server?
>> >>
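>> >> For context, a sketch of how a volume like this would have been
>> >> created (brick paths are made up):
>> >>
>> >>   gluster volume create vol1 replica 2 \
>> >>     10.10.10.1:/bricks/b1 10.10.10.2:/bricks/b1 \
>> >>     10.10.10.3:/bricks/b1 10.10.10.4:/bricks/b1 \
>> >>     10.10.10.1:/bricks/b2 10.10.10.2:/bricks/b2 \
>> >>     10.10.10.3:/bricks/b2 10.10.10.4:/bricks/b2
>> >>
>> >> With replica 2, consecutive bricks pair into mirrors, and gluster
>> >> distributes files across the 4 pairs.
>> >>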
>> >> Thanks,
>> >> Punit
>> >> _______________________________________________
>> >> Users mailing list
>> >> Users@ovirt.org
>> >> http://lists.ovirt.org/mailman/listinfo/users