I'll try to answer some of these.

1) It's not a serious problem per se. The issue is that if one node goes down and you delete a file while the second node is down, the file will be restored when the second node comes back, which may leave orphaned files; whereas if you use 3 servers, they will use quorum to figure out what needs to be restored or deleted. Furthermore, your read and write performance may suffer, especially in comparison to having one replica of the file with striping.
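For reference, a three-way replicated volume with quorum enabled would look roughly like this from the gluster CLI (the volume name, hostnames and brick paths below are just placeholders, adjust them to your setup):

    gluster volume create data replica 3 \
        srv1:/gluster/data/brick srv2:/gluster/data/brick srv3:/gluster/data/brick
    # client-side quorum: writes require a majority of the replicas
    gluster volume set data cluster.quorum-type auto
    # server-side quorum: bricks are stopped if the peer loses quorum in the pool
    gluster volume set data cluster.server-quorum-type server
    gluster volume start data

The idea is that with three bricks, two up-to-date copies can outvote a stale one during self-heal instead of the stale brick quietly bringing deleted files back.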
2) See answer 1; just create the volume with one replica and only include the URIs for bricks on two of the hosts when you create it.
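If you do go with a 2-brick SSD volume on only two of the three servers, creating it would look something like this (again, hostnames and paths are only examples):

    # bricks only on the two Intel hosts that have SSDs
    gluster volume create ssd-data replica 2 intel1:/gluster/ssd/brick intel2:/gluster/ssd/brick
    # server-side quorum is counted across peers in the trusted pool,
    # so the third (AMD) server still helps arbitrate
    gluster volume set ssd-data cluster.server-quorum-type server
    gluster volume start ssd-data

One caveat: client-side quorum (cluster.quorum-type auto) on a 2-way replica effectively requires the first brick to stay up, so test that behaviour before relying on it.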
3) I think so, but I have never tried it; you just have to define it as a local storage domain.

4) Well, that's a philosophical question. You can in theory have two hosted engines on separate VMs on two separate physical boxes, but if for any reason they both go down you will "be living in interesting times" (as in the Chinese curse).
=3D"font-family:Prelude, Verdana, san-serif;">5) YES! And have more than on=
e.<br><br></span><span id=3D"signature"><div
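If you do put the engine on gluster, the volume underneath it would be created along these lines (volume name, hosts and paths are only examples; the ownership settings assume the usual vdsm/kvm uid/gid of 36):

    gluster volume create engine replica 3 \
        srv1:/gluster/engine/brick srv2:/gluster/engine/brick srv3:/gluster/engine/brick
    # apply the virt profile (quorum, eager locking, cache settings suited to VM images)
    gluster volume set engine group virt
    # let vdsm/qemu own the image files
    gluster volume set engine storage.owner-uid 36
    gluster volume set engine storage.owner-gid 36
    gluster volume start engine

The virt group bundles the quorum and caching options usually recommended for VM images, which saves setting them one by one.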
style=3D"font-family: arial, s=
ans-serif; font-size: 12px;color: #999999;">-- Sent from my HP
Pre3</div><b=
On Aug 28, 2014 9:39 AM, David King <david(a)rexden.us> wrote:
dir=3D"ltr">Hi,<div>=
<br></div><div>I am currently testing oVirt 3.4.3 + gluster 3.5.2 for
use i=
n my relatively small home office environment on a single host. =C2=A0I hav=
e 2 =C2=A0Intel hosts with SSD and magnetic disk and one AMD host with only=
magnetic disk. =C2=A0I have been trying to figure out the best way to conf=
igure my environment given my previous attempt with oVirt 3.3 encountered s=
torage issues.</div>
I will be hosting two types of VMs - VMs that can be tied to a particular system (such as a 3-node FreeIPA domain or some test VMs), and VMs which could migrate between systems for improved uptime.
The processor issue seems straightforward. Have a single datacenter with two clusters - one for the Intel systems and one for the AMD systems. Put VMs which need to live migrate on the Intel cluster. If necessary, VMs can be manually switched between the Intel and AMD clusters with some downtime.
The Gluster side of the storage seems less clear. The bulk of the Gluster-with-oVirt issues I have experienced and have seen on the list seem to involve two-node setups with 2 bricks in the Gluster volume.
So here are my questions:

1) Should I avoid 2-brick Gluster volumes?

2) What is the risk in having the SSD volumes with only 2 bricks given that there would be 3 Gluster servers? How should I configure them?
3) Is there a way to use local storage for a host-locked VM other than creating a Gluster volume with one brick?

4) Should I avoid using the hosted engine configuration? I do have an external VMware ESXi system to host the engine for now but would like to phase it out eventually.
5) If I do the hosted engine, should I make the underlying Gluster volume 3-brick replicated?

Thanks in advance for any help you can provide.

-David