Hi Colin,
I am in Portugal, so sorry for the late response. It is quite confusing for me; please consider:
1 - What if the RAID is done by the server's disk controller, not by software?
2 - For JBOD I am just using gdeploy to deploy it. However, I am not using the oVirt node GUI to do this.
3 - As the VM .qcow2 files are quite big, tiering would only help if done by an intelligent system that uses the SSD for chunks of data, not for the entire .qcow2 file. But I guess this is a problem everybody else has. So, do you know how tiering works in Gluster?
4 - I am putting the OS on the first disk. However, would you do differently?
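For reference, the gdeploy approach I mean is a config file along these lines (a minimal sketch; the section and key names follow gdeploy's sample configs, and the hostnames and device names here are placeholders):

```ini
[hosts]
servera.example.com
serverb.example.com
serverc.example.com

# JBOD: one volume group/thin pool/brick per physical disk, no RAID underneath
[backend-setup]
devices=sdb,sdc,sdd
vgs=vg_sdb,vg_sdc,vg_sdd
pools=pool_sdb,pool_sdc,pool_sdd
lvs=lv_sdb,lv_sdc,lv_sdd
mountpoints=/gluster/brick1,/gluster/brick2,/gluster/brick3
brick_dirs=/gluster/brick1/b1,/gluster/brick2/b2,/gluster/brick3/b3
```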
Moacir
________________________________
From: Colin Coe <colin.coe(a)gmail.com>
Sent: Monday, August 7, 2017 4:48 AM
To: Moacir Ferreira
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Good practices
1) RAID5 may be a performance hit.
2) I'd be inclined to do this as JBOD by creating a distributed disperse volume on each server. Something like
echo gluster volume create dispersevol disperse-data 5 redundancy 2 \
$(for SERVER in a b c; do for BRICK in $(seq 1 5); do echo -e "server${SERVER}:/brick/brick-${SERVER}${BRICK}/brick \c"; done; done)
3) I think the above.
4) Gluster does support tiering, but IIRC you'd need the same number of SSDs as spindle drives. There may be another way to use the SSD as a fast cache.
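On the disk-space side of the question: a dispersed volume's usable fraction is data bricks divided by total bricks, so the disperse-data 5 redundancy 2 layout above keeps roughly 5/7 (~71%) of raw capacity while tolerating two failed bricks, versus ~33% for replica 3. A quick arithmetic sketch:

```shell
# Usable fraction of raw capacity for a dispersed volume:
#   data bricks / (data bricks + redundancy bricks)
DATA=5
REDUNDANCY=2
USABLE_PCT=$(( 100 * DATA / (DATA + REDUNDANCY) ))
echo "disperse-data ${DATA} redundancy ${REDUNDANCY}: ~${USABLE_PCT}% usable"

# For comparison, replica 3 keeps one copy out of three:
echo "replica 3: ~$(( 100 / 3 ))% usable"
```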
Where are you putting the OS?
Hope I understood the question...
Thanks
On Sun, Aug 6, 2017 at 10:49 PM, Moacir Ferreira <moacirferreira(a)hotmail.com> wrote:
I am willing to assemble an oVirt "pod" made of 3 servers, each with 2 CPU sockets of 12 cores, 256GB RAM, 7 10K HDDs and 1 SSD. The idea is to use GlusterFS to provide HA for the VMs. The 3 servers have a dual 40Gb NIC and a dual 10Gb NIC. So my intention is to create a loop, like a server triangle, using the 40Gb NICs for virtualization file (VM .qcow2) access and to move VMs around the pod (east/west traffic), while using the 10Gb interfaces for giving services to the outside world (north/south traffic).
This said, my first question is: how should I deploy GlusterFS in such an oVirt scenario? My questions are:
1 - Should I create 3 RAID arrays (e.g. RAID 5), one on each oVirt node, and then create a GlusterFS volume using them?
2 - Instead, should I create a JBOD array made of all the servers' disks?
3 - What is the best Gluster configuration to provide HA while not consuming too much disk space?
4 - Does an oVirt hypervisor pod like the one I am planning to build, and the virtualization environment, benefit from tiering when using an SSD disk? And if yes, will Gluster do it by default or do I have to configure it to do so?
Bottom line: what is good practice for using GlusterFS in small pods for enterprises?
Your opinion/feedback will be really appreciated!
Moacir
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users