Hi Yaniv,

On 18 Dec 2016, at 17:37, Yaniv Kaul <ykaul@redhat.com> wrote:

> On Sun, Dec 18, 2016 at 6:21 PM, Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> wrote:
> Hi,
> having a 3-node Ceph cluster is the bare minimum you need to make it work, unless you want to have just a replica-2 mode, which is not safe.

How well does it perform?

One of the Ceph clusters we use had exactly this setup: 3 DELL R630 (Ceph Jewel), 6 1TB NL-SAS disks, so 3 mons and 6 osds. We bound the cluster network to a dedicated 1Gbps interface. I can say it works pretty well: the performance reaches up to 100MB/s per rbd device, which is the expected maximum for the network connection. Resiliency is also pretty good; we can lose 2 osds (i.e. a full machine) without impacting the performance.
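
To give an idea of the relevant bits, the ceph.conf for a setup like this boils down to roughly the following (hostnames, subnets and the fsid here are placeholders, not our real ones):

[global]
fsid = <cluster uuid>
mon initial members = node1, node2, node3
mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3
public network = 10.0.0.0/24
# replication traffic goes over the dedicated 1Gbps interface
cluster network = 10.0.1.0/24
# replica-3 pools; I/O keeps flowing as long as 2 copies are available
osd pool default size = 3
osd pool default min size = 2

With size 3 and min_size 2 the cluster keeps serving I/O with a whole node down, which matches what we see.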

> It's not true that Ceph is not easy to configure: you can very easily use ceph-deploy, have Puppet configure it, or even run it in containers. Using Docker is in fact the easiest solution; it really takes 10 minutes to bring a cluster up. I've tried it both with Jewel (official containers) and Kraken (custom containers), and it works pretty well.

This could be a great blog post on the ovirt.org site - care to write something describing the configuration and setup?

Oh sure, if it's of general interest I'll be glad to. How can I do it? :-)
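
Just to sketch what I mean by the Docker route (this is from memory, so treat the image name and variables as indicative rather than exact), with the ceph/daemon image it is roughly:

# one monitor per node (repeat on each of the 3 nodes, adjusting MON_IP)
docker run -d --net=host \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
  -e MON_IP=10.0.0.1 -e CEPH_PUBLIC_NETWORK=10.0.0.0/24 \
  ceph/daemon mon

# one OSD container per disk, with access to the raw device
docker run -d --net=host --privileged=true \
  -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph -v /dev:/dev \
  -e OSD_DEVICE=/dev/sdb \
  ceph/daemon osd

The rest is mostly copying the generated /etc/ceph config and bootstrap keys to the other nodes.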

Cheers,

    Alessandro

Y.

> The real problem is not creating and configuring a Ceph cluster, but using it from oVirt, as that requires Cinder, i.e. a minimal setup of OpenStack. We have it and it's working pretty well, but it requires some work. For your reference, we have Cinder running on an oVirt VM using Gluster.
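
To expand on that a bit: once a pool and a cinder keyring exist on the Ceph side, the Cinder part is essentially an RBD backend in cinder.conf, something along these lines (option names from memory, values are placeholders):

[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = <uuid of the libvirt secret on the hypervisors>

oVirt then talks to Cinder as an external provider and attaches the volumes via librbd.
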
> Cheers,
>
> Alessandro
>
>> On 18 Dec 2016, at 17:07, Yaniv Kaul <ykaul@redhat.com> wrote:
>>
>>> On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpatel@gmail.com> wrote:
>>> Dear Team,
>>>
>>> We are using oVirt 4.0 for a POC, and I want to check what we are doing with all of the oVirt gurus.
>>>
>>> We have 2 HP ProLiant DL380 servers, each with a 500GB SAS disk, 4x 1TB SAS disks, and a 500GB SSD.
>>>
>>> What we have done is install the oVirt hypervisor on this hardware, and we have a physical server where we run our oVirt manager. For the oVirt hypervisor we are using only one 500GB HDD; the rest we have kept for Ceph, so we have 3 nodes for Ceph running as guests on oVirt. My question to you all is whether what I am doing is right or wrong.
>>
>> I think Ceph requires a lot more resources than above. It's also a bit more challenging to configure. I would highly recommend a 3-node cluster with Gluster.
>> Y.
>>
>>>
>>> Regards
>>> Rajat