Hi,
Oh, so you have only 2 physical servers? I understood there were 3! Well, in that case Ceph would not work very well: too few resources and too little redundancy. You could try replica 2, but it's not safe. Replica 3 could be forced, but you would end up with one server holding 2 replicas, which is dangerous and largely useless.
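
Just to give an idea, a minimal sketch of forcing the replication level would be (the pool name is only an example, and again I would not recommend this on 2 hosts):

  ceph osd pool set rbd size 3
  ceph osd pool set rbd min_size 2

With only 2 OSD hosts and the default CRUSH rule (one copy per host) the third replica cannot be placed anywhere, so you would have to relax the rule and allow 2 copies on the same host, which is exactly the dangerous situation above.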
Okay, so you use NFS as the storage domain, but in your setup HA is not guaranteed: if the physical machine hosting the storage domain goes down, you are lost. Why not use Gluster instead of NFS for the oVirt disks? You can still reserve a small Gluster space for the non-Ceph machines (for example a Cinder VM) and use Ceph for the rest. Where do you have your Cinder running?
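
As a rough sketch of the Gluster side (hostnames and brick paths are placeholders, adapt them to your setup):

  gluster volume create vmstore replica 2 host1:/gluster/vmstore/brick host2:/gluster/vmstore/brick
  gluster volume start vmstore

With only 2 servers this is again a replica 2, so mind the split-brain risk; adding an arbiter brick on a third small machine would make it much safer.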
Cheers,
Alessandro
On 18 Dec 2016, at 18:05, rajatjpatel <rajatjpatel(a)gmail.com> wrote:

Hi Alessandro,
Right now I have 2 physical servers hosting oVirt. They are HP ProLiant DL380s, each with 1 x 500GB SAS disk, 4 x 1TB SAS disks and 1 x 500GB SSD. So far I am using only one 500GB SAS disk per server to run oVirt; the rest are not in use. At present I am using NFS, coming from a mapper, as the oVirt storage; going forward we would like to use all these disks in a hyper-converged setup for oVirt. I can see there is a Red Hat KB article for doing this with Gluster, but we are looking at Ceph because of its performance and scalability.
<Screenshot from 2016-12-18 21-03-21.png>

Regards
Rajat

Regards,
Rajat Patel

http://studyhat.blogspot.com
FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...
> On Sun, Dec 18, 2016 at 8:49 PM, Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> wrote:
> Hi Rajat,
> sorry, but I do not really have a clear picture of your actual setup; can you please explain a bit more?
> In particular:
>
> 1) What do you mean by using 4TB for oVirt? On which machines, and how do you make it available to oVirt?
>
> 2) How do you plan to use Ceph with oVirt?
>
> I guess we can give more help if you clarify those points.
> Thanks,
>
> Alessandro
>
>> On 18 Dec 2016, at 17:33, rajatjpatel <rajatjpatel(a)gmail.com> wrote:
>>
>> Great, thanks! Alessandro ++ Yaniv ++
>>
>> I want to use around 4TB of SAS disk for my oVirt (which is going to become RHV 4.0.5 once the POC is 100% successful; in fact, all products will be RH).
>>
>> I have done a lot of DuckDuckGo searching for all these solutions and used many references from ovirt.org & access.redhat.com for setting up the oVirt engine and hypervisors.
>>
>> We don't mind having more guests running to create Ceph block storage, which would then be presented to oVirt as storage. Gluster is not in use right now because we will have databases running on the guests.
>>
>> Regards
>> Rajat
>>
>>> On Sun, Dec 18, 2016 at 8:21 PM Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> wrote:
>>> Hi,
>>> having a 3-node Ceph cluster is the bare minimum you need to make it work, unless you want to run just a replica-2 mode, which is not safe.
>>> It's not true that Ceph is hard to configure: you can very easily use ceph-deploy, have Puppet configure it, or even run it in containers. Using Docker is in fact the easiest solution; it really takes about 10 minutes to bring a cluster up. I've tried it both with Jewel (official containers) and Kraken (custom containers), and it works pretty well.
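>>> For example, the ceph-deploy bootstrap is roughly the following (hostnames and the OSD disk are placeholders, and the exact OSD syntax depends on your release, so check the docs):
>>>
>>>   ceph-deploy new node1 node2 node3
>>>   ceph-deploy install node1 node2 node3
>>>   ceph-deploy mon create-initial
>>>   ceph-deploy osd create node1:/dev/sdb node2:/dev/sdb node3:/dev/sdb
>>>
>>> The all-in-one Docker demo image is even quicker if you just want to play with it.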
>>> The real problem is not creating and configuring a Ceph cluster, but using it from oVirt, as that requires Cinder, i.e. a minimal OpenStack setup. We have it and it's working pretty well, but it requires some work. For your reference, we have Cinder running on an oVirt VM using Gluster.
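>>> In case it helps, the Cinder side is essentially just an RBD backend in cinder.conf, along these lines (pool and user names are only examples):
>>>
>>>   [ceph]
>>>   volume_driver = cinder.volume.drivers.rbd.RBDDriver
>>>   rbd_pool = volumes
>>>   rbd_ceph_conf = /etc/ceph/ceph.conf
>>>   rbd_user = cinder
>>>   rbd_secret_uuid = <libvirt secret uuid>
>>>
>>> plus the usual OpenStack plumbing, and then you add Cinder to oVirt as an external provider.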
>>> Cheers,
>>>
>>> Alessandro
>>>
>>>> On 18 Dec 2016, at 17:07, Yaniv Kaul <ykaul(a)redhat.com> wrote:
>>>>
>>>> On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpatel(a)gmail.com> wrote:
>>>> Dear Team,
>>>>
>>>> We are using oVirt 4.0 for a POC, and I want to check what we are doing with all the oVirt gurus.
>>>>
>>>> We have 2 HP ProLiant DL380s, each with a 500GB SAS disk, 4 x 1TB SAS disks and a 500GB SSD.
>>>>
>>>> What we have done is install the oVirt hypervisor on this hardware, and we have a separate physical server running our oVirt manager. For the oVirt hypervisor we are using only one 500GB HDD; the rest we have kept for Ceph, so we have 3 nodes for Ceph running as guests on oVirt. My question to you all is whether what I am doing is right or wrong.
>>>>
>>>> I think Ceph requires a lot more resources than the above. It's also a bit more challenging to configure. I would highly recommend a 3-node cluster with Gluster.
>>>> Y.
>>>>
>>>> Regards
>>>> Rajat
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users(a)ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>> --
>> Sent from my Cell Phone - excuse the typos & auto incorrect