Hi Rajat,

OK, I see. Well, just consider that Ceph will not work well in your setup unless you add at least one more physical machine. The same is true for oVirt if you are only using native NFS, as you lose real HA.

Having said this, of course you choose what is best or affordable for your site, but your setup looks quite fragile to me. Happy to help more if you need.

Regards,

	Alessandro
On 18 Dec 2016, at 18:22, rajatjpatel <rajatjpatel(a)gmail.com> wrote:

Alessandro,

Right now I don't have Cinder running in my setup. If Ceph doesn't work out, then I will get one VM running OpenStack all-in-one, connect all these disks to my OpenStack, and use Cinder to present the storage to my oVirt.

At the same time, I have not found a case study for this.

Regards,
Rajat

Regards,
Rajat Patel

http://studyhat.blogspot.com
FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...

> On Sun, Dec 18, 2016 at 9:17 PM, Alessandro De Salvo <Alessandro.DeSalvo@roma1.infn.it> wrote:
> Hi,
> oh, so you have only 2 physical servers? I understood they were 3! Well, in this case Ceph would not work very well: too few resources and too little redundancy. You could try replica 2, but it's not safe. Replica 3 could be forced, but you would end up with one server holding 2 replicas, which is dangerous/useless.
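[For reference, a hedged sketch of the pool-level knobs being discussed here, assuming a working cluster and a hypothetical pool named `rbd`; the point stands that with only 2 hosts and the default host-level CRUSH failure domain, a third replica has nowhere safe to go:]

```shell
# Force replica 3 on an existing pool (pool name "rbd" is illustrative).
# With only 2 hosts, the third replica ends up co-located on one host,
# which is the "dangerous/useless" situation described above.
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2

# The unsafe alternative: replica 2, one disk/host failure away from
# data loss and prone to inconsistency on flapping OSDs.
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1
```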
> Okay, so you use NFS as the storage domain, but in your setup HA is not guaranteed: if a physical machine goes down and it's the one where the storage domain resides, you are lost. Why not use Gluster instead of NFS for the oVirt disks? You can still reserve a small Gluster space for the non-Ceph machines (for example a Cinder VM) and use Ceph for the rest. Where do you have your Cinder running?
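[A hedged sketch of the kind of Gluster volume being suggested; hostnames and brick paths are made up, and a real replica-3 setup needs a third host (possibly a lightweight arbiter, via `replica 3 arbiter 1`):]

```shell
# Illustrative only: a replica-3 Gluster volume for the oVirt storage
# domain, spread across three hosts so any single host can fail.
gluster volume create ovirt-data replica 3 \
    host1:/bricks/ovirt-data \
    host2:/bricks/ovirt-data \
    host3:/bricks/ovirt-data
gluster volume start ovirt-data
```

oVirt would then consume this as a GlusterFS storage domain, e.g. `host1:/ovirt-data`.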
> Cheers,
>
>    Alessandro
>
>> On 18 Dec 2016, at 18:05, rajatjpatel <rajatjpatel(a)gmail.com> wrote:
>>
>> Hi Alessandro,
>>
>> Right now I have 2 physical servers where I host oVirt; these are HP ProLiant DL380s, each with 1 x 500GB SAS, 4 x 1TB SAS disks, and 1 x 500GB SSD. So right now I use only one disk, the 500GB SAS, to run oVirt on both servers; the rest are not in use. At present I am using NFS, which comes from a mapper to oVirt as storage; going forward we would like to use all these disks as hyper-converged storage for oVirt. In RH I could see there is a KB for using Gluster, but we are looking at Ceph because of its better performance and scale.
>>
>> <Screenshot from 2016-12-18 21-03-21.png>
>> Regards
>> Rajat
>>
>>> On Sun, Dec 18, 2016 at 8:49 PM, Alessandro De Salvo <Alessandro.DeSalvo(a)roma1.infn.it> wrote:
>>> Hi Rajat,
>>> sorry, but I do not really have a clear picture of your actual setup, can you please explain a bit more?
>>> In particular:
>>>
>>> 1) What do you mean by using 4TB for oVirt? On which machines, and how do you make it available to oVirt?
>>>
>>> 2) How do you plan to use Ceph with oVirt?
>>>
>>> I guess we can give more help if you clarify those points.
>>> Thanks,
>>>
>>>    Alessandro
>>>
>>>> On 18 Dec 2016, at 17:33, rajatjpatel <rajatjpatel(a)gmail.com> wrote:
>>>>
>>>> Great, thanks! Alessandro ++ Yaniv ++
>>>>
>>>> I want to use around 4 TB of SAS disk for my oVirt (which is going to be RHV 4.0.5 once the POC is 100% successful; in fact all products will be RH).
>>>>
>>>> I have done a lot of searching on DuckDuckGo for all these solutions and used a lot of references from ovirt.org and access.redhat.com for setting up the oVirt engine and hypervisor.
>>>>
>>>> We don't mind having more guests running and creating Ceph block storage, which will be presented to oVirt as storage. Gluster is not in use right now because we will have a DB running on a guest.
>>>>
>>>> Regards,
>>>> Rajat
>>>>
>>>>> On Sun, Dec 18, 2016 at 8:21 PM, Alessandro De Salvo <Alessandro.DeSalvo(a)roma1.infn.it> wrote:
>>>>> Hi,
>>>>> having a 3-node Ceph cluster is the bare minimum you can have to make it work, unless you want just a replica-2 mode, which is not safe.
>>>>> It's not true that Ceph is not easy to configure: you can very easily use ceph-deploy, have Puppet configure it, or even run it in containers. Using Docker is in fact the easiest solution; it really takes 10 minutes to bring a cluster up. I've tried it both with Jewel (official containers) and Kraken (custom containers), and it works pretty well.
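[A hedged sketch of what such a quick bring-up might look like; hostnames, disks, and IPs are placeholders, and the `ceph/daemon` image with these environment variables is from the ceph-docker project of the Jewel era, so details may differ in later releases:]

```shell
# Option 1: ceph-deploy from an admin node, three MON/OSD hosts assumed.
ceph-deploy new ceph1 ceph2 ceph3
ceph-deploy install ceph1 ceph2 ceph3
ceph-deploy mon create-initial
ceph-deploy osd create ceph1:sdb ceph2:sdb ceph3:sdb

# Option 2: a containerized monitor via the ceph/daemon image.
docker run -d --net=host \
    -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
    -e MON_IP=192.168.0.10 -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
    ceph/daemon mon
```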
>>>>> The real problem is not creating and configuring a Ceph cluster, but using it from oVirt, as that requires Cinder, i.e. a minimal setup of OpenStack. We have it and it's working pretty well, but it requires some work. For your reference, we have Cinder running on an oVirt VM using Gluster.
>>>>> Cheers,
>>>>>
>>>>>    Alessandro
>>>>>
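[For context, a minimal sketch of the Cinder side of such a setup, assuming a Ceph pool named `volumes` and a `client.cinder` keyring already in place; all names and the secret UUID are illustrative:]

```ini
# /etc/cinder/cinder.conf (fragment)
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>
```

oVirt 4.x can then consume this through an OpenStack Block Storage (Cinder) external provider.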
>>>>>> On 18 Dec 2016, at 17:07, Yaniv Kaul <ykaul(a)redhat.com> wrote:
>>>>>>
>>>>>> On Sun, Dec 18, 2016 at 3:29 PM, rajatjpatel <rajatjpatel(a)gmail.com> wrote:
>>>>>> Dear Team,
>>>>>>
>>>>>> We are using oVirt 4.0 for a POC, and I want to check what we are doing with all you oVirt gurus.
>>>>>>
>>>>>> We have 2 HP ProLiant DL380s with 500GB SAS, 4 x 1TB SAS disks, and a 500GB SSD.
>>>>>>
>>>>>> What we have done is install the oVirt hypervisor on this hardware, and we have a physical server where we run our oVirt manager. For the oVirt hypervisor we use only one 500GB HDD; the rest we have kept for Ceph, so we have 3 nodes for Ceph running as guests on oVirt. My question to you all is whether what I am doing is right or wrong.
>>>>>>
>>>>>> I think Ceph requires a lot more resources than the above. It's also a bit more challenging to configure. I would highly recommend a 3-node cluster with Gluster.
>>>>>> Y.
>>>>>>
>>>>>> Regards
>>>>>> Rajat
>>>>>>
>>>>>> _______________________________________________
>>>>>> Users mailing list
>>>>>> Users(a)ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>
>>>> --
>>>> Sent from my Cell Phone - excuse the typos & auto incorrect
>>>>