you can use flashcache under centos6, it's stable and gives you a boost for read/write, but I never used it with gluster: https://github.com/facebook/flashcache/

under fedora you have more choice: flashcache, bcache, dm-cache

regards
a
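for example, a minimal flashcache setup could look like this (device names, cache size and the writeback mode are only examples, not a tested gluster config, and the module has to be built and installed first):

  # load the module, then bind an SSD cache in front of a backing disk
  # ("-p back" = writeback, so writes are cached on the SSD as well as reads)
  modprobe flashcache
  flashcache_create -p back -s 200g gluster_cache /dev/sdb /dev/sdc
  # the resulting device-mapper target is what you mount as the brick
  mount /dev/mapper/gluster_cache /export/brick1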
Date: Wed, 8 Jan 2014 21:44:35 -0600
From: Darrell Budic <darrell.budic@zenfire.com>
To: Russell Purinton <russ@sonicbx.com>
Cc: "users@ovirt.org" <users@ovirt.org>
Subject: Re: [Users] SSD Caching
Message-ID: <A45059D4-B00D-4573-81E7-F00B2B9FA4AA@zenfire.com>
Content-Type: text/plain; charset="windows-1252"

Stick your bricks on ZFS and let it do it for you. Works well, although I haven't done much benchmarking of it. My test setup is described in the thread under [Users] Creation of preallocated disk with Gluster replication. I've seen some blog posts here and there about gluster on ZFS for this reason too.

 -Darrell
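a minimal sketch of the ZFS-backed brick Darrell describes above, with one SSD split between read cache (L2ARC) and intent log (pool layout, device names and hostnames are only examples):

  # pool of spinning disks, SSD partitions for L2ARC and SLOG
  zpool create tank mirror /dev/sdb /dev/sdc
  zpool add tank cache /dev/sdd1
  zpool add tank log /dev/sdd2
  zfs create tank/brick1
  # the dataset then serves as the gluster brick
  gluster volume create gv0 replica 2 host1:/tank/brick1 host2:/tank/brick1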
On Jan 7, 2014, at 9:56 PM, Russell Purinton <russ@sonicbx.com> wrote:

> [20:42] <sonicrose> is anybody out there using a good RAM+SSD caching system ahead of gluster storage?
> [20:42] <sonicrose> sorry if that came through twice
> [20:44] <sonicrose> im thinking about making the SSD one giant swap file then creating a very large ramdisk in virtual memory and using that as a block level cache for parts and pieces of virtual machine disk images
> [20:44] <sonicrose> then i think the memory managers would inherently play the role of storage tiering ie: keeping the hottest data in memory and the coldest data on swap
> [20:45] <sonicrose> everything i have seen today has been setup as "consumer" ===> network ====> SSD cache ====> real disks
> [20:45] <sonicrose> but i'd like to actually do "consumer" ===> RAM+SSD cache ===> network ===> real disks
> [20:46] <sonicrose> i realize doing a virtual memory disk means the cache will be cleared on every reboot, and I'm ok with that
> [20:47] <sonicrose> i know this can be done with NFS and cachefilesd(fscache), but how could something be integrated into the native gluster clients?
> [20:47] <sonicrose> i'd prefer not to have to access gluster via NFS
> [20:49] <sonicrose> any feedback from this room is greatly appreciated, getting someone started to build managed HA cloud hosting
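as a rough sketch, the swap-backed ramdisk idea quoted above could be assembled like this (sizes and device names are made up; the loop device would then be handed to a block cache such as flashcache or bcache):

  # SSD as swap, so cold tmpfs pages can spill onto it
  mkswap /dev/sdd && swapon /dev/sdd
  # the "ramdisk in virtual memory": a sparse file on tmpfs, exposed as a block device
  mount -t tmpfs -o size=200g tmpfs /mnt/vmcache
  truncate -s 200G /mnt/vmcache/cache.img
  losetup /dev/loop0 /mnt/vmcache/cache.img
  # /dev/loop0 is now a pageable RAM+SSD cache device (cleared on reboot)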
From: users-bounces@ovirt.org
To: users@ovirt.org
Cc:
Date: Thu, 09 Jan 2014 02:34:48 -0500
Subject: Users Digest, Vol 28, Issue 61
> Send Users mailing list submissions to
>     users@ovirt.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>     http://lists.ovirt.org/mailman/listinfo/users
> or, via email, send a message with subject or body 'help' to
>     users-request@ovirt.org
>
> You can reach the person managing the list at
>     users-owner@ovirt.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Users digest..."
>
>
> Today's Topics:
>
>    1. Re: SSD Caching (Darrell Budic)
>    2. Re: Ovirt DR setup (Hans Emmanuel)
>    3. Re: Experience with low cost NFS-Storage as VM-Storage?
>       (Markus Stockhausen)
>    4. Re: Experience with low cost NFS-Storage as VM-Storage?
>       (Karli Sjöberg)
>    5. Re: Experience with low cost NFS-Storage as VM-Storage? (squadra)
>
>
> ----------------------------------------------------------------------
> Message: 1
> Date: Wed, 8 Jan 2014 21:44:35 -0600
> From: Darrell Budic <darrell.budic@zenfire.com>
> To: Russell Purinton <russ@sonicbx.com>
> Cc: "users@ovirt.org" <users@ovirt.org>
> Subject: Re: [Users] SSD Caching
> Message-ID: <A45059D4-B00D-4573-81E7-F00B2B9FA4AA@zenfire.com>
> Content-Type: text/plain; charset="windows-1252"
>
> Stick your bricks on ZFS and let it do it for you. Works well, although I haven't done much benchmarking of it. My test setup is described in the thread under [Users] Creation of preallocated disk with Gluster replication. I've seen some blog posts here and there about gluster on ZFS for this reason too.
>
>  -Darrell
>
> On Jan 7, 2014, at 9:56 PM, Russell Purinton <russ@sonicbx.com> wrote:
>
> > [20:42] <sonicrose> is anybody out there using a good RAM+SSD caching system ahead of gluster storage?
> > [20:42] <sonicrose> sorry if that came through twice
> > [20:44] <sonicrose> im thinking about making the SSD one giant swap file then creating a very large ramdisk in virtual memory and using that as a block level cache for parts and pieces of virtual machine disk images
> > [20:44] <sonicrose> then i think the memory managers would inherently play the role of storage tiering ie: keeping the hottest data in memory and the coldest data on swap
> > [20:45] <sonicrose> everything i have seen today has been setup as "consumer" ===> network ====> SSD cache ====> real disks
> > [20:45] <sonicrose> but i'd like to actually do "consumer" ===> RAM+SSD cache ===> network ===> real disks
> > [20:46] <sonicrose> i realize doing a virtual memory disk means the cache will be cleared on every reboot, and I'm ok with that
> > [20:47] <sonicrose> i know this can be done with NFS and cachefilesd(fscache), but how could something be integrated into the native gluster clients?
> > [20:47] <sonicrose> i'd prefer not to have to access gluster via NFS
> > [20:49] <sonicrose> any feedback from this room is greatly appreciated, getting someone started to build managed HA cloud hosting
> > _______________________________________________
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
>
> ------------------------------
> Message: 2
> Date: Thu, 9 Jan 2014 10:34:26 +0530
> From: Hans Emmanuel <hansemmanuel@gmail.com>
> To: users@ovirt.org
> Subject: Re: [Users] Ovirt DR setup
> Message-ID:
> Content-Type: text/plain; charset="iso-8859-1"
>
> Could anyone please give me some suggestions?
>
>
> On Wed, Jan 8, 2014 at 11:39 AM, Hans Emmanuel wrote:
>
> > Hi all,
> >
> > I would like to know about the possibility of setting up a Disaster
> > Recovery (DR) site for an Ovirt cluster, i.e. if site 1 goes down I need
> > to trigger site 2 to come into action with minimal down time.
> >
> > I am open to using NFS shared storage or local storage for the data
> > storage domain. I know we need to replicate the storage domain and the
> > Ovirt confs and DB across the sites, but couldn't find any doc for the
> > same; isn't that possible with Ovirt?
> >
> > *Hans Emmanuel*
> >
> >
> > *NOthing to FEAR but something to FEEL......*
> >
> >
>
>
> --
> *Hans Emmanuel*
>
> *NOthing to FEAR but something to FEEL......*
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
>
> ------------------------------
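on the DR question above, a rough sketch of one way to do it, assuming the engine-backup tool that recent oVirt versions ship plus plain rsync (paths and hostnames are examples, not a tested procedure):

  # on site 1: dump engine config + DB, then ship it and the storage domain to site 2
  engine-backup --mode=backup --scope=all --file=/backup/engine.tar.bz2 --log=/backup/engine.log
  rsync -a /backup/ site2:/backup/
  rsync -a /exports/data/ site2:/exports/data/
  # on site 2 after a disaster: restore the engine and bring it up
  engine-backup --mode=restore --file=/backup/engine.tar.bz2 --log=/backup/restore.log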
3=0A> Date: Thu, =
9 Jan 2014 07:10:07 +0000=0A> From: Markus Stockhausen
=0A> To: squadra ,=
"users(a)ovirt.org" =0A> Subject: Re: [Users] Experience with low cost NFS=
-Storage as=0A> VM-Storage?=0A> Message-ID:=0A> <12EF8D94C6F8734FB2FF37=
B9FBEDD173585B991E(a)EXCHANGE.collogia.de>=0A> Content-Type: text/plain; ch=
arset=3D"us-ascii"=0A> =0A> > Von: users-bounces(a)ovirt.org
[users-bounces=
@ovirt.org]" im Auftrag von "squadra [squadra(a)gmail.com]=0A> > Gesendet:
=
Mittwoch, 8. Januar 2014 17:15=0A> > An: users(a)ovirt.org=0A> > Betreff: R=
e: [Users] Experience with low cost NFS-Storage as VM-Storage?=0A> >=0A> =
better go for iscsi or something else... i whould avoid nfs for vm
host=
ing=0A> > Freebsd10 delivers kernel iscsitarget now, which works great
so=
far. or go with omnios to get comstar iscsi, which is a rocksolid soluti=
on=0A> >=0A> > Cheers,=0A> > =0A> > Juergen=0A> =0A> That is
usually a ma=
tter of taste and the available environment. =0A> The minimal differences=
in performance usually only show up=0A> if you drive the storage to its =
limits. I guess you could help Sven =0A> better if you had some hard fact=
s why to favour ISCSI. =0A> =0A> Best regards.=0A> =0A> Markus=0A> ------=
-------- next part --------------=0A> An embedded and charset-unspecified=
text was scrubbed...=0A> Name: InterScan_Disclaimer.txt=0A> URL: =0A> =0A=
> ------------------------------
>
> Message: 4
> Date: Thu, 9 Jan 2014 07:30:56 +0000
> From: Karli Sjöberg <karli.sjoberg@slu.se>
> To: "stockhausen@collogia.de" <stockhausen@collogia.de>
> Cc: "squadra@gmail.com" <squadra@gmail.com>, "users@ovirt.org"
>     <users@ovirt.org>
> Subject: Re: [Users] Experience with low cost NFS-Storage as
>     VM-Storage?
> Message-ID: <5F9E965F5A80BC468BE5F40576769F095AFE3369@exchange2-1>
> Content-Type: text/plain; charset="utf-8"
>
> On Thu, 2014-01-09 at 07:10 +0000, Markus Stockhausen wrote:
> > > From: users-bounces@ovirt.org [users-bounces@ovirt.org] on behalf of squadra [squadra@gmail.com]
> > > Sent: Wednesday, 8 January 2014 17:15
> > > To: users@ovirt.org
> > > Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
> > >
> > > better go for iscsi or something else... i would avoid nfs for vm hosting
> > > FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. or go with omnios to get comstar iscsi, which is a rocksolid solution
> > >
> > > Cheers,
> > >
> > > Juergen
> >
> > That is usually a matter of taste and the available environment.
> > The minimal differences in performance usually only show up
> > if you drive the storage to its limits. I guess you could help Sven
> > better if you had some hard facts why to favour iSCSI.
> >
> > Best regards.
> >
> > Markus
>
> The only technical difference I can think of is the iSCSI-level
> load-balancing. With NFS you set up the network with LACP and let that
> load-balance for you (and you should probably do that with iSCSI as well,
> but you don't strictly have to). I think it is the chance of pushing
> beyond the capacity of one network interface at the same time, from one
> host (higher bandwidth), that makes people try iSCSI instead of plain
> NFS. I have tried that but was never able to achieve that effect, so in
> our situation there's no difference. Comparing them both in benchmarks,
> there was no performance difference at all, at least for our storage
> systems that are based on FreeBSD.
>
> /K
>
> ------------------------------
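for reference, the LACP setup Karli mentions is just an 802.3ad bond on each host; an EL6-style sketch (interface names and addresses are examples, and the switch ports must be configured for LACP as well):

  # /etc/sysconfig/network-scripts/ifcfg-bond0  (mode 802.3ad = LACP)
  DEVICE=bond0
  BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
  IPADDR=10.0.0.10
  NETMASK=255.255.255.0
  ONBOOT=yes
  BOOTPROTO=none

  # /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none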
> Message: 5
> Date: Thu, 9 Jan 2014 08:34:44 +0100
> From: squadra <squadra@gmail.com>
> To: Markus Stockhausen <stockhausen@collogia.de>
> Cc: users@ovirt.org
> Subject: Re: [Users] Experience with low cost NFS-Storage as
>     VM-Storage?
> Message-ID:
> Content-Type: text/plain; charset="utf-8"
>
> There are already enough articles on the web about NFS problems related
> to locking, latency, etc.... Eh, stacking one protocol onto another to
> fix a problem, and then maybe one more to glue them together.
>
> Google for the SUSE PDF "Why NFS sucks"; I don't agree with the whole
> sheet, NFS has its place too. But not as a production filer for VMs.
>
> Cheers,
>
> Juergen, the NFS lover
> On Jan 9, 2014 8:10 AM, "Markus Stockhausen" wrote:
>
> > > From: users-bounces@ovirt.org [users-bounces@ovirt.org] on behalf of squadra [squadra@gmail.com]
> > > Sent: Wednesday, 8 January 2014 17:15
> > > To: users@ovirt.org
> > > Subject: Re: [Users] Experience with low cost NFS-Storage as VM-Storage?
> > >
> > > better go for iscsi or something else... i would avoid nfs for vm hosting
> > > FreeBSD 10 delivers a kernel iSCSI target now, which works great so far. or go with omnios to get comstar iscsi, which is a rocksolid solution
> > >
> > > Cheers,
> > >
> > > Juergen
> >
> > That is usually a matter of taste and the available environment.
> > The minimal differences in performance usually only show up
> > if you drive the storage to its limits. I guess you could help Sven
> > better if you had some hard facts why to favour iSCSI.
> >
> > Best regards.
> >
> > Markus
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
>
> ------------------------------
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
> End of Users Digest, Vol 28, Issue 61
> *************************************