Hi Juan,
Thanks for your info. I'll try to test FreeNAS with compression. Do you use it with iSCSI or NFS?
JoseFrom: "Juan Jose" <jj197005@gmail.com>
To: suporte@logicworks.pt, users@ovirt.org
Sent: Monday, 3 June 2013 13:37:21
Subject: Re: [Users] deduplication

Hello Jose,

We also have FreeNAS working in our infrastructure, with about 3 TB and ZFS. Some of the pools have compression enabled and you can save space with it. We have this FreeNAS connected to a Xen hypervisor and it works very well; it's stable and reliable. We have nine virtual servers, some virtualized and others paravirtualized, and some Windows Server machines, all about 2 years in production without any problem. My idea is to connect this infrastructure to oVirt, to be able to have some resources for test VMs there. I only wanted to share this as another FreeNAS success story.

Juanjo.

On Fri, May 31, 2013 at 12:33 PM, <suporte@logicworks.pt> wrote:
Thanks a lot Karli, you made deduplication much clearer to me; once again we cannot have the best of both worlds.
I'll try FreeNAS despite my poor knowledge of FreeBSD. Openfiler, running on Linux, has no better performance but supports DRBD.
JoseFrom: "Karli Sjöberg" <Karli.Sjoberg@slu.se>Sent: Sexta-feira, 31 de Maio de 2013 10:45:41
To: suporte@logicworks.pt
Cc: "Jiri Belka" <jbelka@redhat.com>, users@ovirt.org
Subject: Re: [Users] deduplication
On Fri 2013-05-31 at 09:50 +0100, suporte@logicworks.pt wrote:
So, we can say that dedup has more disadvantages than advantages.
For a primary system: most definitely, yes.
But for a backup system that has tons of RAM and SSDs for cache, and where you have lots of virtual machines that are based on the same template, or are very much alike, then you have a real use case. I'm active on the FreeBSD forums, where one person reports storing 150TB of data in only 30TB of physical disk. The best practice is to scrub once a week on "enterprise" systems, though he is only able to do it once a month, because that's how long a scrub takes to complete on that system. So you've got to choose performance or savings, you can't have both.
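To put rough numbers on why dedup wants "tons of RAM": ZFS keeps a dedup table (DDT) entry of roughly 320 bytes per unique block in core, which is a commonly cited rule of thumb rather than an exact figure. Here is a minimal back-of-the-envelope sketch in Python, assuming a 64 KiB average block size (both numbers are assumptions, not measurements from that 150TB system):

# Back-of-the-envelope estimate of ZFS dedup table (DDT) memory needs.
# ~320 bytes per unique block and a 64 KiB average block size are
# rule-of-thumb assumptions, not properties of any particular pool.

def ddt_ram_gib(unique_data_tib: float,
                avg_block_kib: float = 64.0,
                bytes_per_entry: int = 320) -> float:
    """Estimate the RAM (in GiB) needed to keep the whole DDT in core."""
    blocks = unique_data_tib * 1024**3 / avg_block_kib   # TiB -> KiB -> blocks
    return blocks * bytes_per_entry / 1024**3            # bytes -> GiB

if __name__ == "__main__":
    for tib in (1, 10, 30):
        print(f"{tib:>2} TiB unique data -> ~{ddt_ram_gib(tib):.0f} GiB of DDT")

At 30 TiB of unique data that already works out to around 150 GiB just for the table, which is why the RAM and SSD cache mentioned above is not optional.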
And what about NetApp's dedup?
A much better implementation, in my opinion. You are able to schedule dedup runs to go at night so your users' performance isn't impacted, and you still get the savings. The question is whether you value the savings enough to take on the price tag that is NetApp. Or just build your own FreeBSD/ZFS server with compression enabled and buy standard HDDs from anywhere... We did ;)
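For the do-it-yourself FreeBSD/ZFS-with-compression route, checking how much space compression actually saves is just a matter of reading the compressratio property back. A minimal sketch, assuming the zfs CLI is available on the storage host; the pool name "tank" is only an example:

# List the compression ratio of every dataset under a pool via the zfs CLI.
# Run on the storage host itself; requires permission to run `zfs get`.
import subprocess

def compression_report(pool: str = "tank") -> None:
    out = subprocess.run(
        ["zfs", "get", "-r", "-H", "-o", "name,value", "compressratio", pool],
        check=True, capture_output=True, text=True,
    ).stdout
    for line in out.splitlines():
        name, ratio = line.split("\t")
        print(f"{name}: compressratio {ratio}")

if __name__ == "__main__":
    compression_report("tank")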
/Karli
Jose
From: "Karli Sjöberg" <Karli.Sjoberg@slu.se>
To: suporte@logicworks.pt
Cc: "Jiri Belka" <jbelka@redhat.com>, users@ovirt.org
Sent: Thursday, 30 May 2013 8:33:19
Subject: Re: [Users] deduplication
On Wed 2013-05-29 at 09:59 +0100, suporte@logicworks.pt wrote:
Absolutely agree with you, planning is the best thing to do, but normally people want a plug-and-play system with everything included, because there is not much time to think and plan, and there are many companies that know how to take advantage of that.
Anyway, I think another solution for dedup is FreeNAS using ZFS.
FreeNAS is just FreeBSD with a fancy web UI on top, so it's neither more nor less of ZFS than you would have otherwise. And regarding dedup in ZFS: just don't, it's not worth it! It's said that it may increase performance when you have a very suitable use case, e.g. everything exactly the same over and over. What's not said is that scrubbing and resilvering slow down to a crawl (from hundreds of MB/s, or GB/s if your system is large enough, down to less than 10), just from dedup. Also, deleting snapshots of datasets that have (or have had) dedup on can kill the entire system, and when I say kill, I mean really fubar. Been there, regretted that... Compression, on the other hand, you get basically for free and it gives decent savings; I highly recommend it.
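Turning compression on is a one-property change per dataset. A small sketch of what that could look like; the dataset name "tank/vmstore" is hypothetical, and lz4 assumes a ZFS version that supports it (older ones would use lzjb):

# Enable compression on a dataset and show the resulting properties.
# Needs root (or delegated zfs permissions) on the storage host.
import subprocess

def enable_compression(dataset: str, algo: str = "lz4") -> None:
    subprocess.run(["zfs", "set", f"compression={algo}", dataset], check=True)
    subprocess.run(["zfs", "get", "compression,compressratio", dataset], check=True)

if __name__ == "__main__":
    enable_compression("tank/vmstore")

Note that only blocks written after the property is set are compressed; existing data stays as it is until rewritten, so it pays to enable compression before loading the datastore.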
/Karli
Jose
From: "Jiri Belka" <jbelka@redhat.com>
To: suporte@logicworks.pt
Cc: users@ovirt.org
Sent: Wednesday, 29 May 2013 7:33:10
Subject: Re: [Users] deduplication
On Tue, 28 May 2013 14:29:05 +0100 (WEST)
suporte@logicworks.pt wrote:
> That's why I'm asking these questions, to demystify some of the buzzwords around here.
> But if you have a strong, good technology, why not create buzzwords to reach as many people as possible, without trapping them?
> Sharing a disk containing "static" data is a good idea; do you know where I can start?
Everything depends on your needs and design planning. Sharing the disk
might then best be done via NFS/iSCSI. Of course, if you have many VMs
and each of them is different, this will fail. But if you have a mostly
homogeneous environment, you can think about this approach. You do have
to have a plan for upgrading the "base" "static" shared OS data, and a
plan for how to install additional software (to a different destination
than /usr or /usr/local)... If you already have your own build host
that builds OS packages for you, and you already have your own
deployment plan, you have taken the first steps. If you depend on
upgrading each machine separately from the Internet, then you should
first plan your environment, configuration management, etc.
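As a rough illustration of that approach (every hostname and path below is made up), a VM could mount the shared "static" tree read-only over NFS and keep any additional software in its own writable prefix:

# Mount a shared, read-only base software tree over NFS and keep a
# separate writable prefix for per-VM additions. All names are examples.
import os
import subprocess

NFS_SERVER = "storage.example.com"        # hypothetical NFS server
SHARED_EXPORT = "/export/base/usr-local"  # hypothetical read-only export
SHARED_MOUNTPOINT = "/usr/local"          # same shared tree on every VM
WRITABLE_PREFIX = "/opt/site"             # per-VM destination for extra software

def mount_shared_base() -> None:
    os.makedirs(SHARED_MOUNTPOINT, exist_ok=True)
    os.makedirs(WRITABLE_PREFIX, exist_ok=True)
    # Read-only, so no single VM can change the base for all the others.
    subprocess.run(
        ["mount", "-t", "nfs", "-o", "ro,nosuid",
         f"{NFS_SERVER}:{SHARED_EXPORT}", SHARED_MOUNTPOINT],
        check=True,
    )

if __name__ == "__main__":
    mount_shared_base()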
Well, many times people do not do any planning; they just think some
good technology will save their "poor" design.
j.
--
With kind regards
-------------------------------------------------------------------------------
Karli Sjöberg
Swedish University of Agricultural Sciences
Box 7079 (Visiting Address Kronåsvägen 8)
S-750 07 Uppsala, Sweden
Phone: +46-(0)18-67 15 66
karli.sjoberg@slu.se
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users