On Mon, Feb 18, 2019 at 2:00 PM Markus Schaufler <markus.schaufler(a)digit-all.at> wrote:
Hi all,
I've got a design question:
Are there any best practices regarding NFS datastores, especially datastore
sizing? Should I use one big NFS datastore and expand it on demand, or
should the size not exceed a certain limit, after which I start a new NFS
datastore?
Unless you have a reason to use multiple storage domains, like separating
storage for different groups (production, testing, development), fewer
storage domains are better. Every storage domain adds monitoring and an
ioprocess child process, so you don't want to create many storage domains
for no reason.
We don't have any limit on the size of a storage domain. We do support up
to 50 storage domains.
Are there any other configuration considerations (NFS v3 or v4(.1), and
mounting options)?
I think the only NFS version that should be used now is 4.2. It gives
*huge* performance improvements because it supports sparseness.
Here are some examples:
- Creating a preallocated disk is instant, instead of taking minutes or
hours (depending on disk size) as with NFS < 4.2.
- Copying preallocated disks can be much faster, because qemu can read only
the allocated parts and zero the unallocated parts instantly.
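To illustrate why sparseness makes preallocation instant (a generic coreutils sketch, not oVirt-specific; the filename is made up): a sparse file gets its full apparent size without any data blocks being written, so creating it takes no I/O.

```shell
# Create a 1 GiB file without writing any data blocks (instant).
truncate -s 1G sparse.img

# Apparent size: the full 1 GiB (1073741824 bytes).
stat -c '%s' sparse.img

# Blocks actually allocated on disk: (almost) none.
du -k sparse.img

rm -f sparse.img
```

With NFS < 4.2 the holes are not communicated to the server, so preallocating or copying such an image means transferring every byte, including the zeros.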
You can see here how much faster copying a raw disk from NFS 4.2 is
compared with block storage:
https://bugzilla.redhat.com/show_bug.cgi?id=1511891#c57
Copying the relevant part here:
## Copying from NFS 4.2 to FC storage domain
...
image     qemu-img   qemu-img/-W   dd    parallel-dd
----------------------------------------------------
100/19G   242        41            165   128

## Copying from FC storage domain to FC storage domain
...
image     qemu-img   qemu-img/-W   dd    parallel-dd
----------------------------------------------------
100/19G   383        194           178   141
As you can see, with qemu-img -W, copying from NFS 4.2 was about 4.7 times
faster than copying from FC (41 vs 194). Without -W the difference is much
smaller (242 vs 383).
You can add mount options, if you have special needs, via the engine UI or
the REST API / SDK.
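For example, a POST to /ovirt-engine/api/storagedomains can carry the NFS version and extra mount options when creating the domain. This is only a sketch from memory of the oVirt REST API model: the host name, export address, path, and the option value here are made up, and the exact field names should be verified against your engine's API model documentation.

```xml
<storage_domain>
  <name>nfs-data</name>
  <type>data</type>
  <storage>
    <type>nfs</type>
    <address>nfs.example.com</address>
    <path>/exports/data</path>
    <nfs_version>v4_2</nfs_version>
    <mount_options>nosharecache</mount_options>
  </storage>
  <host>
    <name>host1</name>
  </host>
</storage_domain>
```

The engine UI exposes the same settings under the storage domain's advanced parameters.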
Nir