[ovirt-users] Using Microsoft NFS server as storage domain

Pavel Gashev Pax at acronis.com
Thu Jan 21 12:54:10 UTC 2016


Hello,

First of all I would like to ask if anybody has experience using a Microsoft NFS server as a storage domain.

The main issue with MS NFS is NTFS :) NTFS doesn't support sparse files. Technically you can get them by enabling NTFS compression, but that performs badly on huge files, which is exactly our case. Also, there is no option in the oVirt web interface to use the COW format on NFS storage domains.
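For completeness, here is a rough way to see whether an image is actually stored sparse on the domain: compare the apparent file size with the allocated blocks reported over NFS. This is only a sketch, and the path is just an example:

import os

# Rough sketch, example path only: an image that is stored sparse has
# fewer allocated blocks than its logical size would require.
path = "/rhev/data-center/mnt/ms-nfs.example.com:_export/images/disk.img"
st = os.stat(path)
apparent = st.st_size            # logical size in bytes
allocated = st.st_blocks * 512   # st_blocks is reported in 512-byte units
print("apparent %d MiB, allocated %d MiB"
      % (apparent // 2**20, allocated // 2**20))

On an NTFS-backed share the two numbers should come out equal unless NTFS compression is enabled, i.e. the images end up effectively preallocated.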

Since it looks like oVirt doesn't support MS NFS, I decided to migrate all my VMs off MS NFS to other storage. And I hit a bug: live storage migration silently corrupts data when you migrate a disk off an MS NFS storage domain. If you shut down a just-migrated VM and check its filesystem, you find that it has a lot of unrecoverable errors (see the check sketched after the symptom list below).

There are the following symptoms:
1. It corrupts data if you migrate a disk from MS NFS to Linux NFS
2. It corrupts data if you migrate a disk from MS NFS to iSCSI
3. There is no corruption if you migrate from Linux NFS to iSCSI and vice versa.
4. There is no corruption if you migrate from anywhere to MS NFS.
5. Data corruption happens after the 'Auto-generated for Live Storage Migration' snapshot, so if you roll back that snapshot, you see a completely clean filesystem.
6. It doesn't depend on the SPM: data is corrupted whether the SPM is on the same host or on another one.
7. There are no error messages in vdsm/qemu/system logs.
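One way to double-check that the guest-visible contents really differ, independently of the guest filesystem, is 'qemu-img compare' between a pre-migration copy of the disk and the migrated image. A minimal sketch, with example paths, assuming the VM is powered off so both images are stable:

import subprocess

# Minimal sketch, example paths only. 'qemu-img compare' reads the
# guest-visible data through the whole backing chain, so the
# auto-generated live storage migration snapshot is included.
src = "/rhev/data-center/mnt/ms-nfs.example.com:_export/images/disk-before"
dst = "/rhev/data-center/mnt/linux-nfs.example.com:_export/images/disk-after"

rc = subprocess.call(["qemu-img", "compare", src, dst])
if rc == 0:
    print("contents are identical")
elif rc == 1:
    print("contents differ")
else:
    print("qemu-img reported an error, rc=%d" % rc)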

Yes, of course I could migrate off MS NFS with downtime; that's not the issue. The issue is that oVirt silently corrupts data under some circumstances.

Could you please help me understand the reason for the data corruption?

vdsm-4.17.13-1.el7.noarch
qemu-img-ev-2.3.0-31.el7_2.4.1.x86_64
libvirt-daemon-1.2.17-13.el7_2.2.x86_64
ovirt-engine-backend-3.6.1.3-1.el7.centos.noarch

Thank you

