[ovirt-users] Using Microsoft NFS server as storage domain

Nir Soffer nsoffer at redhat.com
Thu Jan 21 23:23:54 UTC 2016


Adding Allon

On Thu, Jan 21, 2016 at 10:55 PM, Nir Soffer <nsoffer at redhat.com> wrote:
> On Thu, Jan 21, 2016 at 10:13 PM, Pavel Gashev <Pax at acronis.com> wrote:
>> On Thu, 2016-01-21 at 18:42 +0000, Nir Soffer wrote:
>>
>> On Thu, Jan 21, 2016 at 2:54 PM, Pavel Gashev <Pax at acronis.com> wrote:
>>
>> Also there is no option in the oVirt web interface to use the COW
>> format on NFS storage domains.
>>
>>
>> You can
>> 1. create a small disk (1G)
>> 2. create a snapshot
>> 3. extend the disk to the final size
>>
>> And you have NFS with COW format. The performance difference with one
>> snapshot should be small.
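
For reference, the end result of that trick on the storage side is just a
small qcow2 base volume with a qcow2 leaf on top, which is then resized.
Something like the sketch below shows the same chain outside of oVirt,
using qemu-img directly; the paths and the final size are made up and this
is not the real oVirt image layout, only an illustration:

    import subprocess

    # Made-up paths on an NFS mount; oVirt uses its own image/volume layout.
    base = '/mnt/msnfs/images/base.qcow2'
    leaf = '/mnt/msnfs/images/leaf.qcow2'

    # 1. small qcow2 disk
    subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2', base, '1G'])

    # 2. "snapshot" = a new qcow2 leaf backed by the base volume
    subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                           '-o', 'backing_file=%s,backing_fmt=qcow2' % base,
                           leaf])

    # 3. extend the leaf to the final size (example size)
    subprocess.check_call(['qemu-img', 'resize', leaf, '100G'])

    # Inspect the resulting chain
    subprocess.check_call(['qemu-img', 'info', '--backing-chain', leaf])
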
>>
>>
>> Yes. And there are other workarounds:
>> 1. Use some block (e.g. iSCSI) storage for creating a thin provisioned disk
>> (which is COW) and then move it to the required storage.
>> 2. Keep an empty 1G COW disk and copy+resize it when required.
>> 3. Use ovirt-shell for creating disks.
>>
>> Unfortunately, these are not native ways. These are ways for a hacker. A
>> plain user clicks "New" in the "Disks" tab and selects the "Thin Provision"
>> allocation policy. It's hard to explain to users that the simplest and most
>> obvious way is wrong. I hope it's wrong only for MS NFS.
>
> Sure I agree.
>
> I think we do not use the qcow format on file storage since there is no
> need for it; the file system is always sparse. I guess we did not plan
> for MS NFS.
>
> I would open a bug for supporting the qcow format on file storage. If this
> works for some users, I think this option should be available in the UI.
> Hopefully there are not too many assumptions in the code about this.
>
> Allon, do you see any reason not to support this for users that need this option?
>
>>
>> 5. Data corruption happens after the 'Auto-generated for Live Storage
>> Migration' snapshot. So if you roll back to the snapshot, you see a
>> completely clean filesystem.
>>
>>
>> Can you try to create a live-snapshot on MS NFS? It seems that this is the
>> issue, not live storage migration.
>>
>>
>> Live snapshots work very well on MS NFS. Creating and deleting them works
>> live without any issues. I have done it many times. Please note that
>> everything before the snapshot remains consistent. Data corruption occurs
>> after the snapshot, so only non-snapshotted data is corrupted.
>
> Live storage migration starts by creating a snapshot, then copying the disks
> to the new storage, and then mirroring the active layer so that the old and
> new disks are the same. Finally we switch to the new disk and delete the old
> one.
>
> So the issue is probably in the mirroring step. This is most likely a
> qemu issue.
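
The mirroring step itself can be exercised with libvirt alone, roughly like
the sketch below. The VM name, disk alias and destination path are guesses
for this setup, and vdsm actually drives this with different flags and a
pre-created destination volume:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('testvm')   # placeholder VM name

    # Destination for the active layer on the other storage (made-up path).
    dest_xml = """
    <disk type='file'>
      <source file='/mnt/other-storage/testvm-copy.qcow2'/>
      <driver name='qemu' type='qcow2'/>
    </disk>
    """

    # Start mirroring the active layer of the first disk ('vda' is a guess).
    dom.blockCopy('vda', dest_xml)

    # ... wait for the block job to report that it is synchronized ...

    # Pivot to the new copy (the "switch to the new disk" step).
    dom.blockJobAbort('vda', flags=libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT)
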
>
>>
>> Do you have qemu-guest-agent on the VM? Without qemu-guest-agent, file
>> systems in the guest will not be frozen during the snapshot, which may
>> cause an inconsistent snapshot.
>>
>>
>> I tried it with and without qemu-guest-agent. It makes no difference.
>>
>> Can you reproduce this with virt-manager, or by creating a VM and taking
>> a snapshot using virsh?
>>
>>
>> Sorry, I'm not sure how I can reproduce the issue using virsh.
>
> I'll try to get instructions for this from the libvirt developers. If this
> happens with libvirt alone, it is a libvirt or qemu bug, and there is little
> we (oVirt) can do about it.
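
A minimal way to try it with libvirt alone would be something like the
sketch below: take a disk-only external snapshot of a VM whose disk lives
on the MS NFS mount, with and without quiescing. The VM name and snapshot
name are just examples:

    import libvirt

    SNAPSHOT_XML = """
    <domainsnapshot>
      <name>msnfs-test</name>
    </domainsnapshot>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('testvm')   # VM with its disk on the MS NFS mount

    # Disk-only external snapshot, the same kind oVirt takes for live
    # snapshots and live storage migration.
    flags = libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY
    # Add libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE to also freeze guest
    # file systems (requires qemu-guest-agent in the guest).
    dom.snapshotCreateXML(SNAPSHOT_XML, flags)
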
>
>>
>>
>> Please file a bug and attach:
>>
>> - /var/log/vdsm/vdsm.log
>> - /var/log/messages
>> - /var/log/sanlock.log
>> - output of nfsstat during the test, maybe run it every minute?
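
For the nfsstat samples, a trivial loop like this is enough while the test
runs (the output file name is just an example):

    import subprocess
    import time

    # Append an nfsstat sample every 60 seconds.
    with open('/tmp/nfsstat.log', 'a') as log:
        while True:
            log.write('--- %s ---\n' % time.ctime())
            log.flush()
            subprocess.call(['nfsstat'], stdout=log)
            time.sleep(60)
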
>>
>>
>> Ok, I will collect the logs and file a bug.
>>
>> Thanks
>>
>>


