[ovirt-users] moving disk from one storage domain to another

Michal Skrivanek michal.skrivanek at redhat.com
Sun Apr 9 07:29:00 UTC 2017


> On 9 Apr 2017, at 09:16, Yaniv Kaul <ykaul at redhat.com> wrote:
> 
> 
> 
> On Fri, Apr 7, 2017 at 5:29 PM, Bill James <bill.james at j2.com> wrote:
> 
> 
> On 4/7/17 12:52 AM, Nir Soffer wrote:
>> On Fri, Apr 7, 2017 at 2:40 AM Bill James <bill.james at j2.com> wrote:
>> We are trying to convert our qa environment from local nfs to gluster.
>> When I move a disk of a VM that is running on the same server as the
>> storage, it fails.
>> When I move a disk of a VM running on a different system, it works.
>> 
>> VM running on same system as disk:
>> 
>> 2017-04-06 13:31:00,588 ERROR (jsonrpc/6) [virt.vm]
>> (vmId='e598485a-dc74-43f7-8447-e00ac44dae21') Unable to start
>> replication for vda to {u'domainID':
>> u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volumeInfo': {'domainID':
>> u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path', 'leaseOffset': 0, 'path':
>> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
>> 'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'leasePath':
>> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5.lease',
>> 'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}, 'diskType': 'file',
>> 'format': 'cow', 'cache': 'none', u'volumeID':
>> u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', u'imageID':
>> u'7ae9b3f7-3507-4469-a080-d0944d0ab753', u'poolID':
>> u'8b6303b3-79c6-4633-ae21-71b15ed00675', u'device': 'disk', 'path':
>> u'/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
>> 'propagateErrors': u'off', 'volumeChain': [{'domainID':
>> u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path',
>> 'leaseOffset': 0, 'path':
>> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d',
>> 'volumeID': u'6756eb05-6803-42a7-a3a2-10233bf2ca8d', 'leasePath':
>> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d.lease',
>> 'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}, {'domainID':
>> u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path',
>> 'leaseOffset': 0, 'path':
>> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
>> 'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'leasePath':
>> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5.lease',
>> 'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}]} (vm:3594)
>> Traceback (most recent call last):
>>    File "/usr/share/vdsm/virt/vm.py", line 3588, in diskReplicateStart
>>      self._startDriveReplication(drive)
>>    File "/usr/share/vdsm/virt/vm.py", line 3713, in _startDriveReplication
>>      self._dom.blockCopy(drive.name, destxml, flags=flags)
>>    File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line
>> 69, in f
>>      ret = attr(*args, **kwargs)
>>    File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
>> line 123, in wrapper
>>      ret = f(*args, **kwargs)
>>    File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in
>> wrapper
>>      return func(inst, *args, **kwargs)
>>    File "/usr/lib64/python2.7/site-packages/libvirt.py", line 684, in
>> blockCopy
>>      if ret == -1: raise libvirtError ('virDomainBlockCopy() failed',
>> dom=self)
>> libvirtError: internal error: unable to execute QEMU command
>> 'drive-mirror': Could not open
>> '/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5':
>> Permission denied
>> 
>> 
>> [root@ovirt1 test vdsm]# ls -l
>> /rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5
>> -rw-rw---- 2 vdsm kvm 197120 Apr  6 13:29
>> /rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5
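The destination volume above is owned by vdsm:kvm with mode 0660, yet qemu gets EACCES. During drive-mirror it is the qemu process itself, not vdsm, that opens the destination image, so a rough diagnostic sketch (the volume name gv2 is inferred from the mount path above, and the gluster options assume glusterfs >= 3.7) is to check which user qemu runs under and whether the gluster volume maps ownership to uid/gid 36:

# which user and group the qemu processes run as (filter for the affected VM)
ps -o user,group,args -C qemu-kvm
id qemu
# on the gluster server: does the volume map file ownership to vdsm (36) / kvm (36)?
gluster volume get gv2 storage.owner-uid
gluster volume get gv2 storage.owner-gid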
>> 
>> 
>> 
>> Then if I try to rerun it, it reports the following, even though the move failed:
>> 
>> 2017-04-06 13:49:27,197 INFO  (jsonrpc/1) [dispatcher] Run and protect:
>> getAllTasksStatuses, Return response: {'allTasksStatus':
>> {'078d962c-e682-40f9-a177-2a8b479a7d8b': {'code': 212,
>> 'message': 'Volume already exists', 'taskState':
>>   'finished', 'taskResult': 'cleanSuccess', 'taskID':
>> '078d962c-e682-40f9-a177-2a8b479a7d8b'}}} (logUtils:52)
>> 
>> 
>> So now I have to clean up the disks from the failed move before I can
>> migrate the VM and then move the disk again. Or so it seems.
>> The disks from the failed move do exist in the new location, even though
>> the operation "failed".
>> 
>> vdsm.log attached.
>> 
>> ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch
>> vdsm-4.19.4-1.el7.centos.x86_64
>> 
>> Hi Bill,
>> 
>> Does it work after setting selinux to permissive? (setenforce 0)
>> 
>> Can you share output of:
>> 
>> ps -efZ | grep vm-name
>> (filter the specific vm) 
>> 
>> ls -lhZ /rhev/data-center/mnt
>> 
>> ls -lhZ /rhev/data-center/mnt/gluster-server:_path/sd_id/images/img_id/vol_id
>> (assuming the volume was not deleted after the operation).
>> 
>> If the volume is not deleted after the failed move disk operation, this is
>> likely a bug; please file one.
>> 
>> The actual failure may be a gluster configuration issue or a selinux-related bug.
>> 
>> Nir
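If selinux is suspected, the quickest checks (a sketch; virt_use_fusefs is the boolean that normally governs VM images on FUSE-mounted gluster volumes) are:

getenforce
# VM images on a FUSE/gluster mount generally need this boolean on:
getsebool virt_use_fusefs
# rule selinux in or out temporarily, as suggested above:
setenforce 0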
>>  
> 
> SELinux status:                 disabled
> 
> This is a less tested configuration. We usually run with selinux enabled.

Disabled specifically is untested, and doesn't work.
If for whatever reason you don't want it enforcing, set it to Permissive, but don't disable it.
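A minimal sketch of moving from Disabled to Permissive (the selinux state is fixed at boot, so setenforce alone cannot leave the Disabled state, and a filesystem relabel is needed once it has been off):

# on each host, switch the configured state in /etc/selinux/config:
sed -i 's/^SELINUX=disabled/SELINUX=permissive/' /etc/selinux/config
# queue a full relabel for the next boot, then reboot the host
touch /.autorelabel
reboot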


> Y.
>  
> 
> [root@ovirt1 test images]# ps -efZ | grep darmaster
> -                               root      5272     1  6 Mar17 ?        1-09:08:58 /usr/libexec/qemu-kvm -name guest=darmaster1.test.j2noc.com,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-13-darmaster1.test.j2no/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid 73368460-92e1-4c9e-a162-399304f1c462 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.centos,serial=30343536-3138-584D-5134-343430313833,uuid=73368460-92e1-4c9e-a162-399304f1c462 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-13-darmaster1.test.j2no/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2017-03-17T23:57:54,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/7e566f55-e060-47b7-bfa4-ac3c48d70dda/images/33db5688-dafe-40ab-9dd0-a826a90c3793/38de110d-464c-4735-97ba-3d623ee1a1b6,format=raw,if=none,id=drive-virtio-disk0,serial=33db5688-dafe-40ab-9dd0-a826a90c3793,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=38,id=hostnet0,vhost=on,vhostfd=40 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:69,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/73368460-92e1-4c9e-a162-399304f1c462.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/73368460-92e1-4c9e-a162-399304f1c462.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.176.30.96:6,password -k en-us -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -msg timestamp=on
> 
> [root@ovirt1 test ~]# ls -lhZ /rhev/data-center/mnt
> drwxr-xr-x vdsm kvm ?                                glusterSD
> drwxr-xr-x vdsm kvm ?                                netappqa3:_vol_cloud__images_ovirt__QA__ISOs
> drwxr-xr-x vdsm kvm ?                                netappqa3:_vol_cloud__storage1_ovirt__qa__inside
> drwxr-xr-x vdsm kvm ?                                ovirt1-ks.test.j2noc.com:_ovirt-store_nfs1
> drwxr-xr-x vdsm kvm ?                                ovirt2-ks.test.j2noc.com:_ovirt-store_nfs2
> drwxr-xr-x vdsm kvm ?                                ovirt2-ks.test.j2noc.com:_ovirt-store_nfs-2
> drwxr-xr-x vdsm kvm ?                                ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
> drwxr-xr-x vdsm kvm ?                                ovirt4-ks.test.j2noc.com:_ovirt-store_nfs
> drwxr-xr-x vdsm kvm ?                                ovirt5-ks.test.j2noc.com:_ovirt-store_nfs
> drwxr-xr-x vdsm kvm ?                                ovirt6-ks.test.j2noc.com:_ovirt-store_nfs
> drwxr-xr-x vdsm kvm ?                                ovirt7-ks.test.j2noc.com:_ovirt-store_nfs
> drwxr-xr-x vdsm kvm ?                                qagenfil1-nfs1:_ovirt__inside_Export
> drwxr-xr-x vdsm kvm ?                                qagenfil1-nfs1:_ovirt__inside_images
> 
> 
> [root@ovirt1 test images]# ls -lhZa /rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/33db5688-dafe-40ab-9dd0-a826a90c3793
> drwxr-xr-x vdsm kvm ?                                .
> drwxr-xr-x vdsm kvm ?                                ..
> -rw-rw---- vdsm kvm ?                                33c04305-efbe-418a-b42c-07f5f76214f2
> -rw-rw---- vdsm kvm ?                                33c04305-efbe-418a-b42c-07f5f76214f2.lease
> -rw-r--r-- vdsm kvm ?                                33c04305-efbe-418a-b42c-07f5f76214f2.meta
> -rw-rw---- vdsm kvm ?                                38de110d-464c-4735-97ba-3d623ee1a1b6
> -rw-rw---- vdsm kvm ?                                38de110d-464c-4735-97ba-3d623ee1a1b6.lease
> -rw-r--r-- vdsm kvm ?                                38de110d-464c-4735-97ba-3d623ee1a1b6.meta
> 
> 
> bug submitted: https://bugzilla.redhat.com/show_bug.cgi?id=1440198
> 
> 
> Thank you!
> 
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users