
On 4/7/17 12:52 AM, Nir Soffer wrote:
On Fri, Apr 7, 2017 at 2:40 AM Bill James <bill.james@j2.com> wrote:
We are trying to convert our QA environment from local NFS to Gluster. When I move a disk belonging to a VM that is running on the same server as the storage, it fails. When I move a disk belonging to a VM running on a different system, it works.

VM running on the same system as the disk:
2017-04-06 13:31:00,588 ERROR (jsonrpc/6) [virt.vm] (vmId='e598485a-dc74-43f7-8447-e00ac44dae21') Unable to start replication for vda to {u'domainID': u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volumeInfo': {'domainID': u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5.lease', 'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}, 'diskType': 'file', 'format': 'cow', 'cache': 'none', u'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', u'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753', u'poolID': u'8b6303b3-79c6-4633-ae21-71b15ed00675', u'device': 'disk', 'path': u'/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'propagateErrors': u'off', 'volumeChain': [{'domainID': u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d', 'volumeID': u'6756eb05-6803-42a7-a3a2-10233bf2ca8d', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d.lease', 'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}, {'domainID': u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path', 'leaseOffset': 0, 'path': u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'leasePath': u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5.lease', 'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}]} (vm:3594)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 3588, in diskReplicateStart
    self._startDriveReplication(drive)
  File "/usr/share/vdsm/virt/vm.py", line 3713, in _startDriveReplication
    self._dom.blockCopy(drive.name, destxml, flags=flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 684, in blockCopy
    if ret == -1: raise libvirtError ('virDomainBlockCopy() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command 'drive-mirror': Could not open '/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5': Permission denied
[root@ovirt1 test vdsm]# ls -l /rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5
-rw-rw---- 2 vdsm kvm 197120 Apr 6 13:29 /rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5
Then if I try to rerun it, it reports the following, even though the move failed:
2017-04-06 13:49:27,197 INFO (jsonrpc/1) [dispatcher] Run and protect: getAllTasksStatuses, Return response: {'allTasksStatus': {'078d962c-e682-40f9-a177-2a8b479a7d8b': {'code': 212, 'message': 'Volume already exists', 'taskState': 'finished', 'taskResult': 'cleanSuccess', 'taskID': '078d962c-e682-40f9-a177-2a8b479a7d8b'}}} (logUtils:52)
So now, it seems, I have to clean up the disks from the failed move before I can migrate the VM and then move the disk again. The disks from the failed move do exist in the new location, even though the operation "failed".
vdsm.log attached.
ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch
vdsm-4.19.4-1.el7.centos.x86_64
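For anyone hitting the same "Permission denied" from drive-mirror, here is a minimal diagnostic sketch in shell (not part of the original report): it checks which user the qemu process runs as and whether that user can open the destination volume. The path is the one from the error above; the "qemu" user in the read test is an assumption, so substitute whatever user ps reports.

DST=/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5

# Which user/group runs the qemu process for this VM?
ps -eo user,group,pid,args | grep '[q]emu-kvm'

# Ownership, mode and SELinux context of the destination volume
ls -lZ "$DST"

# Can that user actually read it? ("qemu" is an assumption; use the user shown by ps)
sudo -u qemu head -c1 "$DST" >/dev/null && echo "read OK" || echo "read denied"

# If SELinux were enforcing, recent AVC denials would show up here (needs auditd)
getenforce
ausearch -m avc -ts recent | tail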
Hi Bill,
Does it work after setting selinux to permissive? (setenforce 0)
Can you share output of:
ps -efZ | grep vm-name (filter the specific vm)
ls -lhZ /rhev/data-center/mnt
ls -lhZ /rhev/data-center/mnt/gluster-server:_path/sd_id/images/img_id/vol_id (assuming the volume was not deleted after the operation).
If the volume is not deleted after the failed move-disk operation, this is likely a bug; please file a bug for it.
The actual failure may be a Gluster configuration issue or an SELinux-related bug.
Nir
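Along the lines Nir suggests, a hedged sketch of Gluster-side checks (shell; not from the original thread). The volume name gv2 is inferred from the _gv2 mount path above, and storage.owner-uid / storage.owner-gid are the stock GlusterFS options controlling brick-side ownership; adjust the names to your setup.

# Current SELinux mode on the hypervisor
getenforce

# Volume options, in particular the ownership mapping applied to the bricks
gluster volume info gv2
gluster volume get gv2 storage.owner-uid
gluster volume get gv2 storage.owner-gid

# How the volume is mounted on the hypervisor
mount | grep gv2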
SELinux status: disabled

[root@ovirt1 test images]# ps -efZ | grep darmaster
- root 5272 1 6 Mar17 ? 1-09:08:58 /usr/libexec/qemu-kvm -name guest=darmaster1.test.j2noc.com,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-13-darmaster1.test.j2no/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid 73368460-92e1-4c9e-a162-399304f1c462 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.centos,serial=30343536-3138-584D-5134-343430313833,uuid=73368460-92e1-4c9e-a162-399304f1c462 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-13-darmaster1.test.j2no/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2017-03-17T23:57:54,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/7e566f55-e060-47b7-bfa4-ac3c48d70dda/images/33db5688-dafe-40ab-9dd0-a826a90c3793/38de110d-464c-4735-97ba-3d623ee1a1b6,format=raw,if=none,id=drive-virtio-disk0,serial=33db5688-dafe-40ab-9dd0-a826a90c3793,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=38,id=hostnet0,vhost=on,vhostfd=40 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:69,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/73368460-92e1-4c9e-a162-399304f1c462.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/73368460-92e1-4c9e-a162-399304f1c462.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.176.30.96:6,password -k en-us -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -msg timestamp=on

[root@ovirt1 test ~]# ls -lhZ /rhev/data-center/mnt
drwxr-xr-x vdsm kvm ? glusterSD
drwxr-xr-x vdsm kvm ? netappqa3:_vol_cloud__images_ovirt__QA__ISOs
drwxr-xr-x vdsm kvm ? netappqa3:_vol_cloud__storage1_ovirt__qa__inside
drwxr-xr-x vdsm kvm ? ovirt1-ks.test.j2noc.com:_ovirt-store_nfs1
drwxr-xr-x vdsm kvm ? ovirt2-ks.test.j2noc.com:_ovirt-store_nfs2
drwxr-xr-x vdsm kvm ? ovirt2-ks.test.j2noc.com:_ovirt-store_nfs-2
drwxr-xr-x vdsm kvm ? ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
drwxr-xr-x vdsm kvm ? ovirt4-ks.test.j2noc.com:_ovirt-store_nfs
drwxr-xr-x vdsm kvm ? ovirt5-ks.test.j2noc.com:_ovirt-store_nfs
drwxr-xr-x vdsm kvm ? ovirt6-ks.test.j2noc.com:_ovirt-store_nfs
drwxr-xr-x vdsm kvm ? ovirt7-ks.test.j2noc.com:_ovirt-store_nfs
drwxr-xr-x vdsm kvm ? qagenfil1-nfs1:_ovirt__inside_Export
drwxr-xr-x vdsm kvm ? qagenfil1-nfs1:_ovirt__inside_images

[root@ovirt1 test images]# ls -lhZa /rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/33db5688-dafe-40ab-9dd0-a826a90c3793
drwxr-xr-x vdsm kvm ? .
drwxr-xr-x vdsm kvm ? ..
-rw-rw---- vdsm kvm ? 33c04305-efbe-418a-b42c-07f5f76214f2
-rw-rw---- vdsm kvm ? 33c04305-efbe-418a-b42c-07f5f76214f2.lease
-rw-r--r-- vdsm kvm ? 33c04305-efbe-418a-b42c-07f5f76214f2.meta
-rw-rw---- vdsm kvm ? 38de110d-464c-4735-97ba-3d623ee1a1b6
-rw-rw---- vdsm kvm ? 38de110d-464c-4735-97ba-3d623ee1a1b6.lease
-rw-r--r-- vdsm kvm ? 38de110d-464c-4735-97ba-3d623ee1a1b6.meta

bug submitted: https://bugzilla.redhat.com/show_bug.cgi?id=1440198

Thank you!
class="gmail_msg"> u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path',<br class="gmail_msg"> 'leaseOffset': 0, 'path':<br class="gmail_msg"> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d',<br class="gmail_msg"> 'volumeID': u'6756eb05-6803-42a7-a3a2-10233bf2ca8d', 'leasePath':<br class="gmail_msg"> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d.lease',<br class="gmail_msg"> 'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}, {'domainID':<br class="gmail_msg"> u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path',<br class="gmail_msg"> 'leaseOffset': 0, 'path':<br class="gmail_msg"> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',<br class="gmail_msg"> 'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'leasePath':<br class="gmail_msg"> u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5.lease',<br class="gmail_msg"> 'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}]} (vm:3594)<br class="gmail_msg"> Traceback (most recent call last):<br class="gmail_msg"> File "/usr/share/vdsm/virt/vm.py", line 3588, in diskReplicateStart<br class="gmail_msg"> self._startDriveReplication(drive)<br class="gmail_msg"> File "/usr/share/vdsm/virt/vm.py", line 3713, in _startDriveReplication<br class="gmail_msg"> self._dom.blockCopy(<a moz-do-not-send="true" href="http://drive.name" rel="noreferrer" class="gmail_msg" target="_blank">drive.name</a>, destxml, flags=flags)<br class="gmail_msg"> File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line<br class="gmail_msg"> 69, in f<br class="gmail_msg"> ret = attr(*args, **kwargs)<br class="gmail_msg"> File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",<br class="gmail_msg"> line 123, in wrapper<br class="gmail_msg"> ret = f(*args, **kwargs)<br class="gmail_msg"> File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in<br class="gmail_msg"> wrapper<br class="gmail_msg"> return func(inst, *args, **kwargs)<br class="gmail_msg"> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 684, in<br class="gmail_msg"> blockCopy<br class="gmail_msg"> if ret == -1: raise libvirtError ('virDomainBlockCopy() failed',<br class="gmail_msg"> dom=self)<br class="gmail_msg"> libvirtError: internal error: unable to execute QEMU command<br class="gmail_msg"> 'drive-mirror': Could not open<br class="gmail_msg"> '/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5':<br class="gmail_msg"> Permission denied<br class="gmail_msg"> <br class="gmail_msg"> <br class="gmail_msg"> [root@ovirt1 test vdsm]# ls -l<br class="gmail_msg"> /rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5<br class="gmail_msg"> -rw-rw---- 2 vdsm kvm 197120 Apr 6 13:29<br class="gmail_msg"> 
/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5<br class="gmail_msg"> <br class="gmail_msg"> <br class="gmail_msg"> <br class="gmail_msg"> Then if I try and rerun it it says, even though move failed:<br class="gmail_msg"> <br class="gmail_msg"> 2017-04-06 13:49:27,197 INFO (jsonrpc/1) [dispatcher] Run and protect:<br class="gmail_msg"> getAllTasksStatuses, Return response: {'allT<br class="gmail_msg"> asksStatus': {'078d962c-e682-40f9-a177-2a8b479a7d8b': {'code': 212,<br class="gmail_msg"> 'message': 'Volume already exists', 'taskState':<br class="gmail_msg"> 'finished', 'taskResult': 'cleanSuccess', 'taskID':<br class="gmail_msg"> '078d962c-e682-40f9-a177-2a8b479a7d8b'}}} (logUtils:52)<br class="gmail_msg"> <br class="gmail_msg"> <br class="gmail_msg"> So now I have to clean up the disks that it failed to move so I can<br class="gmail_msg"> migrate the VM and then move the disk again.<br class="gmail_msg"> Or so it seems.<br class="gmail_msg"> Failed move disks do exist in new location, even though it "failed".<br class="gmail_msg"> <br class="gmail_msg"> vdsm.log attached.<br class="gmail_msg"> <br class="gmail_msg"> ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch<br class="gmail_msg"> vdsm-4.19.4-1.el7.centos.x86_64<br class="gmail_msg"> </blockquote> <div><br> </div> <div>Hi Bill,</div> <div><br> </div> <div>Does it work after setting selinux to permissive? (setenforce 0)</div> <div><br> </div> <div>Can you share output of:</div> <div><br> </div> <div>ps -efZ | grep vm-name<br> </div> <div>(filter the specific vm) <br> </div> <div><br> </div> <div>ls -lhZ /rhev/data-center/mnt</div> <div><br> </div> <div>ls -lhZ /rhev/data-center/mnt/gluster-server:_path/sd_id/images/img_id/vol_id</div> <div>(assuming the volume was not deleted after the operation).</div> <div><br> </div> <div>If the volume is not deleted after the failed move disk operation, this is likely</div> <div>a bug, please file a bug for this.</div> <div><br> </div> <div>The actual failure may be gluster configuration issue, or selinux related bug.</div> <div><br> </div> <div>Nir</div> <div> <br> </div> </div> </div> </blockquote> <br> SELinux status: disabled<br> <br> [root@ovirt1 test images]# ps -efZ | grep darmaster<br> - root 5272 1 6 Mar17 ? 
1-09:08:58 /usr/libexec/qemu-kvm -name guest=darmaster1.test.j2noc.com,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-13-darmaster1.test.j2no/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0,mem=1024 -uuid 73368460-92e1-4c9e-a162-399304f1c462 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.centos,serial=30343536-3138-584D-5134-343430313833,uuid=73368460-92e1-4c9e-a162-399304f1c462 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-13-darmaster1.test.j2no/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2017-03-17T23:57:54,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/7e566f55-e060-47b7-bfa4-ac3c48d70dda/images/33db5688-dafe-40ab-9dd0-a826a90c3793/38de110d-464c-4735-97ba-3d623ee1a1b6,format=raw,if=none,id=drive-virtio-disk0,serial=33db5688-dafe-40ab-9dd0-a826a90c3793,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=38,id=hostnet0,vhost=on,vhostfd=40 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:69,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/73368460-92e1-4c9e-a162-399304f1c462.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/73368460-92e1-4c9e-a162-399304f1c462.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.176.30.96:6,password -k en-us -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -msg timestamp=on<br> <br> [root@ovirt1 test ~]# ls -lhZ /rhev/data-center/mnt<br> drwxr-xr-x vdsm kvm ? glusterSD<br> drwxr-xr-x vdsm kvm ? netappqa3:_vol_cloud__images_ovirt__QA__ISOs<br> drwxr-xr-x vdsm kvm ? netappqa3:_vol_cloud__storage1_ovirt__qa__inside<br> drwxr-xr-x vdsm kvm ? ovirt1-ks.test.j2noc.com:_ovirt-store_nfs1<br> drwxr-xr-x vdsm kvm ? ovirt2-ks.test.j2noc.com:_ovirt-store_nfs2<br> drwxr-xr-x vdsm kvm ? ovirt2-ks.test.j2noc.com:_ovirt-store_nfs-2<br> drwxr-xr-x vdsm kvm ? ovirt3-ks.test.j2noc.com:_ovirt-store_nfs<br> drwxr-xr-x vdsm kvm ? ovirt4-ks.test.j2noc.com:_ovirt-store_nfs<br> drwxr-xr-x vdsm kvm ? ovirt5-ks.test.j2noc.com:_ovirt-store_nfs<br> drwxr-xr-x vdsm kvm ? ovirt6-ks.test.j2noc.com:_ovirt-store_nfs<br> drwxr-xr-x vdsm kvm ? ovirt7-ks.test.j2noc.com:_ovirt-store_nfs<br> drwxr-xr-x vdsm kvm ? qagenfil1-nfs1:_ovirt__inside_Export<br> drwxr-xr-x vdsm kvm ? 
qagenfil1-nfs1:_ovirt__inside_images<br> <br> <br> [root@ovirt1 test images]# ls -lhZa /rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/33db5688-dafe-40ab-9dd0-a826a90c3793<br> drwxr-xr-x vdsm kvm ? .<br> drwxr-xr-x vdsm kvm ? ..<br> -rw-rw---- vdsm kvm ? 33c04305-efbe-418a-b42c-07f5f76214f2<br> -rw-rw---- vdsm kvm ? 33c04305-efbe-418a-b42c-07f5f76214f2.lease<br> -rw-r--r-- vdsm kvm ? 33c04305-efbe-418a-b42c-07f5f76214f2.meta<br> -rw-rw---- vdsm kvm ? 38de110d-464c-4735-97ba-3d623ee1a1b6<br> -rw-rw---- vdsm kvm ? 38de110d-464c-4735-97ba-3d623ee1a1b6.lease<br> -rw-r--r-- vdsm kvm ? 38de110d-464c-4735-97ba-3d623ee1a1b6.meta<br> <br> <br> bug submitted: <a class="moz-txt-link-freetext" href="https://bugzilla.redhat.com/show_bug.cgi?id=1440198">https://bugzilla.redhat.com/show_bug.cgi?id=1440198</a><br> <br> <br> Thank you!<br> </body> </html> --------------705BA3C205C526559706549D--