On Fri, Apr 7, 2017 at 5:29 PM, Bill James <bill.james@j2.com> wrote:

On 4/7/17 12:52 AM, Nir Soffer wrote:

On Fri, Apr 7, 2017 at 2:40 AM Bill James <bill.james@j2.com> wrote:
We are trying to convert our qa environment from local nfs to gluster.
When I move a disk of a VM that is running on the same server as the
storage, it fails. When I move a disk of a VM running on a different
system, it works.

VM running on same system as disk:
2017-04-06 13:31:00,588 ERROR (jsonrpc/6) [virt.vm]
(vmId='e598485a-dc74-43f7-8447-e00ac44dae21') Unable to start
replication for vda to {u'domainID':
u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volumeInfo': {'domainID':
u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5.lease',
'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}, 'diskType': 'file',
'format': 'cow', 'cache': 'none', u'volumeID':
u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', u'imageID':
u'7ae9b3f7-3507-4469-a080-d0944d0ab753', u'poolID':
u'8b6303b3-79c6-4633-ae21-71b15ed00675', u'device': 'disk', 'path':
u'/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
'propagateErrors': u'off', 'volumeChain': [{'domainID':
u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d',
'volumeID': u'6756eb05-6803-42a7-a3a2-10233bf2ca8d', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/6756eb05-6803-42a7-a3a2-10233bf2ca8d.lease',
'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}, {'domainID':
u'6affd8c3-2c51-4cd1-8300-bfbbb14edbe9', 'volType': 'path',
'leaseOffset': 0, 'path':
u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5',
'volumeID': u'30fd46c9-c738-4b13-aeca-3dc9ffc677f5', 'leasePath':
u'/rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5.lease',
'imageID': u'7ae9b3f7-3507-4469-a080-d0944d0ab753'}]} (vm:3594)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 3588, in diskReplicateStart
    self._startDriveReplication(drive)
  File "/usr/share/vdsm/virt/vm.py", line 3713, in _startDriveReplication
    self._dom.blockCopy(drive.name, destxml, flags=flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 941, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 684, in blockCopy
    if ret == -1: raise libvirtError ('virDomainBlockCopy() failed', dom=self)
libvirtError: internal error: unable to execute QEMU command
'drive-mirror': Could not open
'/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5':
Permission denied

[root@ovirt1 test vdsm]# ls -l /rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5
-rw-rw---- 2 vdsm kvm 197120 Apr 6 13:29 /rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5
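
The file itself is vdsm:kvm with mode 0660, so a check worth making is
whether every directory component of that path is traversable by the
user qemu runs as; namei walks the whole chain (a diagnostic sketch, to
be run on the host where the VM is running):

    # show owner/mode of each component, including the /rhev symlinks
    namei -l /rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5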

Then if I try to rerun the move it reports the following, even though
the move failed:
2017-04-06 13:49:27,197 INFO (jsonrpc/1) [dispatcher] Run and protect:
getAllTasksStatuses, Return response: {'allTasksStatus':
{'078d962c-e682-40f9-a177-2a8b479a7d8b': {'code': 212,
'message': 'Volume already exists', 'taskState': 'finished',
'taskResult': 'cleanSuccess', 'taskID':
'078d962c-e682-40f9-a177-2a8b479a7d8b'}}} (logUtils:52)
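
The 'Volume already exists' message suggests the failed move left the
destination volume behind on the gluster domain; one way to confirm
before cleaning anything up (a sketch, reusing the image ID from the
error above):

    # list whatever the failed move left under the destination image directory
    ls -l /rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/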

So now I have to clean up the disks that it failed to move so that I
can migrate the VM and then move the disk again. Or so it seems.
The disks from the failed move do exist in the new location, even
though the move "failed".

vdsm.log attached.

ovirt-engine-tools-4.1.0.4-1.el7.centos.noarch
vdsm-4.19.4-1.el7.centos.x86_64

Hi Bill,

Does it work after setting selinux to permissive? (setenforce 0)

Can you share output of:

ps -efZ | grep vm-name
(filter the specific vm)

ls -lhZ /rhev/data-center/mnt

ls -lhZ /rhev/data-center/mnt/gluster-server:_path/sd_id/images/img_id/vol_id
(assuming the volume was not deleted after the operation)

If the volume is not deleted after the failed move disk operation,
this is likely a bug; please file a bug for this.

The actual failure may be a gluster configuration issue, or a
selinux-related bug.
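
For the gluster side, one quick thing to verify is the ownership
options oVirt expects on a volume used as a storage domain (a sketch,
assuming the volume is named gv2 and that your gluster supports
"volume get"; vdsm is uid 36 and kvm is gid 36):

    gluster volume get gv2 storage.owner-uid    # oVirt expects 36 (vdsm)
    gluster volume get gv2 storage.owner-gid    # oVirt expects 36 (kvm)
    gluster volume info gv2                     # review the virt-group options as a whole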

Nir

SELinux status: disabled

This is a less tested configuration. We usually run with selinux enabled.
Y.
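
For reference, the line above is what sestatus prints when selinux is
off; the standard tools to confirm both the runtime and the configured
mode are:

    getenforce    # one-word runtime mode: Enforcing / Permissive / Disabled
    sestatus      # also shows the mode set in /etc/selinux/config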

[root@ovirt1 test images]# ps -efZ | grep darmaster
-    root 5272 1 6 Mar17 ? 1-09:08:58 /usr/libexec/qemu-kvm -name
guest=darmaster1.test.j2noc.com,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-13-darmaster1.test.j2no/master-key.aes
-machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off -cpu Nehalem -m
size=1048576k,slots=16,maxmem=4194304k -realtime mlock=off -smp
1,maxcpus=16,sockets=16,cores=1,threads=1 -numa
node,nodeid=0,cpus=0,mem=1024 -uuid
73368460-92e1-4c9e-a162-399304f1c462 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-3.1611.el7.centos,serial=30343536-3138-584D-5134-343430313833,uuid=73368460-92e1-4c9e-a162-399304f1c462
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-13-darmaster1.test.j2no/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2017-03-17T23:57:54,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot
strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/7e566f55-e060-47b7-bfa4-ac3c48d70dda/images/33db5688-dafe-40ab-9dd0-a826a90c3793/38de110d-464c-4735-97ba-3d623ee1a1b6,format=raw,if=none,id=drive-virtio-disk0,serial=33db5688-dafe-40ab-9dd0-a826a90c3793,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=38,id=hostnet0,vhost=on,vhostfd=40 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:69,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/73368460-92e1-4c9e-a162-399304f1c462.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/73368460-92e1-4c9e-a162-399304f1c462.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-device usb-tablet,id=input0,bus=usb.0,port=1 -vnc
10.176.30.96:6,password -k en-us -device
VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -msg timestamp=on
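
Two follow-ups that may be worth running against this output (a
diagnostic sketch; pid 5272 is the qemu process above, the path is the
destination volume from the error, and it assumes the leftover volume
still exists):

    # which uid/gid the qemu process actually runs with
    ps -o user,group,pid,comm -p 5272
    # try reading the destination volume as vdsm; note drive-mirror also needs write access
    sudo -u vdsm dd if=/rhev/data-center/8b6303b3-79c6-4633-ae21-71b15ed00675/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/7ae9b3f7-3507-4469-a080-d0944d0ab753/30fd46c9-c738-4b13-aeca-3dc9ffc677f5 of=/dev/null bs=1M count=1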

[root@ovirt1 test ~]# ls -lhZ /rhev/data-center/mnt
drwxr-xr-x vdsm kvm ?    glusterSD
drwxr-xr-x vdsm kvm ?    netappqa3:_vol_cloud__images_ovirt__QA__ISOs
drwxr-xr-x vdsm kvm ?    netappqa3:_vol_cloud__storage1_ovirt__qa__inside
drwxr-xr-x vdsm kvm ?    ovirt1-ks.test.j2noc.com:_ovirt-store_nfs1
drwxr-xr-x vdsm kvm ?    ovirt2-ks.test.j2noc.com:_ovirt-store_nfs2
drwxr-xr-x vdsm kvm ?    ovirt2-ks.test.j2noc.com:_ovirt-store_nfs-2
drwxr-xr-x vdsm kvm ?    ovirt3-ks.test.j2noc.com:_ovirt-store_nfs
drwxr-xr-x vdsm kvm ?    ovirt4-ks.test.j2noc.com:_ovirt-store_nfs
drwxr-xr-x vdsm kvm ?    ovirt5-ks.test.j2noc.com:_ovirt-store_nfs
drwxr-xr-x vdsm kvm ?    ovirt6-ks.test.j2noc.com:_ovirt-store_nfs
drwxr-xr-x vdsm kvm ?    ovirt7-ks.test.j2noc.com:_ovirt-store_nfs
drwxr-xr-x vdsm kvm ?    qagenfil1-nfs1:_ovirt__inside_Export
drwxr-xr-x vdsm kvm ?    qagenfil1-nfs1:_ovirt__inside_images

[root@ovirt1 test images]# ls -lhZa /rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/33db5688-dafe-40ab-9dd0-a826a90c3793
drwxr-xr-x vdsm kvm ?    .
drwxr-xr-x vdsm kvm ?    ..
-rw-rw---- vdsm kvm ?    33c04305-efbe-418a-b42c-07f5f76214f2
-rw-rw---- vdsm kvm ?    33c04305-efbe-418a-b42c-07f5f76214f2.lease
-rw-r--r-- vdsm kvm ?    33c04305-efbe-418a-b42c-07f5f76214f2.meta
-rw-rw---- vdsm kvm ?    38de110d-464c-4735-97ba-3d623ee1a1b6
-rw-rw---- vdsm kvm ?    38de110d-464c-4735-97ba-3d623ee1a1b6.lease
-rw-r--r-- vdsm kvm ?    38de110d-464c-4735-97ba-3d623ee1a1b6.meta
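
If useful for the bug report, qemu-img can show what actually landed in
those leftover volumes (a sketch; the path is one of the volumes listed
above):

    # report format, virtual size, and backing file of the leftover volume
    qemu-img info /rhev/data-center/mnt/glusterSD/ovirt1-ks.test.j2noc.com:_gv2/6affd8c3-2c51-4cd1-8300-bfbbb14edbe9/images/33db5688-dafe-40ab-9dd0-a826a90c3793/38de110d-464c-4735-97ba-3d623ee1a1b6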

bug submitted: https://bugzilla.redhat.com/show_bug.cgi?id=1440198

Thank you!

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users