
Hello,

I just updated to Version 4.4.3.11-1.el8 (engine and host), and now I cannot copy or move disks. The storage domains are glusterfs:

# gluster --version
glusterfs 7.8

Here is what I found in vdsm.log:

2020-11-14 14:08:16,917+0000 INFO  (tasks/5) [storage.SANLock] Releasing Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', path='/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease', offset=0) (clusterlock:530)
2020-11-14 14:08:17,015+0000 INFO  (tasks/5) [storage.SANLock] Successfully released Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', path='/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease', offset=0) (clusterlock:540)
2020-11-14 14:08:17,016+0000 ERROR (tasks/5) [root] Job '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' failed (jobs:223)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/jobs.py", line 159, in run
    self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdm/api/copy_data.py", line 110, in _run
    self._operation.run()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/qemuimg.py", line 374, in run
    for data in self._operation.watch():
  File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 106, in watch
    self._finalize(b"", err)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 179, in _finalize
    raise cmdutils.Error(self._cmd, rc, out, err)
vdsm.common.cmdutils.Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', '-O', 'raw', '/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051', '/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b'] failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while reading sector 260177858: No such file or directory\n')
2020-11-14 14:08:17,017+0000 INFO  (tasks/5) [root] Job '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' will be deleted in 3600 seconds (jobs:251)
2020-11-14 14:08:17,017+0000 INFO  (tasks/5) [storage.ThreadPool.WorkerThread] FINISH task 6cb1d496-d1ca-40b5-a488-a72982738bab (threadPool:151)
2020-11-14 14:08:17,316+0000 INFO  (jsonrpc/2) [api.host] START getJobs(job_type='storage', job_ids=['8cd732fc-d69b-4c32-8b35-e4a8e47396fb']) from=::ffff:192.168.5.165,36616, flow_id=49320e0a-14fb-4cbb-bdfd-b2546c260bf7 (api:48)

--
Jose Ferradeira
http://www.logicworks.pt

What happens when you run this command (from the SPM host)?

sudo -u vdsm /usr/bin/qemu-img convert -p -t none -T none -f raw -O raw /rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051 /rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b

Best Regards,
Strahil Nikolov

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/C3XZ4MHQ4UR3O4...

Here it is:

# sudo -u vdsm /usr/bin/qemu-img convert -p -t none -T none -f raw -O raw /rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051 /rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b
qemu-img: /rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b: error while converting raw: Could not create file: No such file or directory

Thanks

José
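A check worth doing at this point (a sketch, not from the thread): the error says the destination file could not be *created*, which usually means the parent directory is missing. The path is copied from the output above; the `DST_DIR` variable name is just for illustration.

```shell
# Sketch: verify the destination image directory still exists on the FUSE mount.
# Path copied from the failing command; DST_DIR is an illustrative name.
DST_DIR=/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4
ls -ld "$DST_DIR" || echo "destination image directory is missing"
# If it is missing here, also check the same relative path directly on each gluster brick.
```

If the directory only exists on some bricks, that points at a replication or self-heal problem rather than at qemu-img.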

Do the files really exist? Any heals pending?

Best Regards,
Strahil Nikolov
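Those two questions could be answered with something like the following sketch. The volume name `data2` is an assumption inferred from the `_data2` mount path and may differ; the source path is copied from the failing command earlier in the thread.

```shell
# Sketch: check the source image through the FUSE mount, then look for pending heals.
# "data2" is an assumed volume name inferred from the _data2 mount; adjust to the real one.
SRC=/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051
stat "$SRC" || echo "source image not visible through the mount"
gluster volume heal data2 info || echo "gluster CLI not available on this machine"
gluster volume heal data2 info split-brain || true
```

Any entries listed under `heal info` for the image files involved would explain reads failing mid-copy.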

I have an alert:

Data Center Default compatibility version is 4.4, which is lower than latest available version 4.5. Please upgrade your Data Center to latest version to successfully finish upgrade of your setup.

But when I try to update it to 4.5, I always get this error:

Host is compatible with versions (4.2,4.3,4.4) and cannot join Cluster Default which is set to version 4.5.

How can I change the host to 4.5?

On Sun, Nov 15, 2020 at 6:16 PM <suporte@logicworks.pt> wrote:
I have an alert:
Data Center Default compatibility version is 4.4, which is lower than latest available version 4.5. Please upgrade your Data Center to latest version to successfully finish upgrade of your setup.
But when trying to update it to 4.5 always get this error:
Host is compatible with versions (4.2,4.3,4.4) and cannot join Cluster Default which is set to version 4.5.
How can I change host to 4.5?
Cluster version 4.5 is available only on RHEL 8.3. When CentOS 8.3 is released, and RHEL AV 8.3 is built for CentOS, you will be able to upgrade your hosts. Once all hosts support cluster version 4.5, you will be able to upgrade the cluster and the data center.

This should not be related to the qemu-img error.

Nir

On Sat, Nov 14, 2020 at 4:45 PM <suporte@logicworks.pt> wrote:
Hello,
I just updated to Version 4.4.3.11-1.el8 (engine and host),
and now I cannot copy or move disks.
Storage domains are glusterfs
# gluster --version
glusterfs 7.8
Here is what I found on vdsm.log
2020-11-14 14:08:17,016+0000 ERROR (tasks/5) [root] Job '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' failed (jobs:223)
...
vdsm.common.cmdutils.Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', '-O', 'raw', '/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051', '/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b'] failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while reading sector 260177858: No such file or directory\n')
This is an impossible error for read(), preadv() etc.
This was reported here a long time ago with various versions of gluster. I don't think we have gotten any response from the gluster folks about it yet. Can you file an oVirt bug about this?

Nir
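One way to chase this outside of qemu-img (a sketch; the source path is copied from the log, and qemu-img reports offsets in 512-byte sectors): convert the failing sector number into a byte offset and try reading that region directly through the FUSE mount.

```shell
# Sketch: turn the failing sector number into a byte offset and read it directly.
# qemu-img counts 512-byte sectors.
SECTOR=260177858
OFFSET=$((SECTOR * 512))
echo "failing byte offset: $OFFSET"
# Source path copied from the log above; iflag=direct mimics qemu-img's '-t none' (O_DIRECT) cache mode.
SRC=/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051
sudo -u vdsm dd if="$SRC" of=/dev/null bs=512 count=1 skip="$SECTOR" iflag=direct || echo "direct read failed at sector $SECTOR"
```

If the `dd` reproduces the ENOENT while a plain buffered read of the same region succeeds, that would narrow it down to gluster's O_DIRECT read path.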

So, you think it's really a bug?

Most probably a gluster bug. Best Regards, Strahil Nikolov В неделя, 15 ноември 2020 г., 22:31:24 Гринуич+2, suporte@logicworks.pt <suporte@logicworks.pt> написа: So, you think it's really a bug? ________________________________ De: "Nir Soffer" <nsoffer@redhat.com> Para: suporte@logicworks.pt Cc: "users" <users@ovirt.org>, "Sahina Bose" <sabose@redhat.com>, "Krutika Dhananjay" <kdhananj@redhat.com>, "Nisan, Tal" <tnisan@redhat.com> Enviadas: Domingo, 15 De Novembro de 2020 15:03:21 Assunto: Re: [ovirt-users] Cannot copy or move disks On Sat, Nov 14, 2020 at 4:45 PM <suporte@logicworks.pt> wrote:
Hello,
I just update to Version 4.4.3.11-1.el8. Engine and host
and now I cannot copy or move disks.
Storage domains are glusterfs
# gluster --version glusterfs 7.8
Here is what I found on vdsm.log
2020-11-14 14:08:16,917+0000 INFO (tasks/5) [storage.SANLock] Releasing Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', path='/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease', offset=0) (clusterlock:530)
2020-11-14 14:08:17,015+0000 INFO (tasks/5) [storage.SANLock] Successfully released Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', path='/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease', offset=0) (clusterlock:540)
2020-11-14 14:08:17,016+0000 ERROR (tasks/5) [root] Job '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' failed (jobs:223)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/jobs.py", line 159, in run
    self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdm/api/copy_data.py", line 110, in _run
    self._operation.run()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/qemuimg.py", line 374, in run
    for data in self._operation.watch():
  File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 106, in watch
    self._finalize(b"", err)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 179, in _finalize
    raise cmdutils.Error(self._cmd, rc, out, err)
vdsm.common.cmdutils.Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', '-O', 'raw', '/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051', '/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b'] failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while reading sector 260177858: No such file or directory\n')
This is an impossible error for read(), preadv() etc. — POSIX does not allow these calls to fail with ENOENT on an open file descriptor, so the error must be coming from the gluster layer underneath.
2020-11-14 14:08:17,017+0000 INFO (tasks/5) [root] Job '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' will be deleted in 3600 seconds (jobs:251)
2020-11-14 14:08:17,017+0000 INFO (tasks/5) [storage.ThreadPool.WorkerThread] FINISH task 6cb1d496-d1ca-40b5-a488-a72982738bab (threadPool:151)
2020-11-14 14:08:17,316+0000 INFO (jsonrpc/2) [api.host] START getJobs(job_type='storage', job_ids=['8cd732fc-d69b-4c32-8b35-e4a8e47396fb']) from=::ffff:192.168.5.165,36616, flow_id=49320e0a-14fb-4cbb-bdfd-b2546c260bf7 (api:48)
This was reported here a long time ago with various versions of gluster. I don't think we got any response from the gluster folks about it yet.

Can you file an oVirt bug about this?

Nir
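One way to confirm the failure is in the storage layer rather than in vdsm is to retry the exact read qemu-img choked on, directly against the image on the gluster mount. The sketch below is only an approximation: qemu-img ran with '-t none -T none' (O_DIRECT), which a plain buffered pread does not reproduce, and the image path in the comment is the hypothetical source volume from the log — adjust it to your environment.

```python
import os

SECTOR_SIZE = 512  # qemu-img reports read errors in 512-byte sectors


def sector_to_byte_offset(sector):
    """Byte offset into a raw image for a qemu-img sector number."""
    return sector * SECTOR_SIZE


def probe_read(path, sector, length=SECTOR_SIZE):
    """Retry the read that qemu-img failed on.

    Returns (True, bytes_read) on success, (False, errno) on OSError.
    Note: no O_DIRECT here, so gluster's page cache may mask the bug.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        data = os.pread(fd, length, sector_to_byte_offset(sector))
        return True, len(data)
    except OSError as e:
        return False, e.errno
    finally:
        os.close(fd)


# The failing sector from the traceback above maps to this byte offset:
print(sector_to_byte_offset(260177858))  # 133211063296

# Hypothetical usage against the source image from the log:
# probe_read("/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/"
#            "83f8bbfd-cfa3-46d9-a823-c36054826d13/images/"
#            "789f6e50-b954-4dda-a6d5-077fdfb357d2/"
#            "d95a3e83-74d2-40a6-9f8f-e6ae68794051", 260177858)
```

If probe_read returns (False, 2) (ENOENT) here too, that points squarely at the gluster FUSE client rather than at qemu-img or vdsm.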

On Sun, Nov 15, 2020 at 10:27 PM <suporte@logicworks.pt> wrote:
So, you think it's really a bug?
I'm pretty sure this is a bug on gluster side.

+Gobinda Das <godas@redhat.com> +Ritesh Chikatwar <rchikatw@redhat.com> +Vinayakswami Hariharmath <vharihar@redhat.com>

Vinayak, could you look at this?

Sure. I will check this up soon.

Regards
Vh

After a quick glance, we need some more information. I think it is better to open a ticket and provide the information below so we can verify the gluster issue:

1. gluster volume info
2. gluster logs from when you observe this issue
3. A description in the ticket of the scenario in which this happened

Regards
Vh
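A minimal sketch of gathering the diagnostics requested above from the affected host. Assumptions are flagged in the comments: the volume name "data2" is inferred from the ":_data2" mount path in the log, and /var/log/glusterfs is only the default client log location.

```python
import subprocess
from pathlib import Path

# Volume name inferred from the mount path in the vdsm log (":_data2");
# this is an assumption — substitute your actual volume name.
VOLUME = "data2"

# Item 1 from the request, plus volume status, which usually helps too.
DIAG_COMMANDS = [
    ["gluster", "volume", "info", VOLUME],
    ["gluster", "volume", "status", VOLUME],
]

# Item 2: default client-side log directory; mount logs are named
# after the mount point (e.g. rhev-data-center-mnt-glusterSD-....log).
GLUSTER_LOG_DIR = Path("/var/log/glusterfs")


def collect(run=subprocess.run):
    """Run each diagnostic command, returning {command line: stdout}."""
    results = {}
    for cmd in DIAG_COMMANDS:
        proc = run(cmd, capture_output=True, text=True)
        results[" ".join(cmd)] = proc.stdout
    return results
```

Attaching the output of collect() together with the mount log covering the failed copy should give the gluster team enough to start with.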

Already filed one.

Regards
José

Can you please provide me the link to the ticket?

Regards
Vh

On Tue, Nov 17, 2020 at 3:32 PM <suporte@logicworks.pt> wrote:
already opened a file
Regards
José

https://bugzilla.redhat.com/show_bug.cgi?id=1898207

Regards
José
participants (5)

- Nir Soffer
- Sahina Bose
- Strahil Nikolov
- suporte@logicworks.pt
- Vinayakswami Hariharmath