Sure. I will look into this soon.
Regards
Vh
On Tue, Nov 17, 2020 at 12:18 PM Sahina Bose <sabose(a)redhat.com> wrote:
+Gobinda Das <godas(a)redhat.com> +Ritesh Chikatwar <rchikatw(a)redhat.com>
+Vinayakswami Hariharmath <vharihar(a)redhat.com>
Vinayak, could you look at this?
On Mon, Nov 16, 2020 at 3:10 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
> On Sun, Nov 15, 2020 at 10:27 PM <suporte(a)logicworks.pt> wrote:
> >
> > So, you think it's really a bug?
>
> I'm pretty sure this is a bug on gluster side.
>
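> A hedged way to correlate this on the gluster side, assuming the default
> fuse client log location (one log per mount, with slashes in the mount
> path replaced by dashes):
>
>   # look for the matching failure around the same timestamp
>   grep -i 'no such file' /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log
>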
> >
> > ________________________________
> > From: "Nir Soffer" <nsoffer(a)redhat.com>
> > To: suporte(a)logicworks.pt
> > Cc: "users" <users(a)ovirt.org>, "Sahina Bose" <sabose(a)redhat.com>, "Krutika Dhananjay" <kdhananj(a)redhat.com>, "Nisan, Tal" <tnisan(a)redhat.com>
> > Sent: Sunday, November 15, 2020 15:03:21
> > Subject: Re: [ovirt-users] Cannot copy or move disks
> >
> > On Sat, Nov 14, 2020 at 4:45 PM <suporte(a)logicworks.pt> wrote:
> > >
> > > Hello,
> > >
> > > I just updated to version 4.4.3.11-1.el8 (engine and host),
> > >
> > > and now I cannot copy or move disks.
> > >
> > > Storage domains are glusterfs
> > >
> > > # gluster --version
> > > glusterfs 7.8
> > >
> > > Here is what I found on vdsm.log
> > >
> > > 2020-11-14 14:08:16,917+0000 INFO (tasks/5) [storage.SANLock] Releasing Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', path='/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease', offset=0) (clusterlock:530)
> > > 2020-11-14 14:08:17,015+0000 INFO (tasks/5) [storage.SANLock] Successfully released Lease(name='01178644-2ad6-4d37-8657-f33f547bee6b', path='/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b.lease', offset=0) (clusterlock:540)
> > > 2020-11-14 14:08:17,016+0000 ERROR (tasks/5) [root] Job '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' failed (jobs:223)
> > > Traceback (most recent call last):
> > >   File "/usr/lib/python3.6/site-packages/vdsm/jobs.py", line 159, in run
> > >     self._run()
> > >   File "/usr/lib/python3.6/site-packages/vdsm/storage/sdm/api/copy_data.py", line 110, in _run
> > >     self._operation.run()
> > >   File "/usr/lib/python3.6/site-packages/vdsm/storage/qemuimg.py", line 374, in run
> > >     for data in self._operation.watch():
> > >   File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 106, in watch
> > >     self._finalize(b"", err)
> > >   File "/usr/lib/python3.6/site-packages/vdsm/storage/operation.py", line 179, in _finalize
> > >     raise cmdutils.Error(self._cmd, rc, out, err)
> > > vdsm.common.cmdutils.Error: Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T', 'none', '-f', 'raw', '-O', 'raw', '/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051', '/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/97977cbf-eecc-4476-a11f-7798425d40c4/01178644-2ad6-4d37-8657-f33f547bee6b'] failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while reading sector 260177858: No such file or directory\n')
> >
> > This is an impossible error for read(), preadv() etc.
> >
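> > To confirm this outside of vdsm and qemu-img, a direct read at the same
> > offset on the gluster mount should reproduce it. A minimal sketch,
> > assuming the source image path copied from the log above; iflag=direct
> > matches the '-t none -T none' (O_DIRECT) mode qemu-img was running in:
> >
> >   # hypothetical reproducer: read the failing sector,
> >   # byte offset = 260177858 * 512
> >   dd if=/rhev/data-center/mnt/glusterSD/node1-teste.acloud.pt:_data2/83f8bbfd-cfa3-46d9-a823-c36054826d13/images/789f6e50-b954-4dda-a6d5-077fdfb357d2/d95a3e83-74d2-40a6-9f8f-e6ae68794051 \
> >      of=/dev/null bs=512 skip=260177858 count=1 iflag=direct
> >
> > If that also fails with "No such file or directory", the problem is in
> > the gluster fuse client, not in qemu-img.
> >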
> > > 2020-11-14 14:08:17,017+0000 INFO (tasks/5) [root] Job '8cd732fc-d69b-4c32-8b35-e4a8e47396fb' will be deleted in 3600 seconds (jobs:251)
> > > 2020-11-14 14:08:17,017+0000 INFO (tasks/5) [storage.ThreadPool.WorkerThread] FINISH task 6cb1d496-d1ca-40b5-a488-a72982738bab (threadPool:151)
> > > 2020-11-14 14:08:17,316+0000 INFO (jsonrpc/2) [api.host] START getJobs(job_type='storage', job_ids=['8cd732fc-d69b-4c32-8b35-e4a8e47396fb']) from=::ffff:192.168.5.165,36616, flow_id=49320e0a-14fb-4cbb-bdfd-b2546c260bf7 (api:48)
> >
> > This was reported here a long time ago with various versions of gluster.
> > I don't think we have gotten any response from the gluster folks about it yet.
> >
> > Can you file an oVirt bug about this?
> >
> > Nir