On Thu, May 27, 2021 at 6:58 PM jb <jonbae77(a)gmail.com> wrote:
Hi Liran,
here are the vdsm logs, from all 3 nodes.
Thanks!
The real error is:
2021-05-27 16:46:35,539+0200 ERROR (virt/487072f9) [storage.VolumeManifest] [Errno 116] Stale file handle (fileVolume:172)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/fileVolume.py", line 170, in getMetadata
    data = self.oop.readFile(metaPath, direct=True)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/outOfProcess.py", line 369, in readFile
    return self._ioproc.readfile(path, direct=direct)
  File "/usr/lib/python3.6/site-packages/ioprocess/__init__.py", line 574, in readfile
    "direct": direct}, self.timeout)
  File "/usr/lib/python3.6/site-packages/ioprocess/__init__.py", line 479, in _sendCommand
    raise OSError(errcode, errstr)
OSError: [Errno 116] Stale file handle
2021-05-27 16:46:35,539+0200 INFO (virt/487072f9) [vdsm.api] FINISH prepareImage error=Error while processing volume meta data: ("('/rhev/data-center/mnt/glusterSD/onode1.example.org:_vmstore/3cf83851-1cc8-4f97-8960-08a60b9e25db/images/ad23c0db-1838-4f1f-811b-2b213d3a11cd/15259a3b-1065-4fb7-bc3c-04c5f4e14479',): [Errno 116] Stale file handle",) from=internal, task_id=67405e50-503c-4b44-822f-4a7cea33ab84 (api:52)
2021-05-27 16:46:35,539+0200 ERROR (virt/487072f9) [storage.TaskManager.Task] (Task='67405e50-503c-4b44-822f-4a7cea33ab84') Unexpected error (task:880)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/fileVolume.py", line 170, in getMetadata
    data = self.oop.readFile(metaPath, direct=True)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/outOfProcess.py", line 369, in readFile
    return self._ioproc.readfile(path, direct=direct)
  File "/usr/lib/python3.6/site-packages/ioprocess/__init__.py", line 574, in readfile
    "direct": direct}, self.timeout)
  File "/usr/lib/python3.6/site-packages/ioprocess/__init__.py", line 479, in _sendCommand
    raise OSError(errcode, errstr)
OSError: [Errno 116] Stale file handle
- [Errno 116] Stale file handle
I can also see you are using GlusterFS; maybe there is a bug there. At a
quick look I found
https://bugzilla.redhat.com/show_bug.cgi?id=1708121 which, from my
understanding, results in the same error on the file.
Kotresh, can you confirm whether I am right, and suggest how to work
around it? If not, can you point to who can look into it?
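For context, a minimal Python sketch of how a stale handle typically shows
up and how a retry-after-reopen workaround looks; `read_with_estale_retry`
is a hypothetical helper for illustration, not vdsm code:

```python
import errno


def read_with_estale_retry(path, retries=1):
    """Read a file, retrying if the handle comes back stale (ESTALE).

    On network filesystems such as GlusterFS or NFS, a file replaced on
    the server side can leave clients holding a stale handle; re-opening
    the path usually resolves to the current file. (Hypothetical helper
    for illustration only.)
    """
    for attempt in range(retries + 1):
        try:
            with open(path, "rb") as f:
                return f.read()
        except OSError as e:
            # Errno 116 (ESTALE) is the error seen in the vdsm log above.
            if e.errno == errno.ESTALE and attempt < retries:
                continue  # re-open the path and try again
            raise
```

Whether a retry helps depends on the gluster issue; if the handle stays
stale, a remount or fixing the underlying bug may be needed.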
Regards,
Liran.
On Thu, May 27, 2021 at 6:58 PM jb <jonbae77(a)gmail.com> wrote:
Hi Liran,
here are the vdsm logs, from all 3 nodes.
Regards
Jonathan
Am 27.05.21 um 17:21 schrieb Liran Rotenberg:
> On Thu, May 27, 2021 at 6:05 PM jb <jonbae77(a)gmail.com> wrote:
>> Hello Community,
>>
>> since I upgraded our cluster to ovirt 4.4.6.8-1.el8 I am no longer able
>> to create snapshots on certain VMs. For example, I have two Debian 10
>> VMs; from one I can take a snapshot, but from the other I cannot.
>>
>> Both are up to date and use the same qemu-guest-agent version.
>>
>> I tried creating snapshots via the API and the web GUI; both give the
>> same result.
>>
>> In the attachment you will find a snippet from the engine.log.
> Hi,
> The error happened in VDSM (or even the platform), but we need the VDSM
> log to see what is wrong.
>
> Regards,
> Liran.
>> Any help would be wonderful!
>>
>>
>> Regards,
>>
>> Jonathan
>>
>>
>>
>> _______________________________________________
>> Users mailing list -- users(a)ovirt.org
>> To unsubscribe send an email to users-leave(a)ovirt.org
>> Privacy Statement:
https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UKZYLZTC5ZS...