Hello again,
I'm quite sure that this is a bug!
Using up to date CentOS Stream 8 with
vdsm-4.40.100.2-1.el8
and oVirt 4.4.10.
The VM becomes unresponsive during power down if port
mirroring is activated in the vNIC profile of the VM.
The supervdsm.log on the host shows the following
error during power off.
This was working in < 4.4.10, so I would say that
this is a bug. Where should I report this?
MainProcess|libvirt/events::DEBUG::2022-03-25 08:44:39,670::supervdsm_server::95::SuperVdsm.ServerCallback::(wrapper) call unsetPortMirroring with ('probe_traffic', 'vnet2') {}
MainProcess|libvirt/events::DEBUG::2022-03-25 08:44:39,670::cmdutils::130::root::(exec_cmd) /sbin/tc filter show dev probe_traffic parent ffff: (cwd None)
MainProcess|libvirt/events::DEBUG::2022-03-25 08:44:39,678::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainProcess|libvirt/events::ERROR::2022-03-25 08:44:39,678::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) Error in unsetPortMirroring
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 97, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/__init__.py", line 104, in unsetPortMirroring
    acts = _delTarget(network, QDISC_INGRESS, target)
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/__init__.py", line 50, in _delTarget
    fs = list(filters(network, parent))
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/__init__.py", line 174, in filters
    for filt in _filters(dev, parent=parent, out=out):
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/__init__.py", line 199, in _iterate
    yield module.parse(tokens)
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/filter.py", line 99, in parse
    data[data['kind']] = _filter_cls_parser(tokens)
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/filter.py", line 119, in _parse_u32
    _parser.consume(tokens, '???')
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/_parser.py", line 36, in consume
    raise TCParseError('Found %s, expected %s' % (found, expected))
vdsm.network.tc._parser.TCParseError: Found not_in_hw, expected ('???',)
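For what it's worth, the traceback suggests vdsm's strict token parser trips over the `not_in_hw` flag that newer kernels/iproute2 print in `tc filter show` output. The following is an illustrative sketch of that failure mode, not vdsm's actual code (function names here are invented):

```python
# Illustrative sketch, assuming vdsm-style strict token parsing: newer
# tc versions append an informational "not_in_hw" flag to u32 filter
# lines, and a parser that whitelists expected keywords raises on it.

class TCParseError(Exception):
    pass

def consume_strict(tokens, *expected):
    # Strict behaviour, as in the traceback: any surprise token is fatal.
    found = next(tokens)
    if found not in expected:
        raise TCParseError('Found %s, expected %s' % (found, expected))
    return found

def consume_tolerant(tokens, *expected, skippable=('not_in_hw', 'in_hw')):
    # Tolerant variant: skip informational flags newer tc versions emit.
    for found in tokens:
        if found in skippable:
            continue
        if found not in expected:
            raise TCParseError('Found %s, expected %s' % (found, expected))
        return found
    raise TCParseError('Ran out of tokens, expected %s' % (expected,))
```

A fix along these lines (teaching the parser to skip the new flag) would belong in vdsm's tc parser; upgrading to a vdsm build that knows the flag is the proper fix.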
Best regards
Christoph
On 25.03.22 at 07:42, Christoph Timm wrote:
Good morning,
please note that I got a bit further with my
investigation.
It might be a bug in VDSM, as I see the following
while powering down the problematic VM:
MainProcess|libvirt/events::ERROR::2022-03-24 20:43:48,740::supervdsm_server::99::SuperVdsm.ServerCallback::(wrapper) Error in unsetPortMirroring
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 97, in wrapper
    res = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/__init__.py", line 104, in unsetPortMirroring
    acts = _delTarget(network, QDISC_INGRESS, target)
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/__init__.py", line 50, in _delTarget
    fs = list(filters(network, parent))
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/__init__.py", line 174, in filters
    for filt in _filters(dev, parent=parent, out=out):
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/__init__.py", line 199, in _iterate
    yield module.parse(tokens)
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/filter.py", line 99, in parse
    data[data['kind']] = _filter_cls_parser(tokens)
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/filter.py", line 119, in _parse_u32
    _parser.consume(tokens, '???')
  File "/usr/lib/python3.6/site-packages/vdsm/network/tc/_parser.py", line 36, in consume
    raise TCParseError('Found %s, expected %s' % (found, expected))
vdsm.network.tc._parser.TCParseError: Found not_in_hw, expected ('???',)
I already managed to find out that this is related to the
specific vNIC configuration which uses the port
mirroring flag.
My plan is to remove and re-add the port mirror on the vNIC
as the first step. After that I will recreate the vNIC.
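If re-adding the mirror via the UI does not clean things up, the stale ingress qdisc can also be dropped by hand. A minimal sketch, assuming the mirror source network is `probe_traffic` as in the log above; deleting the ingress qdisc removes every filter under parent ffff: without parsing any `tc filter show` output, which is exactly the step the traceback shows failing:

```python
# Hedged sketch of a manual cleanup; the device name is taken from the
# log above and may differ on your host.  Must be run as root.
import subprocess

def cleanup_cmd(dev):
    # Deleting the ingress qdisc drops all filters under parent ffff:,
    # so no fragile parsing of tc output is needed.
    return ['/sbin/tc', 'qdisc', 'del', 'dev', dev, 'ingress']

def cleanup(dev, run=subprocess.run):
    # check=False: an already-missing ingress qdisc is fine here.
    run(cleanup_cmd(dev), check=False)
```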
Any recommendations?
Best regards
Christoph
On 24.03.22 at 12:31, Christoph Timm wrote:
Hi List,
I receive the following error while starting some of
our VMs after the upgrade to 4.4.10.
VM v4-probe is down with error. Exit message: internal error: process exited while connecting to monitor: 2022-03-24T11:27:23.098838Z qemu-kvm: -blockdev {"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":"libvirt-3-format"}: Failed to get "write" lock
Is another process using the image [/rhev/data-center/mnt/lxskinner:_exports_skinner__2tb__1/28da0c79-b6f1-4740-bd24-e7bafcb62c75/images/90b28e7e-98a3-4f80-a136-c5a52c4a3e05/c1450651-b1e5-47fd-906b-8ebd62ace8a4]?
The example disk is located on an NFS share.
Any idea how to tell oVirt that there is no other
process using this disk?
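qemu-kvm holds POSIX byte-range locks on the image file while it runs, so one can probe whether any process actually holds a conflicting lock. The sketch below is an assumption-laden illustration (it takes a whole-file lock rather than qemu's exact byte ranges, and lock visibility over NFS depends on the NFS lock manager working correctly); only probe an image whose VM is supposed to be down:

```python
# Hedged sketch: try a non-blocking exclusive lock on the image.  If
# another process holds a conflicting POSIX lock (as qemu-kvm does while
# the VM runs), lockf raises OSError.  The lock is released immediately,
# but still only probe images of VMs that should be down.
import fcntl
import os

def image_is_write_locked(path):
    fd = os.open(path, os.O_RDWR)
    try:
        try:
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError:
            return True   # some other process holds a conflicting lock
        fcntl.lockf(fd, fcntl.LOCK_UN)
        return False
    finally:
        os.close(fd)
```

If no conflicting lock is reported, the "Failed to get 'write' lock" may stem from a stale lock on the NFS server side rather than a live process on the host.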
Best regards and thx
Christoph
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SIDIS42UA5DUTS3GF5W7XBQFCNVMN3YG/