host-deploy failed at Reconfigure vdsm tool
by Yedidyah Bar David
Hi all,
I tried yesterday to deploy hosted-engine on the current master snapshot; it
failed during host-deploy. The host-deploy log ends with:
2020-01-27 15:25:11 IST - TASK [ovirt-host-deploy-vdsm : Reconfigure vdsm tool] **************************
/var/log/messages on the host has:
Jan 27 15:25:24 didi-centos8-host python3[12352]: detected unhandled Python exception in '/usr/bin/vdsm-tool'
/var/log/httpd/ansible_runner_error_log on the engine has (I have to get
used to searching there; I grepped all of /var/log to find it):
[Mon Jan 27 15:25:25.692397 2020] [wsgi:error] [pid 30029] cb_event_handler
event_data={u'event': u'runner_on_failed',
 u'uuid': u'874e5fcc-12e8-419c-b010-d958adfc4aec',
 'stdout': u'fatal: [didi-centos8-host.lab.eng.tlv2.redhat.com]: FAILED! => {
   "changed": true,
   "cmd": "vdsm-tool configure --force",
   "delta": "0:00:15.833532",
   "end": "2020-01-27 15:25:25.296212",
   "msg": "non-zero return code",
   "rc": 1,
   "start": "2020-01-27 15:25:09.462680",
   "stderr": "
Error: Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 209, in main
    return tool_command[cmd]["command"](*args)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py", line 40, in wrapper
    func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py", line 149, in configure
    service.service_start(s)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/service.py", line 193, in service_start
    return _runAlts(_srvStartAlts, srvName)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/service.py", line 172, in _runAlts
    "%s failed" % alt.__name__, out, err)
vdsm.tool.service.ServiceOperationError: <exception str() failed>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 224, in <module>
    sys.exit(main())
  File "/usr/bin/vdsm-tool", line 214, in main
    print('Error: ', e, '\n', file=sys.stderr)
  File "/usr/lib/python3.6/site-packages/vdsm/tool/service.py", line 75, in __str__
    return '\n'.join(s)
TypeError: sequence item 1: expected str instance, bytes found
",
   "stderr_lines": [the same traceback, split into lines],
   "stdout": "
Checking configuration status...

Current revision of multipath.conf detected, preserving
abrt is not configured for vdsm
libvirtd socket units status: [
 {'Names': 'libvirtd-tls.socket', 'LoadState': 'loaded'},
 {'Names': 'libvirtd-tcp.socket', 'LoadState': 'masked'},
 {'Names': 'libvirtd-ro.socket', 'LoadState': 'masked'},
 {'Names': 'libvirtd-admin.socket', 'LoadState': 'masked'},
 {'Names': 'libvirtd.socket', 'LoadState': 'masked'}]
libvirtd doesn't use systemd socket activation - one or more of its socket units have been masked
libvirt is not configured for vdsm yet
FAILED: conflicting vdsm and libvirt-qemu tls configuration.
vdsm.conf with ssl=True requires the following changes:
libvirtd.conf: listen_tcp=0, auth_tcp="sasl", listen_tls=1
qemu.conf: spice_tls=1.
lvm is configured for vdsm
Managed volume database is already configured

Running configure...
Reconfiguration of passwd is done.
Reconfiguration of abrt is done.
libvirtd socket units status: [same list as above]
libvirtd doesn't use systemd socket activation - one or more of its socket units have been masked
libvirtd socket units status: [same list as above]
libvirtd doesn't use systemd socket activation - one or more of its socket units have been masked
Reconfiguration of libvirt is done.
",
   "stdout_lines": [the same output, split into lines]}',
 'counter': 40, u'pid': 31685, u'created': u'2020-01-27T13:25:25.683262',
 'end_line': 35, 'runner_ident': '5b95ec42-4108-11ea-8afe-001a4a16027a',
 'start_line': 34,
 u'event_data': {u'play_pattern': u'all', u'play': u'all',
  u'event_loop': None, u'task_args': u'',
  u'remote_addr': u'didi-centos8-host.lab.eng.tlv2.redhat.com',
  u'res': {u'stderr_lines': [u'Error: Traceback (most recent call last):',
   u'  File "/usr/bin/vdsm-tool", line 209, in main',
   u'    return tool_command[cmd]["command"](*args)',
   u'  File "/usr/lib/python3.6/site-packages/vdsm/tool/__init__.py", line 40, in wrapper',
   u'    func(*args, **kwargs)',
   u'  File "/usr/lib/python3.6/site-packages/vdsm/tool/configurator.py", line 149, in configure',
   u'    service.service_start(s)',
   u'  File "/usr/lib/python3.6/site-
[Mon Jan 27 15:25:25.693655 2020] [wsgi:error] [pid 30029] cb_event_handler
event_data={u'event': u'playbook_on_stats',
 u'uuid': u'2c8d843c-c1ad-41c0-940e-88354387b725',
 'stdout': u'
PLAY RECAP *********************************************************************
didi-centos8-host.lab.eng.tlv2.redhat.com : ok=11 changed=0 unreachable=0 failed=1 skipped=1 rescued=0 ignored=0
',
 'counter': 41, u'pid': 31685, u'created': u'2020-01-27T13:25:25.688541',
 'end_line': 39, 'runner_ident': '5b95ec42-4108-11ea-8afe-001a4a16027a',
 'start_line': 35,
 u'event_data': {u'ignored': {},
  u'skipped': {u'didi-centos8-host.lab.eng.tlv2.redhat.com': 1},
  u'ok': {u'didi-centos8-host.lab.eng.tlv2.redhat.com': 11},
  u'artifact_data': {}, u'rescued': {}, u'changed': {}, u'pid': 31685,
  u'dark': {}, u'playbook_uuid': u'ff508986-9930-47a2-b17b-61a2f2dae905',
  u'playbook': u'ovirt-host-deploy.yml',
  u'failures': {u'didi-centos8-host.lab.eng.tlv2.redhat.com': 1},
  u'processed': {u'didi-centos8-host.lab.eng.tlv2.redhat.com': 1}},
 u'parent_uuid': u'ff508986-9930-47a2-b17b-61a2f2dae905'}
If I now manually run 'vdsm-tool configure --force' on the host, it does
not fail.
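Note that whatever service_start actually complained about got swallowed by a
secondary bug: ServiceOperationError.__str__ tries to '\n'.join() a list that
mixes str and bytes, so vdsm-tool dies while printing the real error. A
minimal sketch of that secondary TypeError, assuming the exception is built
from the command's raw out/err (bytes on Python 3); the variable names below
are illustrative, not the actual service.py code:

parts = ["service_start failed", b"some stderr output from the service"]
try:
    print('\n'.join(parts))
except TypeError as exc:
    print(exc)  # sequence item 1: expected str instance, bytes found

# A hedged fix: decode bytes before joining, so the real error gets printed.
print('\n'.join(p.decode('utf-8', 'replace') if isinstance(p, bytes) else p
                for p in parts))

If something along those lines were applied in service.py's __str__, the
underlying service failure would at least show up in the host-deploy log.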
I am now going to try deploying again after reverting to a snapshot I took
before the deploy.
Best regards,
--
Didi
Implicit Affinity Label checkbox
by scott.fitzgerald@oracle.com
Hi,
Is it possible to reach the Implicit Affinity Label checkbox through the SDK? When unchecked, the label seems to have zero effect.
Kind regards,
Scott
Task Start and enable services failed to execute
by Yedidyah Bar David
Hi all,
Tried 'hosted-engine --deploy' now on a fully updated CentOS
8 / ovirt-master-snapshot machine. It failed while adding the host to
the engine. engine.log has:
2020-01-26 10:41:47,825+02 ERROR
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [11ba00a7] Host installation
failed for host 'efd6cb8a-935d-4812-b35c-3fbde5651b5a',
'didi-centos8-host.lab.eng.tlv2.redhat.com': Task Start and enable
services failed to execute:
2020-01-26 10:41:47,836+02 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [11ba00a7] START,
SetVdsStatusVDSCommand(HostName =
didi-centos8-host.lab.eng.tlv2.redhat.com,
SetVdsStatusVDSCommandParameters:{hostId='efd6cb8a-935d-4812-b35c-3fbde5651b5a',
status='InstallFailed', nonOperationalReason='NONE',
stopSpmFailureLogged='false', maintenanceReason='null'}), log id:
4f107d5d
2020-01-26 10:41:47,901+02 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-1) [11ba00a7] FINISH,
SetVdsStatusVDSCommand, return: , log id: 4f107d5d
2020-01-26 10:41:48,002+02 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-1) [11ba00a7] EVENT_ID:
VDS_INSTALL_FAILED(505), Host
didi-centos8-host.lab.eng.tlv2.redhat.com installation failed. Task
Start and enable services failed to execute: .
The code emitting this error seems to be in
backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/common/utils/ansible/AnsibleRunnerHTTPClient.java
:
String.format("Task %1$s failed to
execute: %2$s", task, "") // stdout, stderr?
Seems like someone considered also logging stdout/stderr but didn't make up
their mind. Is this tracked somewhere?
ovirt-host-deploy-ansible-20200126103902-didi-centos8-host.lab.eng.tlv2.redhat.com-11ba00a7.log
has:
2020-01-26 10:41:38 IST - TASK [ovirt-host-deploy-vdsm : Start and enable services] **********************
2020-01-26 10:41:47 IST -
2020-01-26 10:41:47 IST - {
"status" : "OK",
"msg" : "",
"data" : {
"event" : "runner_on_failed",
...
"msg" : "Unable to start service vdsmd.service: Failed to
start vdsmd.service: Unit libvirtd-tcp.socket is masked.\n",
"_ansible_item_label" : "vdsmd.service"
'systemctl status libvirtd-tcp.socket' indeed still says it's masked. The package is:
# rpm -qif /usr/lib/systemd/system/libvirtd-tcp.socket
Name : libvirt-daemon
Version : 5.6.0
Release : 6.el8
Architecture: x86_64
Install Date: Mon 20 Jan 2020 08:23:12 AM IST
Group : Unspecified
Size : 1320922
License : LGPLv2+
Signature : RSA/SHA1, Wed 08 Jan 2020 11:06:38 AM IST, Key ID 695b5f7eff3e3445
Source RPM : libvirt-5.6.0-6.el8.src.rpm
Build Date : Wed 08 Jan 2020 11:06:04 AM IST
Build Host : copr-builder-156909441.novalocal
Relocations : (not relocatable)
URL : https://libvirt.org/
Summary : Server side daemon and supporting files for libvirt library
Description :
Server side daemon required to manage the virtualization capabilities
of recent versions of Linux. Requires a hypervisor specific sub-RPM
for specific drivers.
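For the record, a hedged sketch (plain Python, nothing vdsm-specific) to dump
the LoadState of all the libvirtd socket units in one go, mirroring what
vdsm-tool prints during configure:

import subprocess

# Print the LoadState of each libvirtd socket unit; per the ansible error
# above, a masked libvirtd-tcp.socket is what makes starting vdsmd.service fail.
UNITS = ["libvirtd.socket", "libvirtd-ro.socket", "libvirtd-admin.socket",
         "libvirtd-tls.socket", "libvirtd-tcp.socket"]
for unit in UNITS:
    state = subprocess.check_output(
        ["systemctl", "show", "-p", "LoadState", "--value", unit]
    ).decode().strip()
    print(unit, state)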
Known issue?
Thanks and best regards,
--
Didi
Download Disk via SDK
by scott.fitzgerald@oracle.com
The diskService in the SDK has access to Move, Export, Remove, etc. However, I don't see a way to send a download request on the disk. Does this functionality exist?
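For what it's worth, disk downloads are normally driven through the image
transfer service rather than a method on the disk itself. A hedged sketch with
the Python SDK (ovirtsdk4); the connection details and disk id are
placeholders, and the attribute names roughly follow the upstream SDK download
examples, so please double-check them against your SDK version:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Hypothetical engine connection details.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Ask the engine to start a download transfer for the disk (placeholder id).
transfers_service = connection.system_service().image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(
        disk=types.Disk(id='123e4567-e89b-12d3-a456-426614174000'),
        direction=types.ImageTransferDirection.DOWNLOAD,
    )
)
transfer_service = transfers_service.image_transfer_service(transfer.id)

# Once the transfer reaches the TRANSFERRING phase, its transfer_url (or
# proxy_url) can be read over HTTPS to fetch the disk contents; finalize
# the transfer when done:
# transfer_service.finalize()

connection.close()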
type object 'LinuxBridge' has no attribute 'STP' (was: [CQ]: 66276a7 (ovirt-ansible-hosted-engine-setup) failed "ovirt-master" system tests)
by Yedidyah Bar David
Resending and adding devel.
This now happened to me again. I suspect this affects other runs. Any clue?
https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/7692/
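The AttributeError suggests that vdsm's nmstate code and the libnmstate schema
installed on the host disagree about the LinuxBridge class. A quick hedged
check on the host of what the installed schema actually exposes (assuming
LinuxBridge comes from libnmstate.schema, which is where vdsm's nmstate.py
appears to take it from):

from libnmstate.schema import LinuxBridge

# List every STP-related constant the installed schema class provides, and
# show whether an 'STP' subtree exists at all.
print([name for name in dir(LinuxBridge) if 'STP' in name.upper()])
print(getattr(LinuxBridge, 'STP', None))

If nothing STP-related shows up there, the installed nmstate package is out of
sync with what this vdsm expects.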
On Tue, Jan 21, 2020 at 12:34 PM Yedidyah Bar David <didi(a)redhat.com> wrote:
>
> On Tue, Jan 21, 2020 at 1:18 AM oVirt Jenkins <jenkins(a)ovirt.org> wrote:
> >
> > Change 66276a7 (ovirt-ansible-hosted-engine-setup) is probably the reason
> > behind recent system test failures in the "ovirt-master" change queue and needs
> > to be fixed.
> >
> > This change had been removed from the testing queue. Artifacts build from this
> > change will not be released until it is fixed.
> >
> > For further details about the change see:
> > https://github.com/oVirt/ovirt-ansible-hosted-engine-setup/commit/66276a7...
>
> The above change is unrelated to the failure below. How can I make CQ look
> at it again?
>
> >
> > For failed test results see:
> > https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/18189/
>
> This failed in basic suite, in 002_bootstrap.verify_add_hosts.
>
> engine.log [1] has (e.g.):
>
> 2020-01-20 17:56:33,198-05 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (EE-ManagedThreadFactory-engine-Thread-1) [66755eba] EVENT_ID:
> VDS_BROKER_COMMAND_FAILURE(10,802), VDSM
> lago-basic-suite-master-host-0 command HostSetupNetworksVDS failed:
> Internal JSON-RPC error: {'reason': "type object 'LinuxBridge' has no
> attribute 'STP'"}
>
> supervdsm log [2] has:
>
> MainProcess|jsonrpc/4::INFO::2020-01-20
> 17:56:32,430::configurator::190::root::(_setup_nmstate) Processing
> setup through nmstate
> MainProcess|jsonrpc/4::ERROR::2020-01-20
> 17:56:32,695::supervdsm_server::97::SuperVdsm.ServerCallback::(wrapper)
> Error in setupNetworks
> Traceback (most recent call last):
> File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py",
> line 95, in wrapper
> res = func(*args, **kwargs)
> File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line
> 240, in setupNetworks
> _setup_networks(networks, bondings, options, net_info)
> File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line
> 265, in _setup_networks
> networks, bondings, options, net_info, in_rollback
> File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py",
> line 154, in setup
> _setup_nmstate(networks, bondings, options, in_rollback, net_info)
> File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py",
> line 195, in _setup_nmstate
> desired_state = nmstate.generate_state(networks, bondings)
> File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate.py",
> line 73, in generate_state
> networks, rconfig.networks, current_ifaces_state
> File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate.py",
> line 603, in generate_state
> for netname, netattrs in six.viewitems(networks)
> File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate.py",
> line 603, in <listcomp>
> for netname, netattrs in six.viewitems(networks)
> File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate.py",
> line 339, in __init__
> self._create_interfaces_state()
> File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate.py",
> line 430, in _create_interfaces_state
> sb_iface, vlan_iface, bridge_iface = self._create_ifaces()
> File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate.py",
> line 444, in _create_ifaces
> options=self._create_bridge_options(),
> File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate.py",
> line 492, in _create_bridge_options
> LinuxBridge.STP.ENABLED: self._netconf.stp
> AttributeError: type object 'LinuxBridge' has no attribute 'STP'
>
> Perhaps that's related to recent changes adding/updating nmstate?
>
> [1] https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/18189/arti...
> [2] https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/18189/arti...
>
>
>
>
>
> --
> Didi
--
Didi
basic suite 4.3 failed in live_storage_migration
by Yedidyah Bar David
Hi all,
Please see [1][2].
lago.log [3]:
2020-01-20 11:55:16,647::utils.py::_ret_via_queue::63::lago.utils::DEBUG::Error
while running thread Thread-72
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
_ret_via_queue
queue.put({'return': func()})
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
60, in wrapper
return func(get_test_prefix(), *args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
79, in wrapper
prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
File "/home/jenkins/agent/workspace/ovirt-system-tests_standard-check-patch/ovirt-system-tests/basic-suite-4.3/test-scenarios/004_basic_sanity.py",
line 514, in live_storage_migration
lambda: api.follow_link(disk_service.get().storage_domains[0]).name
== SD_ISCSI_NAME
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
286, in assert_true_within_long
assert_equals_within_long(func, True, allowed_exceptions)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
273, in assert_equals_within_long
func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line
252, in assert_equals_within
'%s != %s after %s seconds' % (res, value, timeout)
AssertionError: False != True after 600 seconds
Not sure, but this might be related:
engine.log [4]:
2020-01-20 06:45:13,991-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-92)
[ae1529f8-3e05-4bc9-b3bf-d058f45dfb2b] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
VmReplicateDiskFinishVDS, error = General Exception: ('Timed out
during operation: cannot acquire state change lock (held by
monitor=remoteDispatchDomainGetBlockInfo)',), code = 100
vdsm.log [5]:
2020-01-20 06:45:13,940-0500 ERROR (jsonrpc/1) [api] FINISH
diskReplicateFinish error=Timed out during operation: cannot acquire
state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
(api:134)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line
124, in method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 580, in
diskReplicateFinish
return self.vm.diskReplicateFinish(srcDisk, dstDisk)
File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4650,
in diskReplicateFinish
blkJobInfo = self._dom.blockJobInfo(drive.name, 0)
File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 108, in f
raise toe
TimeoutError: Timed out during operation: cannot acquire state change
lock (held by monitor=remoteDispatchDomainGetBlockInfo)
I looked a bit and can't find the root cause.
Thanks and best regards,
[1] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/7684/
[2] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/768...
[3] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/768...
[4] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/768...
[5] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/768...
--
Didi
migration on migration network failed
by Dominik Holler
Hello,
is live migration on ovirt-master via a migration network currently expected
to work?
Currently, it fails in OST network-suite-master with
2020-01-21 22:49:20,991-0500 INFO (jsonrpc/2) [api.virt] START
migrate(params={'abortOnError': 'true', 'autoConverge': 'true', 'dst': '
192.168.201.3:54321', 'method': 'online', 'vmId':
'736dea3b-64be-427f-9ebf-d1e758b6f68e', 'src': '192.168.201.4', 'dstqemu':
'192.0.3.1', 'convergenceSchedule': {'init': [{'name': 'setDowntime',
'params': ['100']}], 'stalling': [{'limit': 1, 'action': {'name':
'setDowntime', 'params': ['150']}}, {'limit': 2, 'action': {'name':
'setDowntime', 'params': ['200']}}, {'limit': 3, 'action': {'name':
'setDowntime', 'params': ['300']}}, {'limit': 4, 'action': {'name':
'setDowntime', 'params': ['400']}}, {'limit': 6, 'action': {'name':
'setDowntime', 'params': ['500']}}, {'limit': -1, 'action': {'name':
'abort', 'params': []}}]}, 'outgoingLimit': 2, 'enableGuestEvents': True,
'tunneled': 'false', 'encrypted': False, 'compressed': 'false',
'incomingLimit': 2}) from=::ffff:192.168.201.2,43782,
flow_id=5c0e0e0a-8d5f-4b66-bda3-acca1e626a41,
vmId=736dea3b-64be-427f-9ebf-d1e758b6f68e (api:48)
2020-01-21 22:49:20,997-0500 INFO (jsonrpc/2) [api.virt] FINISH migrate
return={'status': {'code': 0, 'message': 'Migration in progress'},
'progress': 0} from=::ffff:192.168.201.2,43782,
flow_id=5c0e0e0a-8d5f-4b66-bda3-acca1e626a41,
vmId=736dea3b-64be-427f-9ebf-d1e758b6f68e (api:54)
2020-01-21 22:49:20,997-0500 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
call VM.migrate succeeded in 0.01 seconds (__init__:312)
2020-01-21 22:49:21,099-0500 INFO (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Migration semaphore:
acquiring (migration:405)
2020-01-21 22:49:21,099-0500 INFO (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Migration semaphore: acquired
(migration:407)
2020-01-21 22:49:21,837-0500 INFO (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Creation of destination VM
took: 0 seconds (migration:459)
2020-01-21 22:49:21,838-0500 INFO (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') starting migration to
qemu+tls://192.168.201.3/system with miguri tcp://192.0.3.1 (migration:525)
2020-01-21 22:49:21,870-0500 ERROR (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') operation failed: Failed to
connect to remote libvirt URI qemu+tls://192.168.201.3/system: unable to
connect to server at '192.168.201.3:16514': Connection refused
(migration:278)
2020-01-21 22:49:22,816-0500 ERROR (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Failed to migrate
(migration:441)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 422,
in _regular_run
time.time(), machineParams
File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 528,
in _startUnderlyingMigration
self._perform_with_conv_schedule(duri, muri)
File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 609,
in _perform_with_conv_schedule
self._perform_migration(duri, muri)
File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 545,
in _perform_migration
self._migration_flags)
File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101,
in f
ret = attr(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
line 131, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94,
in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1838, in
migrateToURI3
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
dom=self)
libvirt.libvirtError: operation failed: Failed to connect to remote libvirt
URI qemu+tls://192.168.201.3/system: unable to connect to server at '
192.168.201.3:16514': Connection refused
2020-01-21 22:49:22,989-0500 INFO (jsonrpc/7) [api.host] START
getAllVmStats() from=::ffff:192.168.201.2,43782 (api:48)
2020-01-21 22:49:22,992-0500 INFO (jsonrpc/7) [throttled] Current
getAllVmStats: {'736dea3b-64be-427f-9ebf-d1e758b6f68e': 'Up'}
(throttledlog:104)
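Note that the failure is on the control connection (qemu+tls://192.168.201.3),
not on the migration-network address (dstqemu=192.0.3.1), which never gets
used because the control connection is refused first. A hedged probe from the
source host to confirm that, 16514 being libvirt's TLS port:

import socket

# Check whether anything is listening on the destination's libvirt TLS port.
s = socket.socket()
s.settimeout(5)
try:
    s.connect(("192.168.201.3", 16514))
    print("192.168.201.3:16514 reachable")
except OSError as exc:
    print("192.168.201.3:16514 ->", exc)
finally:
    s.close()

If that is refused, the destination host's libvirtd is simply not accepting
TLS connections (for example, libvirtd-tls.socket not enabled), which would
break this migration regardless of the migration network.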
Please find details in
https://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-master/1245/
.
Dominik
Promotion of Eyal Shenitzky to RHV Storage team lead
by Tal Nisan
Hello,
Today I have the pleasure of announcing Eyal Shenitzky's promotion to team
lead in the RHV Storage team.
Eyal joined our team in August 2017 and has since become an integral part
of the team, playing a key role in features such as DR, Cinderlib,
Incremental Backup, and LSM, among others.
For the last few months Eyal has also taken on the role of the team's scrum
master, maintaining our team's Kanban board and tasks.
Please join me in congratulating Eyal!