Re: [ovirt-users] [ovirt 4.2] vdsm host was shut down unexpectedly; configuration of some VMs was changed or lost when the host was powered back on and the VMs were started
by Strahil
Hi,
If the VMs start properly and they don't use cloud-init, then the issue is not oVirt-specific but guest-specific (Linux or Windows, depending on the guest OS).
So you should check:
1. Does your host have any networks out of sync (host's Network tab)? If yes, put the host into maintenance and fix the issue there.
2. Check each VM's configuration to see whether it is defined to use cloud-init; if yes, verify that the cloud-init service is running on the guest.
3. Verify each problematic guest's network settings. If needed, set a static IP and try to ping another IP on the same subnet/VLAN. A sketch of how such an IP is typically set through the REST API follows below.
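For reference, here is a minimal sketch of how a static IP is typically pushed to a VM through the oVirt REST API with the Python SDK (ovirtsdk4). This is an illustration, not the poster's actual script: the engine URL, credentials, NIC name and netmask are placeholders. Note that initialization settings sent this way are applied by cloud-init only on the start they are passed to; they are not persisted inside the guest, which is consistent with a guest falling back to the template's IP.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: engine URL, credentials and CA file are not from the report.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=zzh_Chain49_ACG_M')[0]

# Start the VM with a one-shot cloud-init payload carrying the static IP.
vms_service.vm_service(vm.id).start(
    use_cloud_init=True,
    vm=types.Vm(
        initialization=types.Initialization(
            nic_configurations=[
                types.NicConfiguration(
                    name='eth0',                          # placeholder NIC name
                    on_boot=True,
                    boot_protocol=types.BootProtocol.STATIC,
                    ip=types.Ip(
                        address='20.1.1.219',
                        netmask='255.255.255.0',          # placeholder netmask
                    ),
                )
            ]
        )
    ),
)
connection.close()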
Best Regards,
Strahil Nikolov
On Dec 25, 2019 11:41, lifuqiong(a)sunyainfo.com wrote:
>>
>>
>>> Dear All:
>>> My oVirt engine managed two VDSM hosts with NFS storage on a separate NFS server, and it worked fine for about three months.
>>> About 16 VMs had been created on one of the hosts (host_1.3; IP 172.18.1.3), but host_1.3 was shut down unexpectedly around 2019-12-23 16:11. When the host and VMs were restarted, half of the VMs had lost or changed some of their configuration, such as their IPs (the VM is named 'zzh_Chain49_ACG_M' in vdsm.log).
>>>
>>> The VM zzh_Chain49_ACG_M was created from a template through the REST API; the template's IP is 20.1.1.161. The VM's IP was then changed to 20.1.1.219 via the REST API, but the IP reverted to the template's IP when the accident happened.
>>>
>>> The VM's OS is CentOS.
>>>
>>> Hope to get help from you soon. Thank you.
>>>
>>> Mark
>>> Sincerely.
4 years, 11 months
[ovirt 4.2] vdsm host was shut down unexpectedly; configuration of some VMs was changed or lost when the host was powered back on and the VMs were started
by lifuqiong@sunyainfo.com
Dear All:
My oVirt engine managed two VDSM hosts with NFS storage on a separate NFS server, and it worked fine for about three months.
About 16 VMs had been created on one of the hosts (host_1.3; IP 172.18.1.3), but host_1.3 was shut down unexpectedly around 2019-12-23 16:11. When the host and VMs were restarted, half of the VMs had lost or changed some of their configuration, such as their IPs (the VM is named 'zzh_Chain49_ACG_M' in vdsm.log).
The VM zzh_Chain49_ACG_M was created from a template through the REST API; the template's IP is 20.1.1.161. The VM's IP was then changed to 20.1.1.219 via the REST API, but the IP reverted to the template's IP when the accident happened.
The VM's OS is CentOS.
Hope to get help from you soon. Thank you.
Mark
Sincerely.
4 years, 11 months
[oVirt UI GWT] issue with "EntityModelCheckBoxEditor"
by Prajith Kesava Prasad
Hi,
While implementing a checkbox editor in the ovirt-engine code, I am facing an issue. The checkbox shows up fine in the UI after I added the appropriate popup view and its corresponding Java file, and I have also set the getter and setter methods in the matching model class. However, I cannot get the checkbox's value change to register no matter what I try; I have tried every approach I could think of, including looking at how other checkbox editors are implemented, but no luck.
No matter whether the checkbox is checked or unchecked, its value is read back as False. Could someone help me with this?
Thanks & Regards,
Prajith
4 years, 11 months
Re: [CQ]: 105811,4 (ovirt-engine) failed "ovirt-master" system tests
by Yedidyah Bar David
On Thu, Dec 19, 2019 at 3:07 AM oVirt Jenkins <jenkins(a)ovirt.org> wrote:
>
> Change 105811,4 (ovirt-engine) is probably the reason behind recent system test
> failures in the "ovirt-master" change queue and needs to be fixed.
>
> This change had been removed from the testing queue. Artifacts build from this
> change will not be released until it is fixed.
>
> For further details about the change see:
> https://gerrit.ovirt.org/#/c/105811/4
>
> For failed test results see:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/17803/
lago log [1] has:
2019-12-19 01:03:18,289::ssh.py::ssh::58::lago.ssh::DEBUG::Running
57dde5d6 on lago-basic-suite-master-engine: /usr/bin/systemctl stop
ovirt-engine
2019-12-19 01:03:20,379::ssh.py::ssh::81::lago.ssh::DEBUG::Command
57dde5d6 on lago-basic-suite-master-engine returned with 0
2019-12-19 01:03:20,381::log_utils.py::__enter__::600::lago.ssh::DEBUG::start
task:b30be6a0-81f5-4611-b638-2cc46f8dd28f:Get ssh client for
lago-basic-suite-master-engine:
2019-12-19 01:03:20,512::log_utils.py::__exit__::611::lago.ssh::DEBUG::end
task:b30be6a0-81f5-4611-b638-2cc46f8dd28f:Get ssh client for
lago-basic-suite-master-engine:
2019-12-19 01:03:20,777::ssh.py::ssh::58::lago.ssh::DEBUG::Running
5959a5f8 on lago-basic-suite-master-engine: /usr/bin/systemctl status
--lines=0 ovirt-engine
2019-12-19 01:03:20,828::ssh.py::ssh::81::lago.ssh::DEBUG::Command
5959a5f8 on lago-basic-suite-master-engine returned with 3
2019-12-19 01:03:20,828::ssh.py::ssh::89::lago.ssh::DEBUG::Command
5959a5f8 on lago-basic-suite-master-engine output:
● ovirt-engine.service - oVirt Engine
Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service;
enabled; vendor preset: disabled)
Active: inactive (dead) since Wed 2019-12-18 20:03:20 EST; 455ms ago
Process: 24166
ExecStart=/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py
--redirect-output --systemd=notify $EXTRA_ARGS start (code=exited,
status=0/SUCCESS)
Main PID: 24166 (code=exited, status=0/SUCCESS)
2019-12-19 01:03:20,829::log_utils.py::__enter__::600::lago.ssh::DEBUG::start
task:6b4dab7c-64f3-43e5-ae85-2fe6246d9376:Get ssh client for
lago-basic-suite-master-engine:
2019-12-19 01:03:20,998::log_utils.py::__exit__::611::lago.ssh::DEBUG::end
task:6b4dab7c-64f3-43e5-ae85-2fe6246d9376:Get ssh client for
lago-basic-suite-master-engine:
2019-12-19 01:03:21,214::ssh.py::ssh::58::lago.ssh::DEBUG::Running
599c5b96 on lago-basic-suite-master-engine: /usr/bin/systemctl start
ovirt-engine
2019-12-19 01:04:13,735::ssh.py::ssh::81::lago.ssh::DEBUG::Command
599c5b96 on lago-basic-suite-master-engine returned with 0
2019-12-19 01:04:13,737::log_utils.py::__enter__::600::lago.ssh::DEBUG::start
task:5b775d53-c53c-4d15-9cae-2806c44079b7:Get ssh client for
lago-basic-suite-master-engine:
2019-12-19 01:04:13,971::log_utils.py::__exit__::611::lago.ssh::DEBUG::end
task:5b775d53-c53c-4d15-9cae-2806c44079b7:Get ssh client for
lago-basic-suite-master-engine:
2019-12-19 01:04:14,488::ssh.py::ssh::58::lago.ssh::DEBUG::Running
795d5098 on lago-basic-suite-master-engine: /usr/bin/systemctl status
--lines=0 ovirt-engine
2019-12-19 01:04:15,039::ssh.py::ssh::81::lago.ssh::DEBUG::Command
795d5098 on lago-basic-suite-master-engine returned with 0
2019-12-19 01:04:15,040::ssh.py::ssh::89::lago.ssh::DEBUG::Command
795d5098 on lago-basic-suite-master-engine output:
● ovirt-engine.service - oVirt Engine
Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service;
enabled; vendor preset: disabled)
Active: active (running) since Wed 2019-12-18 20:04:13 EST; 1s ago
Main PID: 5718 (ovirt-engine.py)
CGroup: /system.slice/ovirt-engine.service
├─5718 /usr/bin/python2
/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py
--redirect-output --systemd=notify start
└─6073 ovirt-engine --add-modules java.se -server
-XX:+TieredCompilation -Xms1024M -Xmx1024M -Xss1M
-Djava.awt.headless=true -Dsun.rmi.dgc.client.gcInterval=3600000
-Dsun.rmi.dgc.server.gcInterval=3600000
-Djsse.enableSNIExtension=false -Dresteasy.preferJacksonOverJsonB=true
-Djackson.deserialization.whitelist.packages=org,com,java,javax
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/log/ovirt-engine/dump
-Djava.util.logging.manager=org.jboss.logmanager
-Dlogging.configuration=file:///var/lib/ovirt-engine/jboss_runtime/config/ovirt-engine-logging.properties
-Dorg.jboss.resolver.warning=true
-Djboss.modules.system.pkgs=org.jboss.byteman
-Djboss.server.default.config=ovirt-engine
-Djboss.home.dir=/usr/share/ovirt-engine-wildfly
-Djboss.server.base.dir=/usr/share/ovirt-engine
-Djboss.server.data.dir=/var/lib/ovirt-engine
-Djboss.server.log.dir=/var/log/ovirt-engine
-Djboss.server.config.dir=/var/lib/ovirt-engine/jboss_runtime/config
-Djboss.server.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp
-Djboss.controller.temp.dir=/var/lib/ovirt-engine/jboss_runtime/tmp
-jar /usr/share/ovirt-engine-wildfly/jboss-modules.jar -mp
/usr/share/ovirt-engine-wildfly-overlay/modules:/usr/share/ovirt-engine/modules/common:/usr/share/ovirt-engine-extension-aaa-jdbc/modules:/usr/share/ovirt-engine-extension-aaa-ldap/modules:/usr/share/ovirt-engine-wildfly/modules
-jaxpmodule javax.xml.jaxp-provider org.jboss.as.standalone -c
ovirt-engine.xml
2019-12-19 01:04:15,040::log_utils.py::end_log_task::670::nose::INFO::
# add_ldap_provider: Success (in 0:01:08)
2019-12-19 01:05:12,552::log_utils.py::start_log_task::655::nose::INFO::
# Failure: RuntimeError (test api call failed):
So probably the engine hadn't finished starting yet, and connecting to its API failed [2]. Perhaps, after starting it, wait until it's really up (by checking the API or the health page)? A sketch of such a wait follows below the links.
[1] https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/17803/arti...
[2] https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/17803/test...
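A minimal sketch of such a wait loop (assuming the engine's standard health servlet at /ovirt-engine/services/health; the host name, timeout and interval below are placeholders):

import time
import requests

def wait_for_engine(base_url, timeout=300, interval=5):
    # Poll the health page until it answers with 200, or give up.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            r = requests.get(base_url + '/ovirt-engine/services/health',
                             verify=False, timeout=10)
            if r.status_code == 200:
                return
        except requests.RequestException:
            pass  # engine not accepting connections yet
        time.sleep(interval)
    raise RuntimeError('engine not healthy after %s seconds' % timeout)

wait_for_engine('https://lago-basic-suite-master-engine')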
> _______________________________________________
> Infra mailing list -- infra(a)ovirt.org
> To unsubscribe send an email to infra-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/infra@ovirt.org/message/P3BLXLG63LC...
--
Didi
4 years, 11 months
vdsm [check-patch] failures - error with nmstate repo
by Liran Rotenberg
Hi,
The VDSM check-patch job keeps failing.
Console output taken from [2]:
*15:33:30* failure: repodata/repomd.xml from nmstate: [Errno 256] No more mirrors to try.
*15:33:30* https://copr-be.cloud.fedoraproject.org/results/nmstate/nmstate-git-fedor...: [Errno 14] HTTPS Error 404 - Not Found
*15:33:33* ERROR: Command failed:
*15:33:33* # /usr/bin/dnf --installroot /var/lib/mock/fedora-30-x86_64-b4b1fa43be798f48f41c1f0af664447e-6272/root/ --releasever 30 --setopt=deltarpm=False --allowerasing --disableplugin=local --disableplugin=spacewalk install @buildsys-build autoconf automake createrepo dnf dnf-utils e2fsprogs gcc gdb git iproute-tc iscsi-initiator-utils libguestfs-tools-c lshw make mom openvswitch ovirt-imageio-common python3-augeas python3-blivet python3-coverage python3-dateutil python3-dbus python3-decorator python3-devel python3-dmidecode python3-inotify python3-ioprocess-1.3.0 python3-libselinux python3-libvirt python3-magic python3-netaddr python3-nose python3-pip python3-policycoreutils python3-pyudev python3-pyyaml python3-requests python3-sanlock python3-six python3-yaml rpm-build rpmlint sudo xfsprogs --setopt=tsflags=nocontexts
*15:33:33* No matches found for the following disable plugin patterns: local, spacewalk
*15:33:33* Custom fc30-updates-debuginfo   1.8 MB/s |  13 MB  00:07
*15:33:33* Custom tested                    16 MB/s | 570 kB  00:00
*15:33:33* Custom vdo                       61 kB/s | 8.1 kB  00:00
*15:33:33* Custom virt-preview             770 kB/s | 139 kB  00:00
*15:33:33* Custom nmstate                  4.7 kB/s |  341 B  00:00
*15:33:33* Error: Failed to download metadata for repo 'nmstate': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
[1] https://jenkins.ovirt.org/job/vdsm_standard-check-patch/16055/
[2] https://jenkins.ovirt.org/job/vdsm_standard-check-patch/16058/
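The 404 on repomd.xml suggests the nmstate copr repository path baked into the mock configuration no longer exists on the server. As a quick sanity check, a sketch along these lines can probe whether a repo's metadata is still served (BASEURL is a placeholder; the real copr URL is truncated in the log above):

import requests

BASEURL = 'https://copr-be.cloud.fedoraproject.org/results/nmstate/...'  # placeholder

# A 404 here means the repo was removed or moved, not a transient mirror issue.
r = requests.head(BASEURL.rstrip('/') + '/repodata/repomd.xml', timeout=10)
print(r.status_code)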
4 years, 11 months
"Package does not exist" and "cannot find symbol" even though they exist
by Prajith Kesava Prasad
Hi,
During a make install build of oVirt, I sometimes get compilation errors, for example:
Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile (default-compile) on project uicommonweb: Compilation failure:
[ERROR] /home/vagrant/workspace/ovirt-engine/frontend/webadmin/modules/uicommonweb/src/main/java/org/ovirt/engine/ui/uicommonweb/models/hosts/HostListModel.java:[19,41] package org.ovirt.engine.core.bll.gluster does not exist
But the package does exist, and usually these errors go away after a couple of make clean installs; this time the error is not going away.
I was also curious whether anyone has found the root cause of such issues.
Thanks in advance,
Prajith
4 years, 11 months
Flaky jsonrpc test? looks like timeout
by Nir Soffer
I had this failure today that looks like a timeout, based on the test duration:
18.16s call
tests/lib/yajsonrpc/jsonrpcserver_test.py::JsonRpcServerTests::testMethodBadParameters(True)
Looking at the log, it smells like the known race when connecting to the server.
Also we see:
- ERROR vds.dispatcher:betterAsyncore.py:179 uncaptured python
exception, closing channel
This should never happen; it means we did not handle exceptions in some handler.
- WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
This should not happen; it means we did not implement handle_write() in some dispatcher.
See logs below.
Nir
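A common mitigation for this kind of connect race, sketched below (this is not vdsm's actual test helper; the host, port and timeouts are placeholders), is to poll until the server socket actually accepts connections before letting the client run:

import socket
import time

def wait_for_listener(host, port, timeout=10.0, interval=0.1):
    # Retry connecting until the listener is up, or raise the last error.
    deadline = time.monotonic() + timeout
    while True:
        try:
            socket.create_connection((host, port), timeout=1.0).close()
            return
        except OSError:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)

wait_for_listener('::1', 4321)  # placeholder address/port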
---
Build log:
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/15978//artifact/c...
Error:
=================================== FAILURES ===================================
_______________ JsonRpcServerTests.testMethodBadParameters(True) _______________
self = <jsonrpcserver_test.JsonRpcServerTests
testMethod=testMethodBadParameters(True)>
use_ssl = True
@permutations(USE_SSL)
def testMethodBadParameters(self, use_ssl):
# Without a schema the server returns an internal error
ssl_ctx = self.ssl_ctx if use_ssl else None
bridge = _DummyBridge()
with constructClient(self.log, bridge, ssl_ctx) as clientFactory:
with self._client(clientFactory) as client:
with self.assertRaises(JsonRpcErrorBase) as cm:
self._callTimeout(client, "echo", [],
CALL_ID)
self.assertEqual(cm.exception.code,
> JsonRpcInternalError().code)
E AssertionError: -32605 != -32603
lib/yajsonrpc/jsonrpcserver_test.py:209: AssertionError
------------------------------ Captured log call -------------------------------
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
ERROR vds.dispatcher:betterAsyncore.py:179 uncaptured python
exception, closing channel <yajsonrpc.betterAsyncore.Dispatcher
('::1', 40156, 0, 0) at 0x7f842cd7d7f0> (<class
'ValueError'>:'b'ept-version:1.2'' contains illegal character ':'
[/usr/lib64/python3.6/asyncore.py|readwrite|108]
[/usr/lib64/python3.6/asyncore.py|handle_read_event|423]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|handle_read|71]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|_delegate_call|168]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/protocoldetector.py|handle_read|129]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py|handle_socket|413]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/rpc/bindingjsonrpc.py|add_socket|54]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py|createListener|379]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py|StompListener|345]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|__init__|47]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|switch_implementation|86]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py|init|363]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/rpc/bindingjsonrpc.py|_onAccept|57]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py|set_message_handler|645]
[/usr/lib64/python3.6/asyncore.py|handle_read_event|423]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|handle_read|71]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|_delegate_call|168]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py|handle_read|421]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py|parse|323]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py|_parse_command|245]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py|decode_value|167])
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled close event
=============================== warnings summary ===============================
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334:
PytestUnknownMarkWarning: Unknown pytest.mark.stress - is this a typo?
You can register custom marks to avoid this warning - for details,
see https://docs.pytest.org/en/latest/mark.html
PytestUnknownMarkWarning,
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334:
PytestUnknownMarkWarning: Unknown pytest.mark.slow - is this a typo?
You can register custom marks to avoid this warning - for details, see
https://docs.pytest.org/en/latest/mark.html
PytestUnknownMarkWarning,
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/network/models.py:442
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/network/models.py:442:
DeprecationWarning: invalid escape sequence \D
nicsRexp = re.compile("^(\D*)(\d*)(.*)$")
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/network/link/validator.py:38
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/network/link/validator.py:38:
DeprecationWarning: invalid escape sequence \w
bond for bond in bonds if not re.match('^bond\w+$', bond)
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/network/link/validator.py:44
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/network/link/validator.py:44:
DeprecationWarning: invalid escape sequence \w
and not re.match('^bond\w+$', net_attrs['bonding'])
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/v2v.py:78
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/v2v.py:78:
DeprecationWarning: invalid escape sequence \d
_SSH_AUTH_RE = b'(SSH_AUTH_SOCK)=([^;]+).*;\nSSH_AGENT_PID=(\d+)'
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/v2v.py:1420
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/v2v.py:1420:
DeprecationWarning: invalid escape sequence \^
'(?P<m2_base>[0-9]+){sp}\^{sp}(?P<m2_exp>{exp}))'.format(
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334:
PytestUnknownMarkWarning: Unknown pytest.mark.xpass - is this a typo?
You can register custom marks to avoid this warning - for details, see
https://docs.pytest.org/en/latest/mark.html
PytestUnknownMarkWarning,
-- Docs: https://docs.pytest.org/en/latest/warnings.html
----------- coverage: platform linux, python 3.6.8-final-0 -----------
Coverage HTML written to dir htmlcov-lib
========================== slowest 10 test durations ===========================
18.16s call
tests/lib/yajsonrpc/jsonrpcserver_test.py::JsonRpcServerTests::testMethodBadParameters(True)
2.92s call tests/lib/protocoldetector_test.py::AcceptorTests::test_reject_very_slow_client_concurrency(False)
2.86s call tests/lib/protocoldetector_test.py::AcceptorTests::test_reject_very_slow_client(True)
2.76s call tests/lib/protocoldetector_test.py::AcceptorTests::test_reject_very_slow_client_concurrency(True)
2.63s call tests/lib/protocoldetector_test.py::AcceptorTests::test_reject_very_slow_client(False)
1.28s call tests/pywatch_test.py::TestPyWatch::test_timeout_backtrace
0.89s call tests/pywatch_test.py::TestPyWatch::test_kill_grandkids
0.89s call tests/lib/protocoldetector_test.py::AcceptorTests::test_detect_slow_client_concurrency(True)
0.88s call tests/pywatch_test.py::TestPyWatch::test_timeout_output
0.80s call tests/lib/protocoldetector_test.py::AcceptorTests::test_detect_slow_client(False)
=========================== short test summary info ============================
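As an aside, the DeprecationWarning entries in the warnings summary above all come from regex patterns written as plain string literals; the usual fix is raw strings, so backslashes such as \D, \w and \^ are not treated as (invalid) string escapes:

import re

nicsRexp = re.compile(r"^(\D*)(\d*)(.*)$")   # instead of "^(\D*)(\d*)(.*)$"
bond_re = re.compile(r'^bond\w+$')           # instead of '^bond\w+$'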
4 years, 11 months