libvirt can't start in a non-TLS environment after host install
by Milan Zamazal
Hi, last week I ran into a problem with host deploy on oVirt master in an
environment with TLS disabled. When I install/reinstall a 4.4 host, host
deploy removes the following options from
/etc/libvirt/libvirtd.conf:
ca_file="/etc/pki/vdsm/certs/cacert.pem"
cert_file="/etc/pki/vdsm/certs/vdsmcert.pem"
key_file="/etc/pki/vdsm/keys/vdsmkey.pem"
As a result, libvirt refuses to start, complaining about missing
certificates and keys in their default locations.
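I assume it can be worked around manually, either by putting the removed
options back or by not listening on TLS at all (just a sketch, I haven't
checked how this interacts with the split libvirtd-tls socket):
# variant 1: restore what host deploy removed
ca_file="/etc/pki/vdsm/certs/cacert.pem"
cert_file="/etc/pki/vdsm/certs/vdsmcert.pem"
key_file="/etc/pki/vdsm/keys/vdsmkey.pem"
# variant 2: disable the TLS listener so no certificates are needed
listen_tls = 0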
Does anybody who uses a non-TLS environment experience the same problem?
Could it be related to the fact that we now require the libvirtd-tls
service from the split libvirtd services?
(Yes, I know TLS should always be used, but that is a shared development
environment where TLS is disabled for whatever reason.)
Thanks,
Milan
Why is vdsm enabled by default?
by Yedidyah Bar David
If I do e.g.:
1. Install CentOS
2. yum install ovirt-releaseSOMETHING
3. yum install vdsm
Then, after rebooting the machine, vdsm starts, and to do so it does all
kinds of things to the system (such as configuring various services using
vdsm-tool, etc.). Are we sure we want/need this? Why would we want vdsm
configured/running at all at this stage, before the host is added to an
engine?
In particular, if (especially during development) we have a bug in this
configuration process, and then fix it, it might not be enough to upgrade
vdsm - the tooling will then also have to fix the changes done by the buggy
previous version, or require a full machine reinstall.
Thanks and best regards,
--
Didi
what's up with ansible-runner-service-dev package?
by Fedor Gavrilov
Hi everyone,
After I screwed up my dev environment beyond repair, I have to set it up again, and I see something I didn't see last time (CentOS 7):
$ sudo yum install ansible-runner-service-dev
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.hosting90.cz
* centos-sclo-rh: mirror.hosting90.cz
* centos-sclo-sclo: mirror.hosting90.cz
* extras: mirror.hosting90.cz
* ovirt-master-epel: mirror.hosting90.cz
* ovirt-master-snapshot: ftp.nluug.nl
* ovirt-master-snapshot-static: ftp.nluug.nl
* updates: mirror.hosting90.cz
No package ansible-runner-service-dev available.
Maybe something's missing from my notes, but where did it go? Should I add some repo?
Fedor
OST fails in 002_bootstrap_pytest.py - setup_storage.sh
by Nir Soffer
Looks like an infrastructure issue setting up storage on the engine host.
Here are 2 failing builds with unrelated changes:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6677/
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6678/
Is this a known issue?
Error Message
AssertionError: setup_storage.sh failed. Exit code is 1
assert 1 == 0
  -1
  +0
Stacktrace
prefix = <ovirtlago.prefix.OvirtPrefix object at 0x7f6fd2b998d0>

    @pytest.mark.run(order=14)
    def test_configure_storage(prefix):
        engine = prefix.virt_env.engine_vm()
        result = engine.ssh(
            [
                '/tmp/setup_storage.sh',
            ],
        )
>       assert result.code == 0, 'setup_storage.sh failed. Exit code is %s' % result.code
E       AssertionError: setup_storage.sh failed. Exit code is 1
E       assert 1 == 0
E         -1
E         +0
The pytest traceback is nice, but in this case it does not show any
useful info.
Since we run a script over ssh, the error message should include the
process stdout and stderr, which would probably explain the failure.
Also, I wonder why this code is written as a test (test_configure_storage).
It looks like a setup step, so it should run as a fixture.
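Just to illustrate both points, a rough sketch (assuming the object
returned by engine.ssh() exposes out and err next to code - I did not
verify the attribute names):

import pytest

@pytest.fixture(scope="session")
def storage(prefix):
    # Setup, not a test: run the script once and fail loudly with its output.
    engine = prefix.virt_env.engine_vm()
    result = engine.ssh(['/tmp/setup_storage.sh'])
    # result.out / result.err are assumed to hold the captured stdout/stderr.
    assert result.code == 0, (
        'setup_storage.sh failed with rc=%s\nstdout:\n%s\nstderr:\n%s'
        % (result.code, result.out, result.err))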
Nir
Project gating (Zuul) is currently unavailable
by Ehud Yonasi
Hey everyone,
Software Factory (the platform that hosts Zuul) has been migrated to a new
host, and because of that Zuul is currently unavailable.
In the meantime, the projects that are currently gated - OST and
ovirt-provider-ovn - can be merged directly.
I will update you once the issue is resolved.
Thanks,
Ehud.
Unable to run create-brick.yml from ansible runner dev
by Kaustav Majumder
Hi all,
I am trying to run the create-brick playbook through the ansible-runner
service integrated with ovirt-engine, but it throws the following errors:
```
Running command: CreateBrickCommand internal: false. Entities affected : ID: 45b7ce67-9939-4d5f-a1ba-92411356b7e6 Type: VDSAction group MANIPULATE_HOST with role type ADMIN
2020-03-27 15:43:07,518+05 ERROR [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-29) [5ff56203-33ca-424b-9137-491cf1ef2cd3] Exception: Failed to execute call to start playbook.
2020-03-27 15:43:07,519+05 ERROR [org.ovirt.engine.core.bll.gluster.CreateBrickCommand] (default task-29) [5ff56203-33ca-424b-9137-491cf1ef2cd3] Failed to execute Ansible create brick role. Please check logs for more details: /home/kmajumde/work/ovirt-engine-builds/27-03-global-vol-options/var/log/ovirt-engine/brick-setup/ovirt-gluster-brick-ansible-20200327154307-10.70.35.17-5ff56203-33ca-424b-9137-491cf1ef2cd3.log
2020-03-27 15:43:07,519+05 ERROR [org.ovirt.engine.core.bll.gluster.CreateBrickCommand] (default task-29) [5ff56203-33ca-424b-9137-491cf1ef2cd3] Command 'org.ovirt.engine.core.bll.gluster.CreateBrickCommand' failed: EngineException: Failed to execute Ansible create brick role. Please check logs for more details: /home/kmajumde/work/ovirt-engine-builds/27-03-global-vol-options/var/log/ovirt-engine/brick-setup/ovirt-gluster-brick-ansible-20200327154307-10.70.35.17-5ff56203-33ca-424b-9137-491cf1ef2cd3.log (Failed with error GeneralException and code 100)
2020-03-27 15:43:07,526+05 ERROR [org.ovirt.engine.core.bll.gluster.CreateBrickCommand] (default task-29) [5ff56203-33ca-424b-9137-491cf1ef2cd3] Transaction rolled-back for command 'org.ovirt.engine.core.bll.gluster.CreateBrickCommand'.
2020-03-27 15:43:07,530+05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-29) [5ff56203-33ca-424b-9137-491cf1ef2cd3] EVENT_ID: CREATE_GLUSTER_BRICK_FAILED(4,139), Failed to create brick brick-1 on host 10.70.35.17 of cluster Default.
```
*The corresponding log is not created*
[kmajumde:~/work/git/ovirt-engine] global_vol_options+ 7s ± less /home/kmajumde/work/ovirt-engine-builds/27-03-global-vol-options/var/log/ovirt-engine/brick-setup/ovirt-gluster-brick-ansible-20200327154307-10.70.35.17-5ff56203-33ca-424b-9137-491cf1ef2cd3.log
/home/kmajumde/work/ovirt-engine-builds/27-03-global-vol-options/var/log/ovirt-engine/brick-setup/ovirt-gluster-brick-ansible-20200327154307-10.70.35.17-5ff56203-33ca-424b-9137-491cf1ef2cd3.log: No such file or directory
*Checking ansible-runner-service status gives:*
[kmajumde:~/work/git/ovirt-engine] global_vol_options+ 1m1s 130 ± systemctl status ansible-runner-service.service
● ansible-runner-service.service - Ansible Runner Service
   Loaded: loaded (/usr/lib/systemd/system/ansible-runner-service.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-03-27 15:42:56 IST; 8min ago
 Main PID: 30097 (gunicorn-3)
    Tasks: 3 (limit: 4915)
   Memory: 62.1M
   CGroup: /system.slice/ansible-runner-service.service
           ├─30097 /usr/bin/python3 /usr/bin/gunicorn-3 -b localhost:50001 -w 2 runner_service.wsgi:application
           ├─30112 /usr/bin/python3 /usr/bin/gunicorn-3 -b localhost:50001 -w 2 runner_service.wsgi:application
           └─30123 /usr/bin/python3 /usr/bin/gunicorn-3 -b localhost:50001 -w 2 runner_service.wsgi:application

Mar 27 15:42:58 localhost.localdomain gunicorn-3[30097]: Request received, content-type :None
Mar 27 15:42:58 localhost.localdomain gunicorn-3[30097]: 127.0.0.1 - GET /api/v1/hosts/10.70.35.1
Mar 27 15:43:01 localhost.localdomain gunicorn-3[30097]: Request received, content-type :None
Mar 27 15:43:01 localhost.localdomain gunicorn-3[30097]: 127.0.0.1 - GET /api/v1/hosts/10.70.35.1
Mar 27 15:43:03 localhost.localdomain gunicorn-3[30097]: Request received, content-type :None
Mar 27 15:43:03 localhost.localdomain gunicorn-3[30097]: 127.0.0.1 - GET /api/v1/hosts/10.70.35.1
Mar 27 15:43:07 localhost.localdomain gunicorn-3[30097]: Request received, content-type :None
Mar 27 15:43:07 localhost.localdomain gunicorn-3[30097]: 127.0.0.1 - GET /api/v1/hosts/10.70.35.17
Mar 27 15:43:07 localhost.localdomain gunicorn-3[30097]: Request received, content-type :application/json; charset=UTF-8
Mar 27 15:43:07 localhost.localdomain gunicorn-3[30097]: 127.0.0.1 - POST /api/v1/playbooks/create-brick.yml
*Is there a way to find the root cause for this?*
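So far the only idea I have is to replay the same calls the engine makes
directly against the runner service (endpoints taken from the journal
output above; the POST body is only a placeholder, I don't know the exact
variables CreateBrickCommand sends, and auth/TLS may be configured
differently on other setups):

import requests

# Plain HTTP on localhost:50001, as gunicorn is bound above; adjust if
# your runner-service config uses TLS or authentication.
BASE = "http://localhost:50001/api/v1"

# The same GET the engine issues before starting the playbook.
r = requests.get(BASE + "/hosts/10.70.35.17")
print(r.status_code, r.text)

# The same POST that fails; {} is only a placeholder body, the real
# variables are built by CreateBrickCommand.
r = requests.post(BASE + "/playbooks/create-brick.yml", json={})
print(r.status_code, r.text)

If the POST already fails here, the gunicorn output above should show why,
before the engine ever gets to write its brick-setup log.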
--
Thanks,
Kaustav Majumder
[VDSM] VdsmClientTests.test_event_handler - flaky test?
by Nir Soffer
I had 2 of these unrelated failures today. Would be nice to mark this
test as broken on CI.
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/19353//artifact/c...
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/19354//artifact/c...
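Until the root cause is understood, something like this could keep it from
failing unrelated patches (just a sketch using a plain pytest marker on the
existing test):

    @pytest.mark.xfail(
        reason="broken on CI: Test.sendEvent times out intermittently",
        strict=False)
    def test_event_handler(self):
        ...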
=================================== FAILURES ===================================
______________________ VdsmClientTests.test_event_handler ______________________
self = <stomprpcclient_test.VdsmClientTests testMethod=test_event_handler>

    def test_event_handler(self):
        with self._create_client() as client:
            event_queue = queue.Queue()
            sub_id = client.subscribe(EVENT_TOPIC, event_queue)
>           client.Test.sendEvent()

lib/yajsonrpc/stomprpcclient_test.py:215:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <stomprpcclient_test._MockedClient object at 0x7fe19f6db1d0>
namespace = 'Test', method_name = 'sendEvent', kwargs = {}
method = 'Test.sendEvent', timeout = 3
req = {'jsonrpc': '2.0', 'method': 'Test.sendEvent', 'params': {}, 'id': 'c3985771-6f77-4431-9ed6-15082063a2d5'}
responses = None

    def _call(self, namespace, method_name, **kwargs):
        """
        Client call method, executes a given command
        Args:
            namespace (string): namespace name
            method_name (string): method name
            **kwargs: Arbitrary keyword arguments
        Returns:
            method result
        Raises:
            ClientError: in case of an error in the protocol.
            TimeoutError: if there is no response after a pre configured time.
            ServerError: in case of an error while executing the command
        """
        method = namespace + "." + method_name
        timeout = kwargs.pop("_timeout", self._default_timeout)
        req = yajsonrpc.JsonRpcRequest(
            method, kwargs, reqId=str(uuid.uuid4()))
        try:
            responses = self._client.call(
                req, timeout=timeout, flow_id=self._flow_id)
        except EnvironmentError as e:
            raise ClientError(method, kwargs, e)
        if not responses:
>           raise TimeoutError(method, kwargs, timeout)
E           vdsm.client.TimeoutError: Request Test.sendEvent with args {} timed out after 3 seconds

../lib/vdsm/client.py:294: TimeoutError
------------------------------ Captured log call -------------------------------
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled read event
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled write event
ERROR vds.dispatcher:betterAsyncore.py:179 uncaptured python
exception, closing channel <yajsonrpc.betterAsyncore.Dispatcher
('::1', 36398, 0, 0) at 0x7fe19f6db208> (<class
'ValueError'>:'b'ept-version:1.2'' contains illegal character ':'
[/usr/lib64/python3.6/asyncore.py|readwrite|108]
[/usr/lib64/python3.6/asyncore.py|handle_read_event|423]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|handle_read|71]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|_delegate_call|168]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/protocoldetector.py|handle_read|129]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py|handle_socket|413]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/rpc/bindingjsonrpc.py|add_socket|54]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py|createListener|379]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py|StompListener|345]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|__init__|47]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|switch_implementation|86]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py|init|363]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/rpc/bindingjsonrpc.py|_onAccept|57]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py|set_message_handler|645]
[/usr/lib64/python3.6/asyncore.py|handle_read_event|423]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|handle_read|71]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|_delegate_call|168]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py|handle_read|421]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py|parse|323]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py|_parse_command|245]
[/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py|decode_value|167])
WARNING vds.dispatcher:betterAsyncore.py:179 unhandled close event
ERROR root:concurrent.py:267 FINISH thread <Thread(JsonRpc
(StompReactor), started daemon 140606575712000)> failed
Traceback (most recent call last):
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/common/concurrent.py",
line 260, in run
ret = func(*args, **kwargs)
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py",
line 393, in process_requests
self._reactor.process_requests()
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py",
line 238, in process_requests
timeout=self._get_timeout(self._map),
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py",
line 253, in _get_timeout
interval = disp.next_check_interval()
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py",
line 99, in next_check_interval
return getattr(self.__impl, "next_check_interval", default_func)()
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py",
line 486, in next_check_interval
self.handle_timeout()
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py",
line 451, in handle_timeout
self._frame_handler.handle_timeout(self)
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompclient.py",
line 130, in handle_timeout
dispatcher._on_timeout)
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stomp.py",
line 630, in reconnect
AsyncDispatcher(self, self._async_client, count=count))
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py",
line 271, in reconnect
dispatcher.create_socket(address, sslctx)
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py",
line 112, in create_socket
sock = sslctx.wrapSocket(sock)
File "/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/sslutils.py",
line 113, in wrapSocket
ca_certs=self.ca_certs)
File "/usr/lib64/python3.6/ssl.py", line 1114, in wrap_socket
ciphers=ciphers)
File "/usr/lib64/python3.6/ssl.py", line 704, in __init__
self._context.load_verify_locations(ca_certs)
FileNotFoundError: [Errno 2] No such file or directory
=============================== warnings summary ===============================
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334:
PytestUnknownMarkWarning: Unknown pytest.mark.stress - is this a typo?
You can register custom marks to avoid this warning - for details,
see https://docs.pytest.org/en/latest/mark.html
PytestUnknownMarkWarning,
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334:
PytestUnknownMarkWarning: Unknown pytest.mark.slow - is this a typo?
You can register custom marks to avoid this warning - for details, see
https://docs.pytest.org/en/latest/mark.html
PytestUnknownMarkWarning,
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/v2v.py:79
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/v2v.py:79:
DeprecationWarning: invalid escape sequence \d
_SSH_AUTH_RE = b'(SSH_AUTH_SOCK)=([^;]+).*;\nSSH_AGENT_PID=(\d+)'
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/v2v.py:1421
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/v2v.py:1421:
DeprecationWarning: invalid escape sequence \^
'(?P<m2_base>[0-9]+){sp}\^{sp}(?P<m2_exp>{exp}))'.format(
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/.tox/lib/lib/python3.6/site-packages/_pytest/mark/structures.py:334:
PytestUnknownMarkWarning: Unknown pytest.mark.xpass - is this a typo?
You can register custom marks to avoid this warning - for details, see
https://docs.pytest.org/en/latest/mark.html
PytestUnknownMarkWarning,
tests/lib/osinfo_test.py::test_package_versions
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/osinfo.py:284:
UnicodeWarning: decode() called on unicode string, see
https://bugzilla.redhat.com/show_bug.cgi?id=1693751
'version': mi['version'].decode('utf-8'),
tests/lib/osinfo_test.py::test_package_versions
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/osinfo.py:285:
UnicodeWarning: decode() called on unicode string, see
https://bugzilla.redhat.com/show_bug.cgi?id=1693751
'release': mi['release'].decode('utf-8'),
-- Docs: https://docs.pytest.org/en/latest/warnings.html
----------- coverage: platform linux, python 3.6.8-final-0 -----------
Coverage HTML written to dir htmlcov-lib
========================== slowest 10 test durations ===========================
18.29s call tests/lib/yajsonrpc/stomprpcclient_test.py::VdsmClientTests::test_event_handler
2.96s call tests/lib/protocoldetector_test.py::AcceptorTests::test_reject_very_slow_client_concurrency(True)
2.90s call tests/lib/protocoldetector_test.py::AcceptorTests::test_reject_very_slow_client_concurrency(False)
2.75s call tests/lib/protocoldetector_test.py::AcceptorTests::test_reject_very_slow_client(False)
2.63s call tests/lib/protocoldetector_test.py::AcceptorTests::test_reject_very_slow_client(True)
1.90s call tests/pywatch_test.py::TestPyWatch::test_timeout_backtrace
1.44s call tests/pywatch_test.py::TestPyWatch::test_timeout_output
1.19s call tests/pywatch_test.py::TestPyWatch::test_kill_grandkids
0.93s call tests/lib/protocoldetector_test.py::AcceptorTests::test_detect_slow_client(True)
0.90s call tests/lib/yajsonrpc/stomp_test.py::StompTests::test_echo(16384, False)
=========================== short test summary info ============================