<div dir="ltr"><div class="gmail_default" style="font-family:arial,helvetica,sans-serif"><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Aug 30, 2017 at 8:43 PM, Dan Kenigsberg <span dir="ltr">&lt;<a href="mailto:danken@redhat.com" target="_blank">danken@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Wed, Aug 30, 2017 at 6:47 PM, Dominik Holler &lt;<a href="mailto:dholler@redhat.com">dholler@redhat.com</a>&gt; wrote:<br>
&gt; On Wed, 30 Aug 2017 13:31:38 +0200<br>
&gt; Dominik Holler &lt;<a href="mailto:dholler@redhat.com">dholler@redhat.com</a>&gt; wrote:<br>
&gt;<br>
&gt;&gt; On Wed, 30 Aug 2017 14:18:49 +0300<br>
&gt;&gt; Dan Kenigsberg &lt;<a href="mailto:danken@redhat.com">danken@redhat.com</a>&gt; wrote:<br>
&gt;&gt;<br>
&gt;&gt; &gt; On Wed, Aug 30, 2017 at 1:40 PM, Barak Korren &lt;<a href="mailto:bkorren@redhat.com">bkorren@redhat.com</a>&gt;<br>
&gt;&gt; &gt; wrote:<br>
&gt;&gt; &gt; &gt; Test failed: [ 002_bootstrap.add_hosts ]<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; Link to suspected patches:<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; We suspect this is due to change to LLDPAD in upstream CentOS<br>
&gt;&gt; &gt; &gt; repos. We can&#39;t tell the exact point it was introduced because<br>
&gt;&gt; &gt; &gt; other ovirt-engine regressions introduced too much noise into the<br>
&gt;&gt; &gt; &gt; system.<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; It also seems that the failure is not 100% reproducible sine we<br>
&gt;&gt; &gt; &gt; have runs that do not encounter it.<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; Link to Job:<br>
&gt;&gt; &gt; &gt; <a href="http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2151" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/job/<wbr>ovirt-master_change-queue-<wbr>tester/2151</a><br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; (This job checked a specific patch to ovirt-log-collector but<br>
&gt;&gt; &gt; &gt; failure seems unrelated, and also happens on vanille &#39;tested&#39; repo<br>
&gt;&gt; &gt; &gt; right now).<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; Link to all logs:<br>
&gt;&gt; &gt; &gt; VDSM logs seem to be most relevant here:<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; <a href="http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2151/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host-0/_var_log/vdsm/" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/job/<wbr>ovirt-master_change-queue-<wbr>tester/2151/artifact/exported-<wbr>artifacts/basic-suit-master-<wbr>el7/test_logs/basic-suite-<wbr>master/post-002_bootstrap.py/<wbr>lago-basic-suite-master-host-<wbr>0/_var_log/vdsm/</a><br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; Error snippet from log:<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; - From VDSM logs (note it exists very quickly):<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; &lt;error&gt;<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,208-0400 INFO  (MainThread) [vds] (PID: 4594)<br>
&gt;&gt; &gt; &gt; I am the actual vdsm 4.20.2-133.git0ce4485.el7.<wbr>centos<br>
&gt;&gt; &gt; &gt; lago-basic-suite-master-host-0 (3.10.0-514.2.2.el7.x86_64)<br>
&gt;&gt; &gt; &gt; (vdsmd:148) 2017-08-30 05:55:52,209-0400 INFO  (MainThread) [vds]<br>
&gt;&gt; &gt; &gt; VDSM will run with cpu affinity: frozenset([1]) (vdsmd:254)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,253-0400 INFO  (MainThread) [storage.check]<br>
&gt;&gt; &gt; &gt; Starting check service (check:92)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,257-0400 INFO  (MainThread)<br>
&gt;&gt; &gt; &gt; [storage.Dispatcher] Starting StorageDispatcher... (dispatcher:48)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,257-0400 INFO  (check/loop)<br>
&gt;&gt; &gt; &gt; [storage.asyncevent] Starting &lt;EventLoop running=True<br>
&gt;&gt; &gt; &gt; closed=False at 0x45317520&gt; (asyncevent:125)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,288-0400 INFO  (MainThread) [vdsm.api] START<br>
&gt;&gt; &gt; &gt; registerDomainStateChangeCallb<wbr>ack(callbackFunc=&lt;functools.<wbr>partial<br>
&gt;&gt; &gt; &gt; object at 0x2b47b50&gt;) from=internal,<br>
&gt;&gt; &gt; &gt; task_id=2cebe9ef-358e-47d7-<wbr>81a6-8c4b54b9cd6d (api:46)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,288-0400 INFO  (MainThread) [vdsm.api] FINISH<br>
&gt;&gt; &gt; &gt; registerDomainStateChangeCallb<wbr>ack return=None from=internal,<br>
&gt;&gt; &gt; &gt; task_id=2cebe9ef-358e-47d7-<wbr>81a6-8c4b54b9cd6d (api:52)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,288-0400 INFO  (MainThread) [MOM] Preparing<br>
&gt;&gt; &gt; &gt; MOM interface (momIF:53)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,289-0400 INFO  (MainThread) [MOM] Using named<br>
&gt;&gt; &gt; &gt; unix socket /var/run/vdsm/mom-vdsm.sock (momIF:62)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,289-0400 INFO  (MainThread) [root]<br>
&gt;&gt; &gt; &gt; Unregistering all secrets (secret:91)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,307-0400 INFO  (MainThread) [vds] Setting<br>
&gt;&gt; &gt; &gt; channels&#39; timeout to 30 seconds. (vmchannels:221)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,309-0400 INFO  (vmrecovery) [vds] recovery:<br>
&gt;&gt; &gt; &gt; completed in 0s (clientIF:516)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,310-0400 INFO  (MainThread)<br>
&gt;&gt; &gt; &gt; [vds.MultiProtocolAcceptor] Listening at :::54321<br>
&gt;&gt; &gt; &gt; (protocoldetector:196)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,496-0400 INFO  (http) [vds] Server running<br>
&gt;&gt; &gt; &gt; (http:58) 2017-08-30 05:55:52,742-0400 INFO  (periodic/1)<br>
&gt;&gt; &gt; &gt; [vdsm.api] START repoStats(domains=()) from=internal,<br>
&gt;&gt; &gt; &gt; task_id=85768015-8ecb-48e3-<wbr>9307-f671bfc33c65 (api:46)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,743-0400 INFO  (periodic/1) [vdsm.api] FINISH<br>
&gt;&gt; &gt; &gt; repoStats return={} from=internal,<br>
&gt;&gt; &gt; &gt; task_id=85768015-8ecb-48e3-<wbr>9307-f671bfc33c65 (api:52)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,743-0400 WARN  (periodic/1) [throttled] MOM<br>
&gt;&gt; &gt; &gt; not available. (throttledlog:103)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:52,744-0400 WARN  (periodic/1) [throttled] MOM<br>
&gt;&gt; &gt; &gt; not available, KSM stats will be missing. (throttledlog:103)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:55,043-0400 INFO  (MainThread) [vds] Received<br>
&gt;&gt; &gt; &gt; signal 15, shutting down (vdsmd:67)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:55,045-0400 INFO  (MainThread)<br>
&gt;&gt; &gt; &gt; [jsonrpc.JsonRpcServer] Stopping JsonRPC Server (__init__:759)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:55,049-0400 INFO  (MainThread) [vds] Stopping<br>
&gt;&gt; &gt; &gt; http server (http:79)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:55,049-0400 INFO  (http) [vds] Server stopped<br>
&gt;&gt; &gt; &gt; (http:69) 2017-08-30 05:55:55,050-0400 INFO  (MainThread) [root]<br>
&gt;&gt; &gt; &gt; Unregistering all secrets (secret:91)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:55,052-0400 INFO  (MainThread) [vdsm.api] START<br>
&gt;&gt; &gt; &gt; prepareForShutdown(options=<wbr>None) from=internal,<br>
&gt;&gt; &gt; &gt; task_id=ffde5caa-fa44-49ab-<wbr>bdd1-df81519680a3 (api:46)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:55,089-0400 INFO  (MainThread) [storage.Monitor]<br>
&gt;&gt; &gt; &gt; Shutting down domain monitors (monitor:222)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:55,090-0400 INFO  (MainThread) [storage.check]<br>
&gt;&gt; &gt; &gt; Stopping check service (check:105)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:55,090-0400 INFO  (check/loop)<br>
&gt;&gt; &gt; &gt; [storage.asyncevent] Stopping &lt;EventLoop running=True<br>
&gt;&gt; &gt; &gt; closed=False at 0x45317520&gt; (asyncevent:220)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:55,090-0400 INFO  (MainThread) [vdsm.api] FINISH<br>
&gt;&gt; &gt; &gt; prepareForShutdown return=None from=internal,<br>
&gt;&gt; &gt; &gt; task_id=ffde5caa-fa44-49ab-<wbr>bdd1-df81519680a3 (api:52)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:55,091-0400 INFO  (MainThread) [vds] Stopping<br>
&gt;&gt; &gt; &gt; threads (vdsmd:159)<br>
&gt;&gt; &gt; &gt; 2017-08-30 05:55:55,091-0400 INFO  (MainThread) [vds] Exiting<br>
&gt;&gt; &gt; &gt; (vdsmd:170)<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; &lt;/error&gt;<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; - From SuperVDSM logs:<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; &lt;error&gt;<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; MainThread::ERROR::2017-08-30<br>
&gt;&gt; &gt; &gt; 05:55:59,476::initializer::53:<wbr>:root::(_lldp_init) Failed to enable<br>
&gt;&gt; &gt; &gt; LLDP on eth1<br>
&gt;&gt; &gt; &gt; Traceback (most recent call last):<br>
&gt;&gt; &gt; &gt;   File<br>
&gt;&gt; &gt; &gt; &quot;/usr/lib/python2.7/site-<wbr>packages/vdsm/network/<wbr>initializer.py&quot;,<br>
&gt;&gt; &gt; &gt; line 51, in _lldp_init Lldp.enable_lldp_on_iface(<wbr>device)<br>
&gt;&gt; &gt; &gt;   File<br>
&gt;&gt; &gt; &gt; &quot;/usr/lib/python2.7/site-<wbr>packages/vdsm/network/lldp/<wbr>lldpad.py&quot;,<br>
&gt;&gt; &gt; &gt; line 30, in enable_lldp_on_iface<br>
&gt;&gt; &gt; &gt; lldptool.enable_lldp_on_iface(<wbr>iface, rx_only) File<br>
&gt;&gt; &gt; &gt; &quot;/usr/lib/python2.7/site-<wbr>packages/vdsm/network/lldpad/<wbr>lldptool.py&quot;,<br>
&gt;&gt; &gt; &gt; line 46, in enable_lldp_on_iface raise EnableLldpError(rc, out,<br>
&gt;&gt; &gt; &gt; err, iface) EnableLldpError: (1,<br>
&gt;&gt; &gt; &gt; &quot;timeout\n&#39;<wbr>M00000001C3040000000c04eth1000<wbr>badminStatus0002rx&#39;<br>
&gt;&gt; &gt; &gt; command timed out.\n&quot;, &#39;&#39;, &#39;eth1&#39;)<br>
&gt;&gt; &gt; &gt; MainThread::DEBUG::2017-08-30<br>
&gt;&gt; &gt; &gt; 05:55:59,477::cmdutils::133::<wbr>root::(exec_cmd) /usr/sbin/lldptool<br>
&gt;&gt; &gt; &gt; get-lldp -i eth0 adminStatus (cwd None)<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt;<br>
&gt;&gt; &gt; &gt; &lt;/error&gt;<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; The LLDP errors are due to SELinux, to be fixed in CentOS-7.4.1.<br>
&gt;&gt; &gt; They do not cause supervdsmd to stop, so they are not the reason<br>
&gt;&gt; &gt; for vdsm&#39;s failure to start. Something ELSE is doing this.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; in<br>
&gt;&gt; &gt; <a href="http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/2151/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host-0/_var_log/messages/*view*/" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/job/<wbr>ovirt-master_change-queue-<wbr>tester/2151/artifact/exported-<wbr>artifacts/basic-suit-master-<wbr>el7/test_logs/basic-suite-<wbr>master/post-002_bootstrap.py/<wbr>lago-basic-suite-master-host-<wbr>0/_var_log/messages/*view*/</a><br>
&gt;&gt; &gt; we have<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Aug 30 05:55:52 lago-basic-suite-master-host-0 journal: vdsm<br>
&gt;&gt; &gt; throttled WARN MOM not available.<br>
&gt;&gt; &gt; Aug 30 05:55:52 lago-basic-suite-master-host-0 journal: vdsm<br>
&gt;&gt; &gt; throttled WARN MOM not available, KSM stats will be missing.<br>
&gt;&gt; &gt; Aug 30 05:55:53 lago-basic-suite-master-host-0 systemd: Started<br>
&gt;&gt; &gt; Session 10 of user root.<br>
&gt;&gt; &gt; Aug 30 05:55:53 lago-basic-suite-master-host-0 systemd-logind: New<br>
&gt;&gt; &gt; session 10 of user root.<br>
&gt;&gt; &gt; Aug 30 05:55:53 lago-basic-suite-master-host-0 systemd: Starting<br>
&gt;&gt; &gt; Session 10 of user root.<br>
&gt;&gt; &gt; Aug 30 05:55:53 lago-basic-suite-master-host-0 python: ansible-setup<br>
&gt;&gt; &gt; Invoked with filter=* gather_subset=[&#39;all&#39;]<br>
&gt;&gt; &gt; fact_path=/etc/ansible/facts.d gather_timeout=10<br>
&gt;&gt; &gt; Aug 30 05:55:54 lago-basic-suite-master-host-0 python:<br>
&gt;&gt; &gt; ansible-command Invoked with warn=True executable=None<br>
&gt;&gt; &gt; _uses_shell=False _raw_params=bash -c &quot;rpm -qi vdsm | grep -oE<br>
&gt;&gt; &gt; &#39;Version\\s+:\\s+[0-9\\.]+&#39; | awk &#39;{print $3}&#39;&quot; removes=None<br>
&gt;&gt; &gt; creates=None chdir=None<br>
&gt;&gt; &gt; Aug 30 05:55:54 lago-basic-suite-master-host-0 python:<br>
&gt;&gt; &gt; ansible-systemd Invoked with no_block=False name=libvirt-guests<br>
&gt;&gt; &gt; enabled=True daemon_reload=False state=started user=False<br>
&gt;&gt; &gt; masked=None Aug 30 05:55:54 lago-basic-suite-master-host-0 systemd:<br>
&gt;&gt; &gt; Reloading. Aug 30 05:55:55 lago-basic-suite-master-host-0 systemd:<br>
&gt;&gt; &gt; Cannot add dependency job for unit lvm2-lvmetad.socket, ignoring:<br>
&gt;&gt; &gt; Unit is masked. Aug 30 05:55:55 lago-basic-suite-master-host-0<br>
&gt;&gt; &gt; systemd: Stopped MOM instance configured for VDSM purposes.<br>
&gt;&gt; &gt; Aug 30 05:55:55 lago-basic-suite-master-host-0 systemd: Stopping<br>
&gt;&gt; &gt; Virtual Desktop Server Manager...<br>
&gt;&gt; &gt; Aug 30 05:55:55 lago-basic-suite-master-host-0 systemd: Starting<br>
&gt;&gt; &gt; Suspend Active Libvirt Guests...<br>
&gt;&gt; &gt; Aug 30 05:55:55 lago-basic-suite-master-host-0 systemd: Started<br>
&gt;&gt; &gt; Suspend Active Libvirt Guests.<br>
&gt;&gt; &gt; Aug 30 05:55:55 lago-basic-suite-master-host-0 journal: libvirt<br>
&gt;&gt; &gt; version: 2.0.0, package: 10.el7_3.9 (CentOS BuildSystem<br>
&gt;&gt; &gt; &lt;<a href="http://bugs.centos.org" rel="noreferrer" target="_blank">http://bugs.centos.org</a>&gt;, 2017-05-25-20:52:28, <a href="http://c1bm.rdu2.centos.org" rel="noreferrer" target="_blank">c1bm.rdu2.centos.org</a>)<br>
&gt;&gt; &gt; Aug 30 05:55:55 lago-basic-suite-master-host-0 journal: hostname:<br>
&gt;&gt; &gt; lago-basic-suite-master-host-<wbr>0.lago.local<br>
&gt;&gt; &gt; Aug 30 05:55:55 lago-basic-suite-master-host-0 vdsmd_init_common.sh:<br>
&gt;&gt; &gt; vdsm: Running run_final_hooks<br>
&gt;&gt; &gt; Aug 30 05:55:55 lago-basic-suite-master-host-0 journal: End of file<br>
&gt;&gt; &gt; while reading data: Input/output error<br>
&gt;&gt; &gt; Aug 30 05:55:55 lago-basic-suite-master-host-0 systemd: Stopped<br>
&gt;&gt; &gt; Virtual Desktop Server Manager.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; suggesting that systemd stopped vdsmd since mom did not start on<br>
&gt;&gt; &gt; time. We have an extremely ugly two-way dependency between the two<br>
&gt;&gt; &gt; services. Each depends on the other one. I am guessing that<br>
&gt;&gt; &gt; systemd&#39;s grace period for such a cycle has expired, and the couple<br>
&gt;&gt; &gt; was taken down.<br>
&gt;&gt; &gt;<br>
&gt;&gt; &gt; Does anybody have a guess what could cause such slowlness?<br>
&gt;&gt;<br>
&gt;&gt; Even if the LLDP errors does not have a negative impact of the<br>
&gt;&gt; functional behavior of supervdsmd, they have a impact timing behavior<br>
&gt;&gt; of the initialization of supervdsmd.<br>
&gt;&gt;<br>
&gt;&gt; I will post a patch today which moves the initialization of LLDP in an<br>
&gt;&gt; extra thread, to make the initialization of LLDP asynchronly.<br>
&gt;&gt;<br>
&gt;&gt;<br>
&gt;<br>
&gt; I analyzed this issue further by running OST [1] while disabled [2] the<br>
&gt; suspected initialization of LLDP in supervdsmd.<br>
&gt; Like expected, there are no log entries about enabling LLDP in [3],<br>
&gt; but the OST is still failing with same symptoms.<br>
&gt;<br>
&gt; So the reason for this failure is still unknown.<br>
&gt;<br>
&gt;<br>
&gt; [1]<br>
&gt;   <a href="http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/1045/" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/view/<wbr>oVirt%20system%20tests/job/<wbr>ovirt-system-tests_manual/<wbr>1045/</a><br>
&gt;<br>
&gt; [2]<br>
&gt;   <a href="https://gerrit.ovirt.org/#/c/81232/1/lib/vdsm/network/initializer.py" rel="noreferrer" target="_blank">https://gerrit.ovirt.org/#/c/<wbr>81232/1/lib/vdsm/network/<wbr>initializer.py</a><br>
&gt;<br>
&gt; [3]<br>
&gt;   <a href="http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/1045/artifact/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host-0/_var_log/vdsm/supervdsm.log/*view*/" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/view/<wbr>oVirt%20system%20tests/job/<wbr>ovirt-system-tests_manual/<wbr>1045/artifact/exported-<wbr>artifacts/test_logs/basic-<wbr>suite-master/post-002_<wbr>bootstrap.py/lago-basic-suite-<wbr>master-host-0/_var_log/vdsm/<wbr>supervdsm.log/*view*/</a><br>
&gt;<br>
<br>
So we&#39;re back in square one.<br>
Another possible culprit may be ansible: Vdsm is stopped two seconds<br>
after it logs to the host.<br>
<br>
Aug 30 11:26:24 lago-basic-suite-master-host-0 systemd: Starting Session 10 of user root.
Aug 30 11:26:25 lago-basic-suite-master-host-0 python: ansible-setup Invoked with filter=* gather_subset=['all'] fact_path=/etc/ansible/facts.d gather_timeout=10
Aug 30 11:26:25 lago-basic-suite-master-host-0 python: ansible-command Invoked with warn=True executable=None _uses_shell=False _raw_params=bash -c "rpm -qi vdsm | grep -oE 'Version\\s+:\\s+[0-9\\.]+' | awk '{print $3}'" removes=None creates=None chdir=None
Aug 30 11:26:26 lago-basic-suite-master-host-0 python: ansible-systemd Invoked with no_block=False name=libvirt-guests enabled=True daemon_reload=False state=started user=False masked=None
Aug 30 11:26:26 lago-basic-suite-master-host-0 systemd: Reloading.
Aug 30 11:26:26 lago-basic-suite-master-host-0 systemd: Cannot add dependency job for unit lvm2-lvmetad.socket, ignoring: Unit is masked.
Aug 30 11:26:26 lago-basic-suite-master-host-0 systemd: Stopped MOM instance configured for VDSM purposes.
Aug 30 11:26:26 lago-basic-suite-master-host-0 systemd: Stopping Virtual Desktop Server Manager...
<br>
<br>
could it be that it triggers a systemd-reload that makes systemd croak<br>
on the vdsm-mom cycle?<br></blockquote><div><br><div style="font-family:arial,helvetica,sans-serif;display:inline" class="gmail_default">​We are not restarting VDSM within ovirt-host-deploy Ansible role, the VDSM restart is performed in host-deploy part same as in previous versions.​</div> <div style="font-family:arial,helvetica,sans-serif;display:inline" class="gmail_default">​Within ovirt-host-deploy-firewalld we only enable and restart firewalld service.<br>​</div></div></div><br></div></div>
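To check the reload theory directly, one could repeat just that step on a disposable host where vdsmd is running and see whether the vdsmd/mom-vdsm pair survives. A crude sketch under stated assumptions (root access on a test host, standard systemctl verbs; not taken from the job):

#!/usr/bin/env python
# Crude experiment sketch: repeat the step that precedes the vdsmd shutdown
# in the messages log above (enable libvirt-guests, then a manager reload)
# and check whether vdsmd and mom-vdsm survive it.
# Assumes a disposable test host with vdsmd active, run as root.
import subprocess
import time

def run(*cmd):
    print('$ ' + ' '.join(cmd))
    return subprocess.call(cmd)

if __name__ == '__main__':
    run('systemctl', 'is-active', 'vdsmd.service')
    run('systemctl', 'enable', 'libvirt-guests.service')
    # The messages log shows "systemd: Reloading." right after the
    # ansible-systemd call; an explicit daemon-reload approximates that.
    run('systemctl', 'daemon-reload')
    time.sleep(5)  # give systemd time to act on the re-evaluated unit graph
    # If the reload theory holds, these would now report 'inactive'.
    run('systemctl', 'is-active', 'vdsmd.service')
    run('systemctl', 'is-active', 'mom-vdsm.service')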