<div dir="ltr"><div class="gmail_default" style="font-family:monospace,monospace">Didi - we just saw a report of a similar failure on engine-4.1's OST.</div><div class="gmail_default" style="font-family:monospace,monospace">Could you please backport these patches there too?</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Nov 27, 2017 at 2:57 PM, Yedidyah Bar David <span dir="ltr"><<a href="mailto:didi@redhat.com" target="_blank">didi@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div class="h5">On Mon, Nov 27, 2017 at 10:38 AM, Yedidyah Bar David <span dir="ltr"><<a href="mailto:didi@redhat.com" target="_blank">didi@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="m_6363799937922607767gmail-">On Sun, Nov 26, 2017 at 7:24 PM, Nir Soffer <<a href="mailto:nsoffer@redhat.com" target="_blank">nsoffer@redhat.com</a>> wrote:<br>
> I think we need to check and report which process is listening on a port<br>
> when starting a server on that port fails.<br>
<br>
</span>How do you know that a server was "started on that port", and that<br>
that it failed specifically because it could not bind?<br>
<br>
There is no standardized (Unix) way to mark that a service wants to<br>
listen on a specific port, or that it failed because a specific port<br>
was bound by some other process.<br>
<br>
There are various classical *inetd* daemons, and the modern systemd.socket,<br>
that listen *instead of* the service itself. They can then manage the port<br>
resources and perhaps do something intelligent about them.<br>
<span class="m_6363799937922607767gmail-"><br>
><br>
> Didi, do you think we can integrate this in the deploy code, or should<br>
> this be implemented in each server?<br>
<br>
</span>It should be quite easy to patch otopi's services.state to run something<br>
if start fails, e.g. 'ss -anp' or whatever you want.<br>
<br>
It should even be not-too-hard to do this in a self-contained plugin,<br>
so it can be part of otopi-debug-plugins.<br>
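<br>
To make this concrete, here is a minimal standalone sketch of what such a failure hook could do (this is not otopi API; `bind_or_report` is a made-up name, and it assumes Linux with iproute2's `ss` available):<br>
<br>
```python
import errno
import socket
import subprocess


def bind_or_report(host, port):
    """Try to bind a TCP socket; on EADDRINUSE, attach the current
    listener table ('ss -tlnp') to the error so the culprit is logged."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((host, port))
        return sock
    except OSError as exc:
        sock.close()
        if exc.errno != errno.EADDRINUSE:
            raise
        try:
            # -t TCP, -l listening, -n numeric, -p owning process
            # (process info requires root)
            listeners = subprocess.run(
                ["ss", "-tlnp"], capture_output=True, text=True
            ).stdout
        except FileNotFoundError:
            listeners = "(ss not available)"
        raise RuntimeError(
            "port %d already in use; current listeners:\n%s"
            % (port, listeners)
        ) from exc
```
<br>
A hook like this would have turned the bare "Failed to start service" into a log line naming the process holding the port.<br>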
<br>
If we decide that something needs to be implemented by each server,<br>
perhaps that "something" should be to control the port via a systemd.socket unit.<br>
Didn't try, though, to see what this actually buys us.<br>
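<br>
For illustration only, a hypothetical socket unit could look like this (ovirt-imageio-proxy does not ship one today; the port and unit name are assumptions):<br>
<br>
```ini
# ovirt-imageio-proxy.socket -- hypothetical; systemd binds the port itself
# and passes the fd to the service, so a bind conflict fails here, early
# and explicitly, instead of inside the daemon's traceback.
[Socket]
ListenStream=54323

[Install]
WantedBy=sockets.target
```
<br>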
<span class="m_6363799937922607767gmail-"><br>
><br>
> Maybe when deployment fails, the deploy code can report all the<br>
> listening sockets and the processes bound to these sockets?<br>
<br>
</span>Pushed now:<br>
<br>
<a href="https://gerrit.ovirt.org/84699" rel="noreferrer" target="_blank">https://gerrit.ovirt.org/84699</a> core: Name TRANSACTION_INIT<br>
<a href="https://gerrit.ovirt.org/84700" rel="noreferrer" target="_blank">https://gerrit.ovirt.org/84700</a> plugins: debug: Add debug_failure<br>
<a href="https://gerrit.ovirt.org/84701" rel="noreferrer" target="_blank">https://gerrit.ovirt.org/84701</a> automation: Test failure<br>
<br>
Will merge soon, if all goes well.<br></blockquote><div><br></div></div></div><div>Merged them.<br><br></div>Pushed to OST:<br><br><a href="https://gerrit.ovirt.org/84710" target="_blank">https://gerrit.ovirt.org/84710</a></div><div class="gmail_quote"><div><br></div><div>Dafna - thanks for opening the bug on ovirt-imageio, but I am not<br></div><div>sure anyone can do much about it without more info, such as might<br></div><div>be provided by above patches. When I suggested below to open BZ<br></div><div>I meant on otopi or host-deploy to provide more debugging info,<br></div><div>not for imageio - obviously no harm in opening it, and it's good<br></div><div>to have it even if only for reference.<br></div><div><div class="h5"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
Feel free to open BZ for other things discussed above, if relevant.<br>
<div class="m_6363799937922607767gmail-HOEnZb"><div class="m_6363799937922607767gmail-h5"><br>
><br>
> Nir<br>
><br>
> On Sun, Nov 26, 2017 at 7:11 PM Gal Ben Haim <<a href="mailto:gbenhaim@redhat.com" target="_blank">gbenhaim@redhat.com</a>> wrote:<br>
>><br>
>> The failure is not consistent.<br>
>><br>
>> On Sun, Nov 26, 2017 at 5:33 PM, Yaniv Kaul <<a href="mailto:ykaul@redhat.com" target="_blank">ykaul@redhat.com</a>> wrote:<br>
>>><br>
>>><br>
>>><br>
>>> On Sun, Nov 26, 2017 at 4:53 PM, Gal Ben Haim <<a href="mailto:gbenhaim@redhat.com" target="_blank">gbenhaim@redhat.com</a>><br>
>>> wrote:<br>
>>>><br>
>>>> We still see this issue on the upgrade suite from latest release to<br>
>>>> master [1].<br>
>>>> I don't see any evidence in "/var/log/messages" [2] that<br>
>>>> "ovirt-imageio-proxy" was started twice.<br>
>>><br>
>>><br>
>>> Since it's not a registered port, and a high port, could it be used by<br>
>>> something else (what are the odds, though)?<br>
>>> Is it consistent?<br>
>>> Y.<br>
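>>><br>
>>> That suspicion can be checked mechanically: on stock EL7 the ephemeral range is 32768-60999, and the proxy's port (54323, if I recall the default correctly) falls inside it, so a short-lived outgoing connection can indeed grab it. A sketch (the range string mirrors /proc/sys/net/ipv4/ip_local_port_range):<br>
>>><br>
```python
def in_ephemeral_range(port, range_str="32768\t60999"):
    """True if `port` lies in the kernel's local (ephemeral) port range.

    Pass the contents of /proc/sys/net/ipv4/ip_local_port_range from a
    live host; the default here matches stock EL7.
    """
    lo, hi = (int(x) for x in range_str.split())
    return lo <= port <= hi
```
>>><br>
>>> If that's the cause, reserving the port via the ip_local_reserved_ports sysctl would be one way to rule the collision out.<br>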
>>><br>
>>>><br>
>>>><br>
>>>> [1]<br>
>>>> <a href="http://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/ovirt-master_change-queue-tester/runs/4153/nodes/123/steps/241/log/?start=0" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/blue/<wbr>rest/organizations/jenkins/pip<wbr>elines/ovirt-master_change-que<wbr>ue-tester/runs/4153/nodes/123/<wbr>steps/241/log/?start=0</a><br>
>>>><br>
>>>> [2]<br>
>>>> <a href="http://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-master_change-queue-tester/4153/artifact/exported-artifacts/upgrade-from-release-suit-master-el7/test_logs/upgrade-from-release-suite-master/post-001_initialize_engine.py/lago-upgrade-from-release-suite-master-engine/_var_log/messages/*view*/" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/view/<wbr>Change%20queue%20jobs/job/ovir<wbr>t-master_change-queue-tester/<wbr>4153/artifact/exported-artifac<wbr>ts/upgrade-from-release-suit-<wbr>master-el7/test_logs/upgrade-<wbr>from-release-suite-master/<wbr>post-001_initialize_engine.py/<wbr>lago-upgrade-from-release-<wbr>suite-master-engine/_var_log/m<wbr>essages/*view*/</a><br>
>>>><br>
>>>> On Fri, Nov 24, 2017 at 8:16 PM, Dafna Ron <<a href="mailto:dron@redhat.com" target="_blank">dron@redhat.com</a>> wrote:<br>
>>>>><br>
>>>>> There were two different patches reported as failing CQ today, with the<br>
>>>>> ovirt-imageio-proxy service failing to start.<br>
>>>>><br>
>>>>> Here is the latest failure:<br>
>>>>> <a href="http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4130/artifact" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/job/o<wbr>virt-master_change-queue-teste<wbr>r/4130/artifact</a><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> On 11/23/2017 03:39 PM, Allon Mureinik wrote:<br>
>>>>><br>
>>>>> Daniel/Nir?<br>
>>>>><br>
>>>>> On Thu, Nov 23, 2017 at 5:29 PM, Dafna Ron <<a href="mailto:dron@redhat.com" target="_blank">dron@redhat.com</a>> wrote:<br>
>>>>>><br>
>>>>>> Hi,<br>
>>>>>><br>
>>>>>> We have a failure in test<br>
>>>>>> 001_initialize_engine.test_initialize_engine.<br>
>>>>>><br>
>>>>>> This is failing with the error: Failed to start service<br>
>>>>>> 'ovirt-imageio-proxy'.<br>
>>>>>><br>
>>>>>><br>
>>>>>> Link and headline of suspected patches:<br>
>>>>>><br>
>>>>>> build: Make resulting RPMs architecture-specific -<br>
>>>>>> <a href="https://gerrit.ovirt.org/#/c/84534/" rel="noreferrer" target="_blank">https://gerrit.ovirt.org/#/c/8<wbr>4534/</a><br>
>>>>>><br>
>>>>>><br>
>>>>>> Link to Job:<br>
>>>>>><br>
>>>>>> <a href="http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4055" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/job/o<wbr>virt-master_change-queue-teste<wbr>r/4055</a><br>
>>>>>><br>
>>>>>><br>
>>>>>> Link to all logs:<br>
>>>>>><br>
>>>>>><br>
>>>>>> <a href="http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4055/artifact/" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/job/o<wbr>virt-master_change-queue-teste<wbr>r/4055/artifact/</a><br>
>>>>>><br>
>>>>>><br>
>>>>>> <a href="http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/4055/artifact/exported-artifacts/upgrade-from-release-suit-master-el7/test_logs/upgrade-from-release-suite-master/post-001_initialize_engine.py/lago-upgrade-from-release-suite-master-engine/_var_log/messages/*view*/" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/job/o<wbr>virt-master_change-queue-teste<wbr>r/4055/artifact/exported-artif<wbr>acts/upgrade-from-release-<wbr>suit-master-el7/test_logs/<wbr>upgrade-from-release-suite-<wbr>master/post-001_initialize_<wbr>engine.py/lago-upgrade-from-<wbr>release-suite-master-engine/_<wbr>var_log/messages/*view*/</a><br>
>>>>>><br>
>>>>>><br>
>>>>>> (Relevant) error snippet from the log:<br>
>>>>>><br>
>>>>>> <error><br>
>>>>>><br>
>>>>>><br>
>>>>>> from lago log:<br>
>>>>>><br>
>>>>>> Failed to start service 'ovirt-imageio-proxy<br>
>>>>>><br>
>>>>>> messages logs:<br>
>>>>>><br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> Starting Session 8 of user root.<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: Traceback (most recent call last):<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/bin/ovirt-imageio-proxy"<wbr>, line 85, in<br>
>>>>>> <module><br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: status = image_proxy.main(args, config)<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File<br>
>>>>>> "/usr/lib/python2.7/site-packa<wbr>ges/ovirt_imageio_proxy/image_<wbr>proxy.py", line<br>
>>>>>> 21, in main<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: image_server.start(config)<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File<br>
>>>>>> "/usr/lib/python2.7/site-packa<wbr>ges/ovirt_imageio_proxy/server<wbr>.py", line 45,<br>
>>>>>> in start<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: WSGIRequestHandler)<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/SocketSe<wbr>rver.py", line 419,<br>
>>>>>> in __init__<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: self.server_bind()<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/wsgiref/<wbr>simple_server.py",<br>
>>>>>> line 48, in server_bind<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: HTTPServer.server_bind(self)<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/BaseHTTP<wbr>Server.py", line<br>
>>>>>> 108, in server_bind<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: SocketServer.TCPServer.server_<wbr>bind(self)<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/SocketSe<wbr>rver.py", line 430,<br>
>>>>>> in server_bind<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: self.socket.bind(self.server_a<wbr>ddress)<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/socket.p<wbr>y", line 224, in<br>
>>>>>> meth<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: return getattr(self._sock,name)(*args<wbr>)<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: socket.error: [Errno 98] Address already in use<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> ovirt-imageio-proxy.service: main process exited, code=exited,<br>
>>>>>> status=1/FAILURE<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> Failed to start oVirt ImageIO Proxy.<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> Unit ovirt-imageio-proxy.service entered failed state.<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> ovirt-imageio-proxy.service failed.<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> ovirt-imageio-proxy.service holdoff time over, scheduling restart.<br>
>>>>>> Nov 23 07:30:47 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> Starting oVirt ImageIO Proxy...<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: Traceback (most recent call last):<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/bin/ovirt-imageio-proxy"<wbr>, line 85, in<br>
>>>>>> <module><br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: status = image_proxy.main(args, config)<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File<br>
>>>>>> "/usr/lib/python2.7/site-packa<wbr>ges/ovirt_imageio_proxy/image_<wbr>proxy.py", line<br>
>>>>>> 21, in main<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: image_server.start(config)<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File<br>
>>>>>> "/usr/lib/python2.7/site-packa<wbr>ges/ovirt_imageio_proxy/server<wbr>.py", line 45,<br>
>>>>>> in start<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: WSGIRequestHandler)<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/SocketSe<wbr>rver.py", line 419,<br>
>>>>>> in __init__<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: self.server_bind()<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/wsgiref/<wbr>simple_server.py",<br>
>>>>>> line 48, in server_bind<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: HTTPServer.server_bind(self)<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/BaseHTTP<wbr>Server.py", line<br>
>>>>>> 108, in server_bind<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: SocketServer.TCPServer.server_<wbr>bind(self)<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/SocketSe<wbr>rver.py", line 430,<br>
>>>>>> in server_bind<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: self.socket.bind(self.server_a<wbr>ddress)<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: File "/usr/lib64/python2.7/socket.p<wbr>y", line 224, in<br>
>>>>>> meth<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: return getattr(self._sock,name)(*args<wbr>)<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine<br>
>>>>>> ovirt-imageio-proxy: socket.error: [Errno 98] Address already in use<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> ovirt-imageio-proxy.service: main process exited, code=exited,<br>
>>>>>> status=1/FAILURE<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> Failed to start oVirt ImageIO Proxy.<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> Unit ovirt-imageio-proxy.service entered failed state.<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> ovirt-imageio-proxy.service failed.<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> ovirt-imageio-proxy.service holdoff time over, scheduling restart.<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> start request repeated too quickly for ovirt-imageio-proxy.service<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> Failed to start oVirt ImageIO Proxy.<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> Unit ovirt-imageio-proxy.service entered failed state.<br>
>>>>>> Nov 23 07:30:48 lago-upgrade-from-release-suit<wbr>e-master-engine systemd:<br>
>>>>>> ovirt-imageio-proxy.service failed.<br>
>>>>>><br>
>>>>>> </error><br>
>>>>>><br>
>>>>>><br>
>>>>>><br>
>>>>>> ______________________________<wbr>_________________<br>
>>>>>> Infra mailing list<br>
>>>>>> <a href="mailto:Infra@ovirt.org" target="_blank">Infra@ovirt.org</a><br>
>>>>>> <a href="http://lists.ovirt.org/mailman/listinfo/infra" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/infra</a><br>
>>>>>><br>
>>>>><br>
>>>>><br>
>>>>><br>
>>>>> ______________________________<wbr>_________________<br>
>>>>> Devel mailing list<br>
>>>>> <a href="mailto:Devel@ovirt.org" target="_blank">Devel@ovirt.org</a><br>
>>>>> <a href="http://lists.ovirt.org/mailman/listinfo/devel" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman<wbr>/listinfo/devel</a><br>
>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>> --<br>
>>>> Gal Ben Haim<br>
>>>> RHV DevOps<br>
>>>><br>
>>><br>
>>><br>
>><br>
>><br>
>><br>
>> --<br>
>> Gal Ben Haim<br>
>> RHV DevOps<br>
<br>
<br>
<br>
</div></div><span class="m_6363799937922607767gmail-HOEnZb"><font color="#888888">--<br>
Didi<br>
</font></span></blockquote></div></div></div><span class="HOEnZb"><font color="#888888"><br><br clear="all"><br>-- <br><div class="m_6363799937922607767gmail_signature">Didi<br></div>
</font></span></div></div>
</blockquote></div><br></div>