<div dir="ltr"><div><div><div>Thanks Milan, <br><br></div>Can you post the fix that you added on the mail? <br><br></div>Thanks, <br></div>Dafna<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Mar 28, 2018 at 9:23 AM, Milan Zamazal <span dir="ltr"><<a href="mailto:mzamazal@redhat.com" target="_blank">mzamazal@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">Milan Zamazal <<a href="mailto:mzamazal@redhat.com">mzamazal@redhat.com</a>> writes:<br>
<br>
>> Dafna Ron <dron@redhat.com> writes:
>>
>>> We have a failure that seems to be random and happening in several
>>> projects.
>>
>> Does this failure occur only recently, or has it been present for ages?
>>
>>> From what I can see, we are failing due to a timing issue in the test
>>> itself, because we are querying the VM after it has been destroyed in
>>> Engine. Looking at Engine, I can see that the VM was actually shut down.
>>
>> No, the VM was shut down in another test. It's already running again in
>> hotplug_cpu.
>>
>>> I would like to disable this test until we can fix the issue, since it
>>> has already failed about 7 different patches from different projects.
>>
>> I can see that Gal has already increased the timeout. I think the test
>> could be split to reduce the waiting delay; I'll post a patch for that.
>
> BTW, I think the trouble is primarily caused by the infamous Cirros
> networking recovery problems.
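
For reference, a minimal sketch of the polling-with-timeout pattern under
discussion, in Python. The get_vm_status callable is a hypothetical
stand-in for whatever Engine query the real hotplug_cpu test performs, not
the actual OST code. Splitting the test, as suggested above, would amount
to waiting for each intermediate VM state with its own, shorter deadline
rather than one long wait:

    import time

    def wait_for_vm_status(get_vm_status, expected, timeout=300, interval=3):
        # Poll until the VM reports the expected status or the deadline
        # passes; raise if it never does.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                status = get_vm_status()
            except Exception:
                # The VM may transiently be gone (e.g. right after a
                # destroy in a previous test); treat that as "not there
                # yet" and keep polling.
                status = None
            if status == expected:
                return
            time.sleep(interval)
        raise TimeoutError(
            'VM did not reach %r within %s seconds' % (expected, timeout))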