<div dir="ltr">Gal and Daniel are looking into it; strange that it's not affecting all suites.</div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Mar 19, 2018 at 2:11 PM, Dominik Holler <span dir="ltr"><<a href="mailto:dholler@redhat.com" target="_blank">dholler@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Looks like /dev/shm has run out of space.<br>
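If /dev/shm is the suspect, a quick check on the slave could look like this (a sketch, assuming a standard Linux tmpfs mount at /dev/shm):<br>
<br>
```shell
# How full is the tmpfs that backs POSIX semaphores and shared memory?
df -h /dev/shm
# Leftover semaphore/shm files from killed workers can eat this space:
ls -la /dev/shm | head
```
<br>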
<div><div class="h5"><br>
On Mon, 19 Mar 2018 13:33:28 +0200<br>
Leon Goldberg <<a href="mailto:lgoldber@redhat.com">lgoldber@redhat.com</a>> wrote:<br>
<br>
> Hey, any updates?<br>
><br>
> On Sun, Mar 18, 2018 at 10:44 AM, Edward Haas <<a href="mailto:ehaas@redhat.com">ehaas@redhat.com</a>><br>
> wrote:<br>
><br>
> > We are doing nothing special there, just executing Ansible through<br>
> > its API.<br>
> ><br>
> > On Sun, Mar 18, 2018 at 10:42 AM, Daniel Belenky<br>
> > <<a href="mailto:dbelenky@redhat.com">dbelenky@redhat.com</a>> wrote:<br>
> ><br>
> >> It's not a space issue. Other suites ran successfully on that<br>
> >> slave after yours.<br>
> >> I think the problem is the max-semaphores setting, though<br>
> >> I don't know what you're doing to reach that limit.<br>
> >><br>
> >> [dbelenky@ovirt-srv18 ~]$ ipcs -ls<br>
> >><br>
> >> ------ Semaphore Limits --------<br>
> >> max number of arrays = 128<br>
> >> max semaphores per array = 250<br>
> >> max semaphores system wide = 32000<br>
> >> max ops per semop call = 32<br>
> >> semaphore max value = 32767<br>
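One hedged caveat about the limits above: `ipcs -ls` reports System V semaphore limits, while CPython's multiprocessing allocates POSIX semaphores via sem_open(), which live on the tmpfs at /dev/shm. An ENOSPC from SemLock therefore usually points at a full /dev/shm rather than the SysV limits. A minimal sketch to check from Python:<br>
<br>
```python
# Sketch: multiprocessing.Lock() allocates a POSIX semaphore via sem_open(),
# backed by the tmpfs mounted at /dev/shm (not the SysV tables `ipcs -ls`
# shows). If that tmpfs is full, this constructor raises OSError(ENOSPC).
import multiprocessing
import os

lock = multiprocessing.Lock()  # the same kind of lock the traceback creates

st = os.statvfs('/dev/shm')
free_bytes = st.f_bavail * st.f_frsize
print('free bytes on /dev/shm:', free_bytes)
```
<br>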
> >><br>
> >><br>
> >> On Sun, Mar 18, 2018 at 10:31 AM, Edward Haas <<a href="mailto:ehaas@redhat.com">ehaas@redhat.com</a>><br>
> >> wrote:<br>
> >>> <a href="http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-master/" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-master/</a><br>
> >>><br>
> >>> On Sun, Mar 18, 2018 at 10:24 AM, Daniel Belenky<br>
> >>> <<a href="mailto:dbelenky@redhat.com">dbelenky@redhat.com</a>> wrote:<br>
> >>><br>
> >>>> Hi Edi,<br>
> >>>><br>
> >>>> Are there any logs? Where are you running the suite? May I have<br>
> >>>> a link?<br>
> >>>><br>
> >>>> On Sun, Mar 18, 2018 at 8:20 AM, Edward Haas <<a href="mailto:ehaas@redhat.com">ehaas@redhat.com</a>><br>
> >>>> wrote:<br>
> >>>>> Good morning,<br>
> >>>>><br>
> >>>>> In the OST network suite we are running a test module with<br>
> >>>>> Ansible, and over the weekend it started failing with "OSError:<br>
> >>>>> [Errno 28] No space left on device" when attempting to take a<br>
> >>>>> lock in the Python multiprocessing module.<br>
> >>>>><br>
> >>>>> It smells like a slave resource problem, could someone help<br>
> >>>>> investigate this?<br>
> >>>>><br>
> >>>>> Thanks,<br>
> >>>>> Edy.<br>
> >>>>><br>
> >>>>> =================================== FAILURES ===================================<br>
> >>>>> ______________________ test_ovn_provider_create_scenario _______________________<br>
> >>>>><br>
> >>>>> os_client_config = None<br>
> >>>>><br>
> >>>>>     def test_ovn_provider_create_scenario(os_client_config):<br>
> >>>>> >       _test_ovn_provider('create_scenario.yml')<br>
> >>>>><br>
> >>>>> network-suite-master/tests/test_ovn_provider.py:68:<br>
> >>>>> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br>
> >>>>> network-suite-master/tests/test_ovn_provider.py:78: in _test_ovn_provider<br>
> >>>>>     playbook.run()<br>
> >>>>> network-suite-master/lib/ansiblelib.py:127: in run<br>
> >>>>>     self._run_playbook_executor()<br>
> >>>>> network-suite-master/lib/ansiblelib.py:138: in _run_playbook_executor<br>
> >>>>>     pbex = PlaybookExecutor(**self._pbex_args)<br>
> >>>>> /usr/lib/python2.7/site-packages/ansible/executor/playbook_executor.py:60: in __init__<br>
> >>>>>     self._tqm = TaskQueueManager(inventory=inventory, variable_manager=variable_manager, loader=loader, options=options, passwords=self.passwords)<br>
> >>>>> /usr/lib/python2.7/site-packages/ansible/executor/task_queue_manager.py:104: in __init__<br>
> >>>>>     self._final_q = multiprocessing.Queue()<br>
> >>>>> /usr/lib64/python2.7/multiprocessing/__init__.py:218: in Queue<br>
> >>>>>     return Queue(maxsize)<br>
> >>>>> /usr/lib64/python2.7/multiprocessing/queues.py:63: in __init__<br>
> >>>>>     self._rlock = Lock()<br>
> >>>>> /usr/lib64/python2.7/multiprocessing/synchronize.py:147: in __init__<br>
> >>>>>     SemLock.__init__(self, SEMAPHORE, 1, 1)<br>
> >>>>> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _<br>
> >>>>><br>
> >>>>> self = <Lock(owner=unknown)>, kind = 1, value = 1, maxvalue = 1<br>
> >>>>><br>
> >>>>>     def __init__(self, kind, value, maxvalue):<br>
> >>>>> >       sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)<br>
> >>>>> E   OSError: [Errno 28] No space left on device<br>
> >>>>><br>
> >>>>> /usr/lib64/python2.7/multiprocessing/synchronize.py:75: OSError<br>
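The chain in the traceback can be reproduced outside Ansible; a minimal sketch (multiprocessing.Queue is exactly what TaskQueueManager constructs at the failing line):<br>
<br>
```python
# Minimal reproduction of the failing path: TaskQueueManager builds a
# multiprocessing.Queue, whose internal Lock allocates a POSIX semaphore;
# when the tmpfs behind /dev/shm is full, that allocation fails with
# OSError [Errno 28] (ENOSPC), exactly as in the traceback above.
import errno
import multiprocessing

try:
    q = multiprocessing.Queue()  # same call as task_queue_manager.py:104
    q.put('ok')
    print(q.get())               # round-trips when /dev/shm has room
except OSError as exc:
    # This is the failure the suite hit on the slave:
    assert exc.errno == errno.ENOSPC
```
<br>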
> >>>>><br>
> >>>>><br>
> >>>><br>
> >>>><br>
> >>>> --<br>
> >>>><br>
> >>>> DANIEL BELENKY<br>
> >>>><br>
> >>>> RHV DEVOPS<br>
> >>>><br>
> >>><br>
> >>><br>
> >><br>
> >><br>
> >> --<br>
> >><br>
> >> DANIEL BELENKY<br>
> >><br>
> >> RHV DEVOPS<br>
> >><br>
> ><br>
> ><br>
<br>
</div></div>_______________________________________________<br>
Infra mailing list<br>
<a href="mailto:Infra@ovirt.org">Infra@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/infra" rel="noreferrer" target="_blank">http://lists.ovirt.org/mailman/listinfo/infra</a><br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr">Eyal Edri<br>Manager, RHV DevOps<br>EMEA Virtualization R&D<br><a href="https://www.redhat.com/" target="_blank">Red Hat EMEA</a><br>phone: +972-9-7692018<br>irc: eedri (on #tlv #rhev-dev #rhev-integ)</div></div>
</div>