<div dir="ltr"><div>Adding some people from infra with mock knowledge.</div><div>If mock doesn't support it then we have an issue, because we can't get rid of mock until we have stateless slaves, which will take some time.</div><div>Another option is to run this test in Lago instead of in the unit tests.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Sep 25, 2016 at 5:08 PM, Nir Soffer <span dir="ltr"><<a href="mailto:nsoffer@redhat.com" target="_blank">nsoffer@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi all,<div><br></div><div>I added a couple of new LVM tests, creating a VG based on a loop</div><div>device and performing several operations on the LVs.</div><div><br></div><div>All the tests pass on my laptop (Fedora 24, no special configuration)</div><div>and on my RHEL host (7.3 beta, configured for vdsm).</div><div><br></div><div>On the CI all the tests fail with the error below.</div><div><br></div><div>It seems that the first issue is having use_lvmetad = 1 in /etc/lvm/lvm.conf</div><div>while lvm2-lvmetad.socket is broken.</div><div><br></div><div>The code tries to run pvscan --cache only if</div><div><br></div><div> systemctl status lvm2-lvmetad.socket</div><div><br></div><div>succeeds - but later we get:</div><div><br></div><div><pre style="white-space:pre-wrap;word-wrap:break-word;margin-top:0px;margin-bottom:0px;color:rgb(51,51,51);font-size:16px">Error: Command ('pvscan', '--cache') failed with rc=5 out='' err=' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n WARNING: Failed to connect to lvmetad. 
Falling back to internal scanning.\n Cannot proceed since lvmetad is not active.\n'</pre></div><div><br></div><div>This error currently hides the previous failure:</div><div><br></div><div><pre style="white-space:pre-wrap;word-wrap:break-word;margin-top:0px;margin-bottom:0px;color:rgb(51,51,51);font-size:16px"><span><b>07:04:30</b> </span>2016-09-25 07:02:55,632 DEBUG [root] (MainThread) /usr/bin/taskset --cpu-list 0-3 lvcreate -n ovirt-lv-1 -L 128m ovirt-vg (cwd None)
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,714 DEBUG [root] (MainThread) FAILED: <err> = ' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n /dev/ovirt-vg/ovirt-lv-1: not found: device not cleared\n Aborting. Failed to wipe start of new LV.\n'; <rc> = 5
</pre></div><div><br></div><div>But it looks like the two failures are related.</div><div><br></div><div>Is it possible to change the CI configuration? Do we have a similar test</div><div>running on the CI?</div><div><br></div><div>See <a href="https://gerrit.ovirt.org/#/c/64367/" target="_blank">https://gerrit.ovirt.org/#/c/64367/</a></div><div>and the next patches in this topic.</div><div><br></div><div>Here are all the tests; all fail in the CI in the same way:</div><div><a href="https://gerrit.ovirt.org/#/c/64370/3/tests/storage_lvm_test.py" target="_blank">https://gerrit.ovirt.org/#/c/64370/3/tests/storage_lvm_test.py</a></div><div><br></div><div>I can filter the tests on the CI, but I'd like them to run automatically.</div><div><br></div><div>Thanks,</div><div>Nir</div><div><br></div><div><pre style="white-space:pre-wrap;word-wrap:break-word;margin-top:0px;margin-bottom:0px;color:rgb(51,51,51);font-size:16px"><span><b>07:04:30</b> </span>======================================================================
<span><b>07:04:30</b> </span>ERROR: test_deactivate_unused_ovirt_<wbr>lvs (storage_lvm_test.<wbr>TestDeactivation)
<span><b>07:04:30</b> </span>------------------------------<wbr>------------------------------<wbr>----------
<span><b>07:04:30</b> </span>Traceback (most recent call last):
<span><b>07:04:30</b> </span> File "/home/jenkins/workspace/vdsm_<wbr>master_check-patch-fc24-x86_<wbr>64/vdsm/tests/testValidation.<wbr>py", line 97, in wrapper
<span><b>07:04:30</b> </span> return f(*args, **kwargs)
<span><b>07:04:30</b> </span> File "/home/jenkins/workspace/vdsm_<wbr>master_check-patch-fc24-x86_<wbr>64/vdsm/tests/storage_lvm_<wbr>test.py", line 74, in test_deactivate_unused_ovirt_<wbr>lvs
<span><b>07:04:30</b> </span> run("vgchange", "-an", "ovirt-vg")
<span><b>07:04:30</b> </span> File "/usr/lib64/python2.7/<wbr>contextlib.py", line 35, in __exit__
<span><b>07:04:30</b> </span> self.gen.throw(type, value, traceback)
<span><b>07:04:30</b> </span> File "/home/jenkins/workspace/vdsm_<wbr>master_check-patch-fc24-x86_<wbr>64/vdsm/tests/storage_lvm_<wbr>test.py", line 129, in fake_env
<span><b>07:04:30</b> </span> run("pvscan", "--cache")
<span><b>07:04:30</b> </span> File "/home/jenkins/workspace/vdsm_<wbr>master_check-patch-fc24-x86_<wbr>64/vdsm/tests/storage_lvm_<wbr>test.py", line 87, in run
<span><b>07:04:30</b> </span> raise cmdutils.Error(cmd, rc, out, err)
<span><b>07:04:30</b> </span>Error: Command ('pvscan', '--cache') failed with rc=5 out='' err=' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n Cannot proceed since lvmetad is not active.\n'
<span><b>07:04:30</b> </span>-------------------- >> begin captured logging << --------------------
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,386 DEBUG [root] (MainThread) /usr/bin/taskset --cpu-list 0-3 losetup --find --show /var/tmp/tmpRGpRw9/backing_<wbr>file (cwd None)
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,400 DEBUG [root] (MainThread) SUCCESS: <err> = ''; <rc> = 0
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,400 DEBUG [test] (MainThread) Using loop device /dev/loop0
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,401 DEBUG [test] (MainThread) Creating ovirt lvs
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,401 DEBUG [root] (MainThread) /usr/bin/taskset --cpu-list 0-3 pvcreate -ff /dev/loop0 (cwd None)
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,495 DEBUG [root] (MainThread) SUCCESS: <err> = ' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n'; <rc> = 0
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,495 DEBUG [root] (MainThread) /usr/bin/taskset --cpu-list 0-3 vgcreate ovirt-vg /dev/loop0 (cwd None)
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,589 DEBUG [root] (MainThread) SUCCESS: <err> = ' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n'; <rc> = 0
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,589 DEBUG [root] (MainThread) /usr/bin/taskset --cpu-list 0-3 vgchange --addtag RHAT_storage_domain (cwd None)
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,631 DEBUG [root] (MainThread) SUCCESS: <err> = ' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n'; <rc> = 0
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,632 DEBUG [root] (MainThread) /usr/bin/taskset --cpu-list 0-3 lvcreate -n ovirt-lv-1 -L 128m ovirt-vg (cwd None)
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,714 DEBUG [root] (MainThread) FAILED: <err> = ' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n /dev/ovirt-vg/ovirt-lv-1: not found: device not cleared\n Aborting. Failed to wipe start of new LV.\n'; <rc> = 5
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,714 DEBUG [root] (MainThread) /usr/bin/taskset --cpu-list 0-3 losetup --detach /dev/loop0 (cwd None)
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,734 DEBUG [root] (MainThread) SUCCESS: <err> = ''; <rc> = 0
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,735 DEBUG [root] (MainThread) /usr/bin/taskset --cpu-list 0-3 /sbin/udevadm settle --timeout=5 (cwd None)
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,746 DEBUG [root] (MainThread) SUCCESS: <err> = ''; <rc> = 0
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,746 DEBUG [root] (MainThread) /usr/bin/taskset --cpu-list 0-3 pvscan --cache (cwd None)
<span><b>07:04:30</b> </span>2016-09-25 07:02:55,758 DEBUG [root] (MainThread) FAILED: <err> = ' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n Cannot proceed since lvmetad is not active.\n'; <rc> = 5
<span><b>07:04:43</b> </span>--------------------- >> end captured logging << ---------------------
</pre></div><div><br></div></div>
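The log above shows the trap: on the CI slave lvm.conf has use_lvmetad = 1, so the test runs pvscan --cache, but /run/lvm/lvmetad.socket does not exist and the command fails with rc=5. A more robust gate would require both the config setting and the socket file before choosing the lvmetad code path. A minimal sketch of that idea follows; the helper names (lvmetad_enabled, rescan_command, should_use_lvmetad) are hypothetical illustrations, not vdsm's actual code:

```python
import os
import re


def lvmetad_enabled(lvm_conf_text):
    # Scan uncommented lines of lvm.conf text for "use_lvmetad = <n>".
    for line in lvm_conf_text.splitlines():
        line = line.split("#", 1)[0]  # drop trailing comments
        m = re.match(r"\s*use_lvmetad\s*=\s*(\d+)", line)
        if m:
            return m.group(1) != "0"
    # If the option is absent, assume lvmetad is not in use.
    return False


def should_use_lvmetad(lvm_conf_text, socket_path="/run/lvm/lvmetad.socket"):
    # Use the lvmetad path only when the config enables it AND the socket
    # actually exists; on the CI slaves use_lvmetad = 1 but the socket is
    # missing, which is exactly the broken combination seen in the logs.
    return lvmetad_enabled(lvm_conf_text) and os.path.exists(socket_path)


def rescan_command(lvm_conf_text, socket_path="/run/lvm/lvmetad.socket"):
    # "pvscan --cache" only makes sense when lvmetad is reachable;
    # otherwise a plain "pvscan" rescans devices directly.
    if should_use_lvmetad(lvm_conf_text, socket_path):
        return ["pvscan", "--cache"]
    return ["pvscan"]
```

With a check like this the test cleanup would fall back to plain pvscan on the broken CI slaves instead of aborting, while still refreshing the lvmetad cache on correctly configured hosts.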
<br>______________________________<wbr>_________________<br>
Infra mailing list<br>
<a href="mailto:Infra@ovirt.org">Infra@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/infra" rel="noreferrer" target="_blank">http://lists.ovirt.org/<wbr>mailman/listinfo/infra</a><br>
<br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>Eyal Edri<br>Associate Manager</div><div>RHV DevOps<br>EMEA ENG Virtualization R&D<br>Red Hat Israel<br><br>phone: +972-9-7692018<br>irc: eedri (on #tlv #rhev-dev #rhev-integ)</div></div></div></div></div></div></div>
</div>