Adding some people from infra with mock knowledge.
If mock doesn't support it then we have an issue, because we can't get rid
of mock until we have stateless slaves, which will take some time.
Another option is to run this test in Lago instead of unit-tests.
These lvm tests are actually integration tests, testing the behavior of the
kernel, device mapper and lvm in a specific configuration. Knowing that
what vdsm does in this case still works is very valuable.
These tests should not run for each patch - they are relevant only to
the lvm module, so developers can run them manually (one way to tag
them is sketched below). It would be nice if we could run these tests
daily or weekly to catch regressions in the system.
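For example, nose's attrib plugin could be used (a sketch, assuming we
keep running the tests with nose; the "lvm" tag name and the testlib
base class usage here are my assumptions, not the current code):

    # Hypothetical tagging; select the tests with: nosetests -a lvm
    from nose.plugins.attrib import attr
    from testlib import VdsmTestCase

    @attr('lvm')
    class TestDeactivation(VdsmTestCase):
        pass  # test methods here
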
Running these tests in a throwaway vm could be a good way to get a
clean, reproducible environment.
I think we have the same issue with a lot of networking tests that
currently fail in various ways in mock.
Nir
On Sun, Sep 25, 2016 at 5:08 PM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>
> Hi all,
>
> I added a couple of new lvm tests, creating a vg based on a loop
> device and performing several operations on the lvs.
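>
> The setup is essentially this (a simplified sketch paraphrasing the
> test, based on the captured log below; the backing file path is made
> up):
>
>     import subprocess
>
>     backing = "/var/tmp/backing_file"  # hypothetical path, must exist
>     device = subprocess.check_output(
>         ["losetup", "--find", "--show", backing]).strip()
>     subprocess.check_call(["pvcreate", "-ff", device])
>     subprocess.check_call(["vgcreate", "ovirt-vg", device])
>     subprocess.check_call(
>         ["lvcreate", "-n", "ovirt-lv-1", "-L", "128m", "ovirt-vg"])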
>
> All the tests pass on my laptop (fedora 24, no special configuration)
> and my rhel host (7.3 beta, configured for vdsm).
>
> On the CI all the tests fail with the error below.
>
> It seems that the first issue is having use_lvmetad = 1 in
> /etc/lvm/lvm.conf while lvm2-lvmetad.socket is broken.
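>
> That is, the slave's lvm.conf asks for the daemon; the relevant bit
> (as I understand the CI configuration, I have not checked the slave)
> would look like:
>
>     global {
>         use_lvmetad = 1
>     }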
>
> The code tries to run pvscan --cache only if
>
> systemctl status lvm2-lvmetad.socket
>
> succeeds, but later we get:
>
> Error: Command ('pvscan', '--cache') failed with rc=5 out='' err='
> /run/lvm/lvmetad.socket: connect failed: No such file or directory\n
> WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n
> Cannot proceed since lvmetad is not active.\n'
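>
> The check is roughly this (a paraphrased sketch, not the exact test
> code):
>
>     import subprocess
>
>     def lvmetad_socket_ok():
>         # Trust lvmetad only if the socket unit reports healthy.
>         return subprocess.call(
>             ["systemctl", "status", "lvm2-lvmetad.socket"]) == 0
>
>     if lvmetad_socket_ok():
>         # Ask lvmetad to rescan so it sees the new loop device pv.
>         subprocess.check_call(["pvscan", "--cache"])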
>
>
> This error currently hides the previous failure:
>
> 07:04:30 2016-09-25 07:02:55,632 DEBUG [root] (MainThread)
> /usr/bin/taskset --cpu-list 0-3 lvcreate -n ovirt-lv-1 -L 128m ovirt-vg (cwd
> None)
> 07:04:30 2016-09-25 07:02:55,714 DEBUG [root] (MainThread) FAILED: <err>
> = ' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n
> WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n
> /dev/ovirt-vg/ovirt-lv-1: not found: device not cleared\n Aborting. Failed
> to wipe start of new LV.\n'; <rc> = 5
>
>
> But it looks like the two failures are related.
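>
> One idea that might make the tests independent of the slave's lvmetad
> state (an untested sketch, not what the tests do today) is forcing
> internal scanning per command with lvm's --config override:
>
>     import subprocess
>
>     LVM_CONFIG = "global {use_lvmetad=0}"
>
>     def run_lvm(cmd, *args):
>         # Bypass lvmetad so a broken socket cannot fail the command.
>         subprocess.check_call([cmd, "--config", LVM_CONFIG] + list(args))
>
>     run_lvm("lvcreate", "-n", "ovirt-lv-1", "-L", "128m", "ovirt-vg")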
>
> Is it possible to change the CI configuration? Do we have a similar test
> running on the CI?
>
> See https://gerrit.ovirt.org/#/c/64367/
> and the next patches in this topic.
>
> Here are all the tests; all of them fail in the CI in the same way:
>
> https://gerrit.ovirt.org/#/c/64370/3/tests/storage_lvm_test.py
>
> I can filter the tests on the CI, but I'd like them to run automatically.
>
> Thanks,
> Nir
>
> 07:04:30
> ======================================================================
> 07:04:30 ERROR: test_deactivate_unused_ovirt_lvs
> (storage_lvm_test.TestDeactivation)
> 07:04:30
> ----------------------------------------------------------------------
> 07:04:30 Traceback (most recent call last):
> 07:04:30 File "/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/testValidation.py", line 97, in wrapper
> 07:04:30 return f(*args, **kwargs)
> 07:04:30 File "/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/storage_lvm_test.py", line 74, in test_deactivate_unused_ovirt_lvs
> 07:04:30 run("vgchange", "-an", "ovirt-vg")
> 07:04:30 File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__
> 07:04:30 self.gen.throw(type, value, traceback)
> 07:04:30 File "/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/storage_lvm_test.py", line 129, in fake_env
> 07:04:30 run("pvscan", "--cache")
> 07:04:30 File "/home/jenkins/workspace/vdsm_master_check-patch-fc24-x86_64/vdsm/tests/storage_lvm_test.py", line 87, in run
> 07:04:30 raise cmdutils.Error(cmd, rc, out, err)
> 07:04:30 Error: Command ('pvscan', '--cache') failed with rc=5 out=''
> err=' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n
> WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n
> Cannot proceed since lvmetad is not active.\n'
> 07:04:30 -------------------- >> begin captured logging <<
> --------------------
> 07:04:30 2016-09-25 07:02:55,386 DEBUG [root] (MainThread)
> /usr/bin/taskset --cpu-list 0-3 losetup --find --show
> /var/tmp/tmpRGpRw9/backing_file (cwd None)
> 07:04:30 2016-09-25 07:02:55,400 DEBUG [root] (MainThread) SUCCESS:
> <err> = ''; <rc> = 0
> 07:04:30 2016-09-25 07:02:55,400 DEBUG [test] (MainThread) Using loop
> device /dev/loop0
> 07:04:30 2016-09-25 07:02:55,401 DEBUG [test] (MainThread) Creating
> ovirt lvs
> 07:04:30 2016-09-25 07:02:55,401 DEBUG [root] (MainThread)
> /usr/bin/taskset --cpu-list 0-3 pvcreate -ff /dev/loop0 (cwd None)
> 07:04:30 2016-09-25 07:02:55,495 DEBUG [root] (MainThread) SUCCESS:
> <err> = ' /run/lvm/lvmetad.socket: connect failed: No such file or
> directory\n WARNING: Failed to connect to lvmetad. Falling back to internal
> scanning.\n'; <rc> = 0
> 07:04:30 2016-09-25 07:02:55,495 DEBUG [root] (MainThread)
> /usr/bin/taskset --cpu-list 0-3 vgcreate ovirt-vg /dev/loop0 (cwd None)
> 07:04:30 2016-09-25 07:02:55,589 DEBUG [root] (MainThread) SUCCESS:
> <err> = ' /run/lvm/lvmetad.socket: connect failed: No such file or
> directory\n WARNING: Failed to connect to lvmetad. Falling back to internal
> scanning.\n'; <rc> = 0
> 07:04:30 2016-09-25 07:02:55,589 DEBUG [root] (MainThread)
> /usr/bin/taskset --cpu-list 0-3 vgchange --addtag RHAT_storage_domain (cwd
> None)
> 07:04:30 2016-09-25 07:02:55,631 DEBUG [root] (MainThread) SUCCESS:
> <err> = ' /run/lvm/lvmetad.socket: connect failed: No such file or
> directory\n WARNING: Failed to connect to lvmetad. Falling back to internal
> scanning.\n'; <rc> = 0
> 07:04:30 2016-09-25 07:02:55,632 DEBUG [root] (MainThread)
> /usr/bin/taskset --cpu-list 0-3 lvcreate -n ovirt-lv-1 -L 128m ovirt-vg (cwd
> None)
> 07:04:30 2016-09-25 07:02:55,714 DEBUG [root] (MainThread) FAILED: <err>
> = ' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n
> WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n
> /dev/ovirt-vg/ovirt-lv-1: not found: device not cleared\n Aborting. Failed
> to wipe start of new LV.\n'; <rc> = 5
> 07:04:30 2016-09-25 07:02:55,714 DEBUG [root] (MainThread)
> /usr/bin/taskset --cpu-list 0-3 losetup --detach /dev/loop0 (cwd None)
> 07:04:30 2016-09-25 07:02:55,734 DEBUG [root] (MainThread) SUCCESS:
> <err> = ''; <rc> = 0
> 07:04:30 2016-09-25 07:02:55,735 DEBUG [root] (MainThread)
> /usr/bin/taskset --cpu-list 0-3 /sbin/udevadm settle --timeout=5 (cwd None)
> 07:04:30 2016-09-25 07:02:55,746 DEBUG [root] (MainThread) SUCCESS:
> <err> = ''; <rc> = 0
> 07:04:30 2016-09-25 07:02:55,746 DEBUG [root] (MainThread)
> /usr/bin/taskset --cpu-list 0-3 pvscan --cache (cwd None)
> 07:04:30 2016-09-25 07:02:55,758 DEBUG [root] (MainThread) FAILED: <err>
> = ' /run/lvm/lvmetad.socket: connect failed: No such file or directory\n
> WARNING: Failed to connect to lvmetad. Falling back to internal scanning.\n
> Cannot proceed since lvmetad is not active.\n'; <rc> = 5
> 07:04:43 --------------------- >> end captured logging <<
> ---------------------
>
>
>