Hi all,
On Mon, Oct 12, 2020 at 5:17 AM <jenkins(a)jenkins.phx.ovirt.org> wrote:
Above failed with:
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_night...
vdsm.log has:
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_night...
2020-10-11 22:05:14,695-0400 INFO (jsonrpc/1) [api.host] FINISH
getJobs return={'jobs': {'05eaea44-7e4c-4442-9926-2bcb696520f1':
{'id': '05eaea44-7e4c-4442-9926-2bcb696520f1', 'status':
'failed',
'description': 'sparsify_volume', 'job_type': 'storage',
'error':
{'code': 100, 'message': 'General Exception: (\'Command
[\\\'/usr/bin/virt-sparsify\\\', \\\'--machine-readable\\\',
\\\'--in-place\\\',
\\\'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share1/8b292c13-fd8a-4a7c-903c-5724ec742c10/images/a367c179-2ac9-4930-abeb-848229f81c97/515fcf06-8743-45d1-9af8-61a0c48e8c67\\\']
failed with rc=1 out=b\\\'3/12\\\\n{ "message": "libguestfs error:
guestfs_launch failed.\\\\\\\\nThis usually means the libguestfs
appliance failed to start or crashed.\\\\\\\\nDo:\\\\\\\\n export
LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1\\\\\\\\nand run the command
again. For further information, read:\\\\\\\\n
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\\\\\\\\nYou
can also run \\\\\\\'libguestfs-test-tool\\\\\\\' and post the
*complete* output\\\\\\\\ninto a bug report or message to the
libguestfs mailing list.", "timestamp":
"2020-10-11T22:05:08.397538670-04:00", "type": "error"
}\\\\n\\\'
err=b"virt-sparsify: error: libguestfs error: guestfs_launch
failed.\\\\nThis usually means the libguestfs appliance failed to
start or crashed.\\\\nDo:\\\\n export LIBGUESTFS_DEBUG=1
LIBGUESTFS_TRACE=1\\\\nand run the command again. For further
information, read:\\\\n
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\\\\nYou
can also run \\\'libguestfs-test-tool\\\' and post the *complete*
output\\\\ninto a bug report or message to the libguestfs mailing
list.\\\\n\\\\nIf reporting bugs, run virt-sparsify with debugging
enabled and include the \\\\ncomplete output:\\\\n\\\\n virt-sparsify
-v -x [...]\\\\n"\',)'}}}, 'status': {'code': 0,
'message': 'Done'}}
from=::ffff:192.168.201.4,43318,
flow_id=365642f4-2fe2-45df-937a-f4ca435eea38 (api:54)
2020-10-11 22:05:14,695-0400 DEBUG (jsonrpc/1) [jsonrpc.JsonRpcServer]
Return 'Host.getJobs' in bridge with the same failed-job payload as
quoted above (__init__:356)
/var/log/messages has:
Oct 11 22:04:51 lago-basic-suite-master-host-0 kvm[80601]: 1 guest now active
Oct 11 22:05:06 lago-basic-suite-master-host-0 journal[80557]: Domain
id=1 name='guestfs-hl0ntvn92rtkk2u0'
uuid=05ea5a53-562f-49f8-a8ca-76b45c5325b4 is tainted: custom-argv
Oct 11 22:05:06 lago-basic-suite-master-host-0 journal[80557]: Domain
id=1 name='guestfs-hl0ntvn92rtkk2u0'
uuid=05ea5a53-562f-49f8-a8ca-76b45c5325b4 is tainted: host-cpu
Oct 11 22:05:06 lago-basic-suite-master-host-0 kvm[80801]: 2 guests now active
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]:
internal error: End of file from qemu monitor
Oct 11 22:05:08 lago-basic-suite-master-host-0 kvm[80807]: 1 guest now active
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]: cannot
resolve symlink /tmp/libguestfseTG8xF/console.sock: No such file or
directory
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]: cannot
resolve symlink /tmp/libguestfseTG8xF/guestfsd.sock: No such file or
directory
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]: cannot
lookup default selinux label for /tmp/libguestfs1WkcF7/overlay1.qcow2
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]: cannot
lookup default selinux label for
/var/tmp/.guestfs-36/appliance.d/kernel
Oct 11 22:05:08 lago-basic-suite-master-host-0 journal[80557]: cannot
lookup default selinux label for
/var/tmp/.guestfs-36/appliance.d/initrd
Oct 11 22:05:08 lago-basic-suite-master-host-0 vdsm[74096]: ERROR Job
'05eaea44-7e4c-4442-9926-2bcb696520f1' failed#012Traceback (most
recent call last):#012 File
"/usr/lib/python3.6/site-packages/vdsm/jobs.py", line 159, in run#012
self._run()#012 File
"/usr/lib/python3.6/site-packages/vdsm/storage/sdm/api/sparsify_volume.py",
line 57, in _run#012
virtsparsify.sparsify_inplace(self._vol_info.path)#012 File
"/usr/lib/python3.6/site-packages/vdsm/virtsparsify.py", line 40, in
sparsify_inplace#012 commands.run(cmd)#012 File
"/usr/lib/python3.6/site-packages/vdsm/common/commands.py", line 101,
in run#012 raise cmdutils.Error(args, p.returncode, out,
err)#012vdsm.common.cmdutils.Error: Command ['/usr/bin/virt-sparsify',
'--machine-readable', '--in-place',
'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_share1/8b292c13-fd8a-4a7c-903c-5724ec742c10/images/a367c179-2ac9-4930-abeb-848229f81c97/515fcf06-8743-45d1-9af8-61a0c48e8c67']
failed with rc=1 out=b'3/12\n{ "message": "libguestfs error:
guestfs_launch failed.\\nThis usually means the libguestfs appliance
failed to start or crashed.\\nDo:\\n export LIBGUESTFS_DEBUG=1
LIBGUESTFS_TRACE=1\\nand run the command again. For further
information, read:\\n
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\\nYou
can also run \'libguestfs-test-tool\' and post the *complete*
output\\ninto a bug report or message to the libguestfs mailing
list.", "timestamp": "2020-10-11T22:05:08.397538670-04:00",
"type":
"error" }\n' err=b"virt-sparsify: error: libguestfs error:
guestfs_launch failed.\nThis usually means the libguestfs appliance
failed to start or crashed.\nDo:\n export LIBGUESTFS_DEBUG=1
LIBGUESTFS_TRACE=1\nand run the command again. For further
information, read:\n
http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs\nYou can
also run 'libguestfs-test-tool' and post the *complete* output\ninto a
bug report or message to the libguestfs mailing list.\n\nIf reporting
bugs, run virt-sparsify with debugging enabled and include the
\ncomplete output:\n\n virt-sparsify -v -x [...]\n"
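For context, the failure path in the traceback above is vdsm's usual
run-and-raise convention: commands.run() executes the child process and
raises on a non-zero exit status, carrying the rc/out/err that then show
up verbatim in the job error. A minimal self-contained sketch of that
pattern (CommandError here is a hypothetical stand-in for
vdsm.common.cmdutils.Error, not vdsm's actual code):

```python
import subprocess


class CommandError(Exception):
    """Hypothetical stand-in for vdsm.common.cmdutils.Error."""

    def __init__(self, cmd, rc, out, err):
        self.cmd = cmd
        self.rc = rc
        self.out = out
        self.err = err
        super().__init__(
            "Command {} failed with rc={} out={!r} err={!r}".format(
                cmd, rc, out, err))


def run(cmd):
    # Run the command, capture stdout/stderr, raise on non-zero rc --
    # the same shape as the "failed with rc=1 out=... err=..." message
    # in the log above.
    p = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if p.returncode != 0:
        raise CommandError(cmd, p.returncode, p.stdout, p.stderr)
    return p.stdout
```
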
The next run of the job (480) did finish successfully. I don't know
whether this was already fixed by a patch, or whether it is simply a
random/environmental issue.
Is it possible to pass LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1 without
patching vdsm? I am not sure. In any case, if it does not cause too much
excess debug logging, it might be better to always pass these, to help
analyze such failures retroactively. Alternatively, patch
virt-sparsify/libguestfs/whatever to always log at least enough
information on failure, even without these variables.
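Passing the variables amounts to injecting them into the child process
environment wherever virt-sparsify is launched. A generic subprocess
sketch of that (this is not vdsm's actual commands.run, and the function
name is made up; it only illustrates the environment injection):

```python
import os
import subprocess


def run_with_guestfs_debug(cmd):
    """Run cmd with libguestfs debugging enabled via the environment.

    Hypothetical helper -- a plain subprocess wrapper, shown only to
    illustrate how LIBGUESTFS_DEBUG/LIBGUESTFS_TRACE would be passed.
    """
    env = dict(os.environ)
    env["LIBGUESTFS_DEBUG"] = "1"
    env["LIBGUESTFS_TRACE"] = "1"
    return subprocess.run(
        cmd, env=env, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
```

With this in place, the extra libguestfs trace output would land on the
child's stderr, which is already captured and logged on failure.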
Best regards,
Build Number: 479
Build Status: Failure
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #479
[hbraha] network: bond active slave test
-----------------
Failed Tests:
-----------------
1 tests failed.
FAILED: basic-suite-master.test-scenarios.004_basic_sanity.test_sparsify_disk1
Error Message:
AssertionError: False != True after 600 seconds
Stack Trace:
api_v4 = <ovirtsdk4.Connection object at 0x7fe717c60e50>
@order_by(_TEST_LIST)
def test_sparsify_disk1(api_v4):
engine = api_v4.system_service()
disk_service = test_utils.get_disk_service(engine, DISK1_NAME)
with test_utils.TestEvent(engine, 1325): # USER_SPARSIFY_IMAGE_START event
disk_service.sparsify()
with test_utils.TestEvent(engine, 1326): # USER_SPARSIFY_IMAGE_FINISH_SUCCESS
> pass
../basic-suite-master/test-scenarios/004_basic_sanity.py:295:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib64/python2.7/contextlib.py:24: in __exit__
self.gen.next()
../ost_utils/ost_utils/engine_utils.py:44: in wait_for_event
lambda:
../ost_utils/ost_utils/assertions.py:97: in assert_true_within_long
assert_equals_within_long(func, True, allowed_exceptions)
../ost_utils/ost_utils/assertions.py:82: in assert_equals_within_long
func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
func = <function <lambda> at 0x7fe7176dfb18>, value = True, timeout = 600
allowed_exceptions = [], initial_wait = 0
error_message = 'False != True after 600 seconds'
def assert_equals_within(
func, value, timeout, allowed_exceptions=None, initial_wait=10,
error_message=None
):
allowed_exceptions = allowed_exceptions or []
with _EggTimer(timeout) as timer:
while not timer.elapsed():
try:
res = func()
if res == value:
return
except Exception as exc:
if _instance_of_any(exc, allowed_exceptions):
time.sleep(3)
continue
LOGGER.exception("Unhandled exception in %s", func)
raise
if initial_wait == 0:
time.sleep(3)
else:
time.sleep(initial_wait)
initial_wait = 0
try:
if error_message is None:
error_message = '%s != %s after %s seconds' % (res, value,
timeout)
> raise AssertionError(error_message)
E AssertionError: False != True after 600 seconds
../ost_utils/ost_utils/assertions.py:60:
AssertionError
_______________________________________________
Infra mailing list -- infra(a)ovirt.org
To unsubscribe send an email to infra-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/infra@ovirt.org/message/65FECSE7EBW...
--
Didi