[VDSM] Fedora rawhide status
by Nir Soffer
We did not update Fedora rawhide for a few months since a package
was missing. Now that this issue is solved, we have new errors:
1. async is a keyword in Python 3.7
Compiling './tests/storage/fakesanlock.py'...
*** File "./tests/storage/fakesanlock.py", line 65
async=False):
^
SyntaxError: invalid syntax
We have many of these. The issue is that the sanlock API uses the kwarg "async",
which Python 3.7 turned into a reserved keyword, making this invalid syntax.
$ ./python
Python 3.7.0+ (heads/3.7:426135b674, Aug 9 2018, 22:50:16)
[GCC 8.1.1 20180712 (Red Hat 8.1.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def foo(async=False):
File "<stdin>", line 1
def foo(async=False):
^
SyntaxError: invalid syntax
>>> async = True
File "<stdin>", line 1
async = True
^
SyntaxError: invalid syntax
Thank you python developers for making our life more interesting :-)
So we will have to change the sanlock Python binding to replace "async"
with something else.
I'll file a sanlock bug for this.
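A minimal sketch of the kind of change needed, assuming the binding grows a
new keyword (the name "wait" and the _add_lockspace helper below are made up
for illustration; this is not sanlock's real signature), while "async" is
still accepted from Python 2 callers or from callers passing **{"async": True}:

    # Illustrative only - not the real sanlock binding.
    def add_lockspace(lockspace, host_id, path, offset=0, wait=True, **kwargs):
        # "async" can no longer appear in a def() under Python 3.7, but it can
        # still arrive via **kwargs; translate it to the new keyword here.
        if "async" in kwargs:
            wait = not kwargs.pop("async")
        if kwargs:
            raise TypeError("unexpected keyword arguments: %r" % sorted(kwargs))
        return _add_lockspace(lockspace, host_id, path, offset, wait)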
2. test_sourceroute_add_remove_and_read fails
No idea why it fails; hopefully Dan or Edward has a clue.
FAIL: test_sourceroute_add_remove_and_read
(network.sourceroute_test.TestSourceRoute)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/vdsm/tests/testValidation.py", line 193, in wrapper
return f(*args, **kwargs)
File "/vdsm/tests/network/sourceroute_test.py", line 80, in
test_sourceroute_add_remove_and_read
self.assertEqual(2, len(routes), routes)
AssertionError: 2 != 0
-------------------- >> begin captured logging << --------------------
2018-08-09 19:36:31,446 DEBUG (MainThread) [root] /sbin/ip link add
name dummy_UTpMy type dummy (cwd None) (cmdutils:151)
2018-08-09 19:36:31,455 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,456 DEBUG (netlink/events) [root] START thread
<Thread(netlink/events, started daemon 140332374083328)> (func=<bound
method Monitor._scan of <vdsm.network.netlink.monitor.Monitor object
at 0x7fa1b18dc6d0>>, args=(), kwargs={}) (concurrent:193)
2018-08-09 19:36:31,456 DEBUG (MainThread) [root] /sbin/ip link set
dev dummy_UTpMy up (cwd None) (cmdutils:151)
2018-08-09 19:36:31,463 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,464 DEBUG (netlink/events) [root] FINISH thread
<Thread(netlink/events, started daemon 140332374083328)>
(concurrent:196)
2018-08-09 19:36:31,471 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,471 DEBUG (MainThread) [root] Adding source route
for device dummy_UTpMy (sourceroute:195)
2018-08-09 19:36:31,472 DEBUG (MainThread) [root] /sbin/ip -4 route
add 0.0.0.0/0 via 192.168.99.2 dev dummy_UTpMy table 3232260865 (cwd
None) (cmdutils:151)
2018-08-09 19:36:31,478 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,479 DEBUG (MainThread) [root] /sbin/ip -4 route
add 192.168.99.0/29 via 192.168.99.1 dev dummy_UTpMy table 3232260865
(cwd None) (cmdutils:151)
2018-08-09 19:36:31,485 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,485 DEBUG (MainThread) [root] /sbin/ip rule add
from 192.168.99.0/29 prio 32000 table 3232260865 (cwd None)
(cmdutils:151)
2018-08-09 19:36:31,492 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,492 DEBUG (MainThread) [root] /sbin/ip rule add
from all to 192.168.99.0/29 dev dummy_UTpMy prio 32000 table
3232260865 (cwd None) (cmdutils:151)
2018-08-09 19:36:31,498 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,499 DEBUG (MainThread) [root] /sbin/ip rule (cwd
None) (cmdutils:151)
2018-08-09 19:36:31,505 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse
rule 32000: from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached]
lookup 3232260865 (iproute2:60)
2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse
rule 32000: from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached]
lookup 3232260865 (iproute2:60)
2018-08-09 19:36:31,505 WARNING (MainThread) [root] Could not parse
rule 32000: from all to 192.168.99.0 /29 iif dummy_UTpMy lookup
3232260865 (iproute2:60)
2018-08-09 19:36:31,506 DEBUG (MainThread) [root] /sbin/ip rule (cwd
None) (cmdutils:151)
2018-08-09 19:36:31,512 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
2018-08-09 19:36:31,512 WARNING (MainThread) [root] Could not parse
rule 32000: from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached]
lookup 3232260865 (iproute2:60)
2018-08-09 19:36:31,512 WARNING (MainThread) [root] Could not parse
rule 32000: from all to 192.168.99.0 /29 iif dummy_d3SHQ [detached]
lookup 3232260865 (iproute2:60)
2018-08-09 19:36:31,513 WARNING (MainThread) [root] Could not parse
rule 32000: from all to 192.168.99.0 /29 iif dummy_UTpMy lookup
3232260865 (iproute2:60)
2018-08-09 19:36:31,513 DEBUG (MainThread) [root] Removing source
route for device dummy_UTpMy (sourceroute:215)
2018-08-09 19:36:31,513 DEBUG (MainThread) [root] /sbin/ip link del
dev dummy_UTpMy (cwd None) (cmdutils:151)
2018-08-09 19:36:31,532 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (cmdutils:159)
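Note the repeated "Could not parse rule" warnings from iproute2:60 above - the
rule text produced on rawhide apparently does not match what the parser
expects. Purely as an illustration (this is not VDSM's actual parser), a
whitespace-tolerant parse of such a line could look like this:

    import re

    # Hypothetical sketch: parse one line of `ip rule` output, e.g.
    #   "32000: from all to 192.168.99.0/29 iif dummy_UTpMy lookup 3232260865"
    # tolerating a space before the prefix length and an optional "[detached]".
    RULE_RE = re.compile(
        r"(?P<priority>\d+):\s+"
        r"from\s+(?P<src>\S+(?:\s*/\d+)?)"
        r"(?:\s+to\s+(?P<dst>\S+(?:\s*/\d+)?))?"
        r"(?:\s+iif\s+(?P<iif>\S+))?"
        r"(?:\s+\[detached\])?"
        r"(?:\s+lookup\s+(?P<table>\S+))?")

    def parse_rule(line):
        match = RULE_RE.match(line.strip())
        if match is None:
            raise ValueError("Could not parse rule %r" % line)
        return match.groupdict()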
3. qemu-io now returns a non-zero code on failures, breaking our tests
This command used to return 0 on failures, and the only way to detect
failures was to check the output. Now it has been fixed to return a
non-zero code, and this breaks our tests.
I think this should be an easy fix.
Here is an example:
_____________________________ test_no_match[qcow2] _____________________________
tmpdir = local('/var/tmp/vdsm/test_no_match_qcow2_0'), image_format = 'qcow2'
def test_no_match(tmpdir, image_format):
path = str(tmpdir.join('test.' + image_format))
op = qemuimg.create(path, '1m', image_format)
op.run()
qemuio.write_pattern(path, image_format, pattern=2)
with pytest.raises(qemuio.VerificationError):
> qemuio.verify_pattern(path, image_format, pattern=4)
storage/qemuio_test.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = '/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2', format = 'qcow2'
offset = 512, len = 1024, pattern = 4
def verify_pattern(path, format, offset=512, len=1024, pattern=5):
read_cmd = 'read -P %d -s 0 -l %d %d %d' % (pattern, len, offset, len)
cmd = ['qemu-io', '-f', format, '-c', read_cmd, path]
rc, out, err = commands.execCmd(cmd, raw=True)
if rc != 0 or err != b"":
> raise cmdutils.Error(cmd, rc, out, err)
E Error: Command ['qemu-io', '-f', 'qcow2', '-c', 'read -P 4
-s 0 -l 1024 512 1024',
'/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2'] failed with rc=1
out='Pattern verification failed at offset 512, 1024 bytes\nread
1024/1024 bytes at offset 512\n1 KiB, 1 ops; 0.0001 sec (7.570 MiB/sec
and 7751.9380 ops/sec)\n' err=''
storage/qemuio.py:50: Error
----------------------------- Captured stderr call -----------------------------
2018-08-09 19:37:32,126 DEBUG (MainThread) [storage.operation]
/usr/bin/taskset --cpu-list 0-1 /usr/bin/nice -n 19 /usr/bin/ionice -c
3 /usr/bin/qemu-img create -f qcow2 -o compat=0.10
/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2 1m (cwd None)
(operation:150)
2018-08-09 19:37:32,148 DEBUG (MainThread) [storage.operation]
SUCCESS: <err> = ''; <rc> = 0 (operation:169)
2018-08-09 19:37:32,148 DEBUG (MainThread) [root] /usr/bin/taskset
--cpu-list 0-1 qemu-io -f qcow2 -c 'write -P 2 512 1024'
/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2 (cwd None)
(commands:66)
2018-08-09 19:37:32,170 DEBUG (MainThread) [root] SUCCESS: <err> = '';
<rc> = 0 (commands:87)
2018-08-09 19:37:32,171 DEBUG (MainThread) [root] /usr/bin/taskset
--cpu-list 0-1 qemu-io -f qcow2 -c 'read -P 4 -s 0 -l 1024 512 1024'
/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2 (cwd None)
(commands:66)
2018-08-09 19:37:32,181 DEBUG (MainThread) [root] FAILED: <err> = '';
<rc> = 1 (commands:87)
------------------------------ Captured log call -------------------------------
operation.py 150 DEBUG /usr/bin/taskset --cpu-list
0-1 /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/qemu-img create
-f qcow2 -o compat=0.10 /var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2
1m (cwd None)
operation.py 169 DEBUG SUCCESS: <err> = ''; <rc> = 0
commands.py 66 DEBUG /usr/bin/taskset --cpu-list
0-1 qemu-io -f qcow2 -c 'write -P 2 512 1024'
/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2 (cwd None)
commands.py 87 DEBUG SUCCESS: <err> = ''; <rc> = 0
commands.py 66 DEBUG /usr/bin/taskset --cpu-list
0-1 qemu-io -f qcow2 -c 'read -P 4 -s 0 -l 1024 512 1024'
/var/tmp/vdsm/test_no_match_qcow2_0/test.qcow2 (cwd None)
commands.py 87 DEBUG FAILED: <err> = ''; <rc> = 1
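One possible direction (only a sketch, based on the verify_pattern helper
shown in the traceback above, and assuming the existing
qemuio.VerificationError, cmdutils.Error and commands.execCmd helpers):
detect the pattern mismatch from the output before the generic rc check, so
old and new qemu-io behave the same:

    def verify_pattern(path, format, offset=512, len=1024, pattern=5):
        read_cmd = 'read -P %d -s 0 -l %d %d %d' % (pattern, len, offset, len)
        cmd = ['qemu-io', '-f', format, '-c', read_cmd, path]
        rc, out, err = commands.execCmd(cmd, raw=True)
        if b"Pattern verification failed" in out:
            # Newer qemu-io exits with rc=1 here; older versions returned 0.
            raise VerificationError(
                "Verification of %r failed: %r" % (path, out))
        if rc != 0 or err != b"":
            raise cmdutils.Error(cmd, rc, out, err)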
4. pywatch tests still fail
Looks like our on_fedora() helper is broken, so this test is not marked as
xfail.
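For context, such a helper usually just inspects the distro. A minimal,
hypothetical sketch (not necessarily what our testlib does) based on
/etc/os-release would be:

    # Hypothetical distro check; the real on_fedora() helper may differ.
    def on_fedora():
        try:
            with open("/etc/os-release") as f:
                os_release = f.read()
        except IOError:
            return False
        return "ID=fedora" in os_release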
______________________ TestPyWatch.test_timeout_backtrace ______________________
self = <pywatch_test.TestPyWatch object at 0x7f5cf42899d0>
@pytest.mark.xfail(on_fedora(), reason="py-bt is broken on Fedora 27")
@pytest.mark.xfail(on_ovirt_ci(),
reason="py-bt randomly unavailable on EL7 nodes")
def test_timeout_backtrace(self):
script = '''
import time
def outer():
inner()
def inner():
time.sleep(10)
outer()
'''
rc, out, err = exec_cmd(['./py-watch', '0.1', 'python', '-c', script])
> assert b'in inner ()' in out
E AssertionError: assert 'in inner ()' in
'=============================================================\n=
Watched process timed out ... Terminating
watched process
=\n=============================================================\n'
pywatch_test.py:74: AssertionError
------------------------------ Captured log call -------------------------------
cmdutils.py 151 DEBUG ./py-watch 0.1 python -c '
import time
def outer():
inner()
def inner():
time.sleep(10)
outer()
' (cwd None)
cmdutils.py 159 DEBUG FAILED: <err> = '\nwarning:
Loadable section ".note.gnu.property" outside of ELF
segments\n\nwarning: Loadable section ".note.gnu.property" outside of
ELF segments\n'; <rc> = 143
5. ovs tests that pass on Fedora 28 fail on rawhide
Not sure why the tests pass on Fedora 28 but fail on CentOS and Fedora
rawhide.
________ ERROR at setup of TestOvsApiBase.test_execute_a_single_command ________
@pytest.fixture(scope='session', autouse=True)
def ovs_service():
service = OvsService()
with xfail_when_running_on_travis_with_centos():
> service.setup()
network/integration/ovs/conftest.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f79737589d0>
def setup(self):
if not self.ovs_init_state_is_up:
cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
> assert self.is_service_running()
E AssertionError
network/ovsnettestlib.py:39: AssertionError
------------------------------ Captured log setup ------------------------------
cmdutils.py 151 DEBUG
/usr/share/openvswitch/scripts/ovs-ctl status (cwd None)
cmdutils.py 159 DEBUG FAILED: <err> = ''; <rc> = 1
cmdutils.py 151 DEBUG
/usr/share/openvswitch/scripts/ovs-ctl --system-id=random start (cwd
None)
cmdutils.py 159 DEBUG FAILED: <err> = 'modprobe:
FATAL: Module openvswitch not found in directory
/lib/modules/4.4.0-104-generic\nrmmod: ERROR: Module bridge is in use
by: br_netfilter\n'; <rc> = 1
cmdutils.py 151 DEBUG
/usr/share/openvswitch/scripts/ovs-ctl status (cwd None)
cmdutils.py 159 DEBUG FAILED: <err> = ''; <rc> = 1
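The modprobe failure above suggests the openvswitch kernel module simply is
not available for the running kernel in that environment. A possible guard
(only a sketch; the module check is illustrative, and the import path and
teardown() are assumed from the conftest shown above) would be to skip the
whole session when the module cannot be resolved:

    import subprocess

    import pytest

    from network.ovsnettestlib import OvsService

    def openvswitch_module_available():
        # Dry-run modprobe: non-zero means the module cannot be resolved
        # for the running kernel, so ovs-ctl start is bound to fail.
        return subprocess.call(
            ['modprobe', '--dry-run', '--quiet', 'openvswitch']) == 0

    @pytest.fixture(scope='session', autouse=True)
    def ovs_service():
        if not openvswitch_module_available():
            pytest.skip('openvswitch kernel module not available')
        service = OvsService()
        service.setup()
        yield service
        service.teardown()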
Otherwise all tests pass, so we are in pretty good shape :-)
Nir
Veritas: Image Transfer Finalize Call Failure
by Pavan Chavva
Hi Team,
Can anyone help answer this question?
Best,
Pavan.
---------- Forwarded message ---------
From: Ketan Pachpande <Ketan.Pachpande(a)veritas.com>
Date: Thu, Aug 9, 2018 at 9:44 AM
Subject: RE: [EXTERNAL] Updated invitation: RHV- Veritas Netbackup Weekly
Sync (Tentative) @ Weekly from 10am to 10:30am on Thursday (EDT) (
ketan.pachpande(a)veritas.com)
To: pchavva(a)redhat.com <pchavva(a)redhat.com>, Abhay Marode <
Abhay.Marode(a)veritas.com>, Suchitra Herwadkar <
Suchitra.Herwadkar(a)veritas.com>, Mahesh Falmari <Mahesh.Falmari(a)veritas.com>,
Sudhakar Paulzagade <Sudhakar.Paulzagade(a)veritas.com>, Navin Tah <
Navin.Tah(a)veritas.com>
Cc: ydary(a)redhat.com <ylavi(a)redhat.com>, adbarbos(a)redhat.com <
adbarbos(a)redhat.com>
Hi Pavan,
I have a question regarding the imagetransfer finalize call.
After the imagetransfer upload, when I call finalize on the transfer, I get
a Finalize Failure error.
I am following these steps to upload a disk via the REST API:
1. Create a disk on a storage domain (POST
https://<ovirt-server>/ovirt-engine/api/disks)
2. Initiate an imagetransfer and get the proxy_url and signed_ticket
(https://<ovirt-server>/ovirt-engine/api/imagetransfers)
3. Upload the data using curl (to the proxy URL)
4. Finalize the transfer.
After that, the disk gets deleted automatically.
Sequence of events in the Events tab:
Is this the expected behaviour when finalizing the imagetransfer fails?
If so, how can I troubleshoot and find the reason for the finalize failure?
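(For comparison, a rough sketch of the finalize step with plain REST calls -
the finalize action endpoint, the phase names and the placeholders below are
my assumptions about the usual flow, so please adjust to your engine version;
the actual failure reason is normally logged in engine.log at the same time.)

    import time

    import requests

    # Placeholders: engine address, credentials and CA path.
    ENGINE = 'https://<ovirt-server>/ovirt-engine/api'

    def finalize_transfer(transfer_id):
        session = requests.Session()
        session.auth = ('admin@internal', 'password')
        session.verify = '/etc/pki/ovirt-engine/ca.pem'

        # Finalize is an action on the image transfer resource.
        r = session.post(
            '%s/imagetransfers/%s/finalize' % (ENGINE, transfer_id),
            data='<action/>',
            headers={'Content-Type': 'application/xml'})
        r.raise_for_status()

        # Poll the transfer phase until it reaches a final state.
        for _ in range(60):
            r = session.get(
                '%s/imagetransfers/%s' % (ENGINE, transfer_id),
                headers={'Accept': 'application/xml'})
            if '<phase>finished_success</phase>' in r.text:
                return
            if '<phase>finished_failure</phase>' in r.text:
                raise RuntimeError('Transfer %s failed' % transfer_id)
            time.sleep(1)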
Thanks,
Ketan Pachpande
[ OST Failure Report ] [ oVirt 4.2 (ovirt-engine) ] [ 07-08-2018 ] [ 004_basic_sanity.update_template_version ]
by Dafna Ron
Hi,
We are failing oVirt 4.2 on project ovirt-engine on test
004_basic_sanity.update_template_version.
I believe the reported patch from CQ may have indeed caused the issue.
Eli, can you please check this issue?
Link and headline of suspected patches:
https://gerrit.ovirt.org/#/c/93501/ - core: make search string fields not null and empty
Link to Job:
https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2800
Link to all logs:
https://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/2800/artifact...
(Relevant) error snippet from the log:
<error>
<JsonRpcRequest id: "0aec9ef2-5e1f-4cb6-bf75-f5826d2ae135", method: Volume.getInfo, params:
{storagepoolID=fe6f6819-4791-4624-aa56-c82e49b0eaf3,
storagedomainID=2a3af2d0-c00e-4ac7-a162-fa08d33c173f,
imageID=2d2f61d4-2347-4200-9c1f-0ee376104ef0,
volumeID=21ea717f-e3a1-4c36-8101-ba746bd78c40}>
2018-08-07 06:12:46,522-04 INFO [org.ovirt.engine.core.bll.AddVmTemplateCommand]
(default task-4) [6db41b0d-0d11-4b75-94f9-4a478e6fb3dc] Running command:
AddVmTemplateCommand internal: false. Entities affected : ID:
fe6f6819-4791-4624-aa56-c82e49b0eaf3 Type: StoragePoolAction group
CREATE_TEMPLATE with role type USER
2018-08-07 06:12:46,525-04 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
(default task-4) [6db41b0d-0d11-4b75-94f9-4a478e6fb3dc] START, SetVmStatusVDSCommand(
SetVmStatusVDSCommandParameters:{vmId='64293490-e128-48b7-9e23-0491b48d9a1f',
status='ImageLocked', exitStatus='Normal'}), log id: eca02a9
2018-08-07 06:12:46,527-04 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand]
(default task-4) [6db41b0d-0d11-4b75-94f9-4a478e6fb3dc] FINISH, SetVmStatusVDSCommand, log
id: eca02a9
2018-08-07 06:12:46,527-04 DEBUG
[org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor]
(default task-4) [6db41b0d-0d11-4b75-94f9-4a478e6fb3dc] method:
runVdsCommand, params: [SetVmStatus,
SetVmStatusVDSCommandParameters:{vmId='64293490-e128-48b7-9e23-0491b48d9a1f',
status='ImageLocked', exitStatus='Normal'}], timeElapsed: 3ms
2018-08-07 06:12:46,537-04 DEBUG
[org.ovirt.engine.core.dal.dbbroker.CustomSQLErrorCodeSQLExceptionTranslator]
(default task-4) [6db41b0d-0d11-4b75-94f9-4a478e6fb3dc] Translating
SQLException with SQL state '23502', error code '0', message [ERROR: null
value in column "description" violates not-null constraint Detail: Failing
row contains (ce532690-2131-49a8-b2a0-183936727092,
CirrOS_0.4.0_for_x86_64_glance_template, 512,
5b1f874d-dc92-43ed-86ef-a9c2d6bfc9a3, 0, null,
fe7292a4-c998-4f6c-897c-fa7525911a16, 2018-08-07 06:12:46.529-04, 1, null,
f, 1, 1, 1, Etc/GMT, t, f, 2018-08-07 06:12:46.530974-04, null, null, f, 1,
0, 0, 1, 0, , 3, null, null, null, 0, , , 256, TEMPLATE, 0, 1,
31a8e1fa-1fad-456c-8b8a-aa11551cae9d, f, null, f, f, 1, f, f, f,
d0d66980-9a26-11e8-b2f3-5452c0a8c802, null, , f, 0, null, null, null,
guest_agent, null, null, null, 2, null, 2, 12345678, f, interleave, t, t,
deb3f53e-13c5-4aea-bf90-0339eba39fed, null, null, null, null,
22716173-2816-a109-1d2f-c44d945e92dd, 3b3f239c-d8bb-423f-a39c-e2b905473b83,
null, 1, LOCK_SCREEN, 2, null, null, 2048, null, AUTO_RESUME, t). Where:
SQL statement "INSERT INTO vm_static( child_count,
creation_date, description, free_text_comment,
mem_size_mb, max_memory_size_mb, num_of_io_threads,
vm_name, num_of_sockets, cpu_per_socket,
threads_per_cpu, os, vm_guid, cluster_id,
num_of_monitors, single_qxl_pci,
allow_console_reconnect, template_status, usb_policy,
time_zone, fail_back, vm_type, nice_level,
cpu_shares, default_boot_sequence, default_display_type,
</error>
[VDSM] Test pass on Travis! (on Fedora 28)
by Nir Soffer
Tests pass on Fedora 28:
https://travis-ci.org/nirs/vdsm/jobs/413702715
Fedora rawhide fails with various issues; I will open another
thread about it.
Still failing on CentOS in the ovs tests:
https://travis-ci.org/nirs/vdsm/jobs/413702714
(see errors below).
So maybe we need to mark the tests as xfail only on CentOS - or
maybe the fact that they work on Fedora means we don't understand
the failure yet?
We don't know what VM is running our tests, but we control
the CentOS/Fedora container. Maybe something is missing in the
CentOS container?
https://github.com/oVirt/vdsm/blob/master/docker/Dockerfile.centos
Nir
---
________ ERROR at setup of TestOvsApiBase.test_execute_a_single_command ________
@pytest.fixture(scope='session', autouse=True)
def ovs_service():
service = OvsService()
> service.setup()
network/integration/ovs/conftest.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f15a29655d0>
def setup(self):
if not self.ovs_init_state_is_up:
cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
> assert self.is_service_running()
E AssertionError
network/ovsnettestlib.py:39: AssertionError
------------------------------ Captured log setup ------------------------------
cmdutils.py 151 DEBUG
/usr/share/openvswitch/scripts/ovs-ctl status (cwd None)
cmdutils.py 159 DEBUG FAILED: <err> = ''; <rc> = 1
cmdutils.py 151 DEBUG
/usr/share/openvswitch/scripts/ovs-ctl --system-id=random start (cwd
None)
cmdutils.py 159 DEBUG FAILED: <err> = 'rmmod: ERROR:
Module bridge is in use by: br_netfilter\n'; <rc> = 1
cmdutils.py 151 DEBUG
/usr/share/openvswitch/scripts/ovs-ctl status (cwd None)
cmdutils.py 159 DEBUG FAILED: <err> = ''; <rc> = 1
_________ ERROR at setup of TestOvsApiBase.test_execute_a_transaction __________
@pytest.fixture(scope='session', autouse=True)
def ovs_service():
service = OvsService()
> service.setup()
network/integration/ovs/conftest.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f15a29655d0>
def setup(self):
if not self.ovs_init_state_is_up:
cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
> assert self.is_service_running()
E AssertionError
network/ovsnettestlib.py:39: AssertionError
____ ERROR at setup of TestOvsApiBase.test_instantiate_vsctl_implementation ____
@pytest.fixture(scope='session', autouse=True)
def ovs_service():
service = OvsService()
> service.setup()
network/integration/ovs/conftest.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f15a29655d0>
def setup(self):
if not self.ovs_init_state_is_up:
cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
> assert self.is_service_running()
E AssertionError
network/ovsnettestlib.py:39: AssertionError
___ ERROR at setup of TestOvsApiWithSingleRealBridge.test_add_slave_to_bond ____
@pytest.fixture(scope='session', autouse=True)
def ovs_service():
service = OvsService()
> service.setup()
network/integration/ovs/conftest.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f15a29655d0>
def setup(self):
if not self.ovs_init_state_is_up:
cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
> assert self.is_service_running()
E AssertionError
network/ovsnettestlib.py:39: AssertionError
___ ERROR at setup of TestOvsApiWithSingleRealBridge.test_create_remove_bond ___
@pytest.fixture(scope='session', autouse=True)
def ovs_service():
service = OvsService()
> service.setup()
network/integration/ovs/conftest.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f15a29655d0>
def setup(self):
if not self.ovs_init_state_is_up:
cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
> assert self.is_service_running()
E AssertionError
network/ovsnettestlib.py:39: AssertionError
ERROR at setup of
TestOvsApiWithSingleRealBridge.test_create_vlan_as_fake_bridge
@pytest.fixture(scope='session', autouse=True)
def ovs_service():
service = OvsService()
> service.setup()
network/integration/ovs/conftest.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f15a29655d0>
def setup(self):
if not self.ovs_init_state_is_up:
cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
> assert self.is_service_running()
E AssertionError
network/ovsnettestlib.py:39: AssertionError
_ ERROR at setup of TestOvsApiWithSingleRealBridge.test_remove_slave_from_bond _
@pytest.fixture(scope='session', autouse=True)
def ovs_service():
service = OvsService()
> service.setup()
network/integration/ovs/conftest.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f15a29655d0>
def setup(self):
if not self.ovs_init_state_is_up:
cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
> assert self.is_service_running()
E AssertionError
network/ovsnettestlib.py:39: AssertionError
___________ ERROR at setup of TestOvsInfo.test_ovs_info_with_sb_bond ___________
@pytest.fixture(scope='session', autouse=True)
def ovs_service():
service = OvsService()
> service.setup()
network/integration/ovs/conftest.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f15a29655d0>
def setup(self):
if not self.ovs_init_state_is_up:
cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
> assert self.is_service_running()
E AssertionError
network/ovsnettestlib.py:39: AssertionError
___________ ERROR at setup of TestOvsInfo.test_ovs_info_with_sb_nic ____________
@pytest.fixture(scope='session', autouse=True)
def ovs_service():
service = OvsService()
> service.setup()
network/integration/ovs/conftest.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f15a29655d0>
def setup(self):
if not self.ovs_init_state_is_up:
cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
> assert self.is_service_running()
E AssertionError
network/ovsnettestlib.py:39: AssertionError
_____________ ERROR at setup of SetupTransactionTests.test_dry_run _____________
@pytest.fixture(scope='session', autouse=True)
def ovs_service():
service = OvsService()
> service.setup()
network/integration/ovs/conftest.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <network.ovsnettestlib.OvsService object at 0x7f15a29655d0>
def setup(self):
if not self.ovs_init_state_is_up:
cmd.exec_sync([OVS_CTL, '--system-id=random', 'start'])
> assert self.is_service_running()
E AssertionError
network/ovsnettestlib.py:39: AssertionError
=============== 40 passed, 32 skipped, 10 error in 4.81 seconds ================
[ OST Failure Report ] [ oVirt master (ovirt-engine) ] [ 6-08-2018 ] [ TEST NAME ]
by Dafna Ron
Hi,
We have a failure in CQ on ovirt-master for test
001_upgrade_engine.test_initialize_engine in the upgrade suite.
Link and headline of suspected patches:
https://gerrit.ovirt.org/#/c/93466/ - core: clean stale image_transfer on upgrade
Link to Job:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8975
Link to all logs:
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/8975/artif...
(Relevant) error snippet from the log:
<error>
[ INFO ] Stage: Misc configuration
[ INFO ] Upgrading CA
[ INFO ] Backing up database localhost:ovirt_engine_history to
'/var/lib/ovirt-engine-dwh/backups/dwh-20180806074534.L73Aa6.dump'.
[ INFO ] Creating/refreshing DWH database schema
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Backing up database localhost:engine to
'/var/lib/ovirt-engine/backups/engine-20180806074537.V9dDgr.dump'.
[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] schema.sh: FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0270_add_foreign_key_to_image_transfers.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
refresh failed
[WARNING] Rollback of DWH database postponed to Stage "Clean up"
[ INFO ] Rolling back database schema
[ INFO ] Clearing Engine database engine
[ INFO ] Restoring Engine database engine
[ INFO ] Restoring file
'/var/lib/ovirt-engine/backups/engine-20180806074537.V9dDgr.dump' to
database localhost:engine.
[ ERROR ] Errors while restoring engine database, please check the log
file for details
[ INFO ] Stage: Clean up
Log file is located at
/var/log/ovirt-engine/setup/ovirt-engine-setup-20180806074515-jb06pl.log
[ INFO ] Generating answer file
'/var/lib/ovirt-engine/setup/answers/20180806074550-setup.conf'
[WARNING] Rollback of DWH database started
This might be a long process, but it should be safe to start
the engine service before it finishes, if needed.
[ INFO ] Clearing DWH database ovirt_engine_history
[ INFO ] Restoring DWH database ovirt_engine_history
[ INFO ] Restoring file
'/var/lib/ovirt-engine-dwh/backups/dwh-20180806074534.L73Aa6.dump' to
database localhost:ovirt_engine_history.
[ ERROR ] Errors while restoring ovirt_engine_history database, please
check the log file for details
[ INFO ] Stage: Pre-termination
</error>
iSCSI and targetcli
by Hetz Ben Hamo
When trying to configure targetcli as a target, the ACL requires the
initiator machines' IQNs in order to allow them access.
Today, you'll need to log in to each node, get the IQN, and add it to the ACL.
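(For reference, a small sketch of collecting the IQNs from the nodes over
ssh today - assuming the standard /etc/iscsi/initiatorname.iscsi location
used by iscsi-initiator-utils; the host names below are placeholders.)

    import subprocess

    NODES = ['node1.example.com', 'node2.example.com']

    def node_iqn(host):
        # The file contains a line like:
        #   InitiatorName=iqn.1994-05.com.redhat:abcdef123
        out = subprocess.check_output(
            ['ssh', 'root@%s' % host, 'cat /etc/iscsi/initiatorname.iscsi'])
        for line in out.decode().splitlines():
            if line.startswith('InitiatorName='):
                return line.split('=', 1)[1].strip()
        raise ValueError('No InitiatorName found on %s' % host)

    for node in NODES:
        print(node, node_iqn(node))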
Is it possible to add the IQNs of the nodes to the HE web UI?