[oVirt Jenkins] ovirt-system-tests_basic-suite-master_nightly -
Build # 405 - Failure!
by jenkins@jenkins.phx.ovirt.org
Project: https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/
Build: https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_night...
Build Number: 405
Build Status: Failure
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #405
[Marcin Sobczyk] basic: Remove remaining lago deps from '002_bootstrap_pytest'
-----------------
Failed Tests:
-----------------
1 test failed.
FAILED: basic-suite-master.test-scenarios.004_basic_sanity_pytest.test_live_storage_migration
Error Message:
AssertionError: False != True after 600 seconds
Stack Trace:
api_v4 = <ovirtsdk4.Connection object at 0x7f45b89370d0>
@order_by(_TEST_LIST)
def test_live_storage_migration(api_v4):
engine = api_v4.system_service()
disk_service = test_utils.get_disk_service(engine, DISK0_NAME)
correlation_id = 'live_storage_migration'
disk_service.move(
async=False,
filter=False,
storage_domain=types.StorageDomain(
name=SD_ISCSI_NAME
),
query={'correlation_id': correlation_id}
)
testlib.assert_true_within_long(lambda: test_utils.all_jobs_finished(engine, correlation_id))
# Assert that the disk is on the correct storage domain,
# its status is OK and the snapshot created for the migration
# has been merged
testlib.assert_true_within_long(
lambda: api_v4.follow_link(disk_service.get().storage_domains[0]).name == SD_ISCSI_NAME
)
vm0_snapshots_service = test_utils.get_vm_snapshots_service(engine, VM0_NAME)
testlib.assert_true_within_long(
> lambda: len(vm0_snapshots_service.list()) == 1
)
../basic-suite-master/test-scenarios/004_basic_sanity_pytest.py:585:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/lib/python2.7/site-packages/ovirtlago/testlib.py:286: in assert_true_within_long
assert_equals_within_long(func, True, allowed_exceptions)
/usr/lib/python2.7/site-packages/ovirtlago/testlib.py:273: in assert_equals_within_long
func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
func = <function <lambda> at 0x7f45b86ed050>, value = True, timeout = 600
allowed_exceptions = [], initial_wait = 0
def assert_equals_within(
func, value, timeout, allowed_exceptions=None, initial_wait=10
):
allowed_exceptions = allowed_exceptions or []
with utils.EggTimer(timeout) as timer:
while not timer.elapsed():
try:
res = func()
if res == value:
return
except Exception as exc:
if _instance_of_any(exc, allowed_exceptions):
time.sleep(3)
continue
LOGGER.exception("Unhandled exception in %s", func)
raise
if initial_wait == 0:
time.sleep(3)
else:
time.sleep(initial_wait)
initial_wait = 0
try:
raise AssertionError(
> '%s != %s after %s seconds' % (res, value, timeout)
)
E AssertionError: False != True after 600 seconds
/usr/lib/python2.7/site-packages/ovirtlago/testlib.py:252: AssertionError
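The failure above comes out of ovirtlago's polling helper, which retries a predicate until a timeout elapses and only then raises. A minimal self-contained sketch of the same pattern (the 600-second `LONG_TIMEOUT` and the 3-second poll interval are taken from the trace; the exact retry/exception handling in `ovirtlago.testlib` differs slightly):

```python
import time


def assert_equals_within(func, value, timeout, poll_interval=3):
    """Poll func() until it returns value, or raise once timeout elapses.

    Sketch of the ovirtlago.testlib helper seen in the trace above.
    """
    deadline = time.monotonic() + timeout
    res = None
    while time.monotonic() < deadline:
        res = func()
        if res == value:
            return
        time.sleep(poll_interval)
    # Produces the same message shape as the failure above:
    # "False != True after 600 seconds"
    raise AssertionError('%s != %s after %s seconds' % (res, value, timeout))


def assert_true_within_long(func, timeout=600):
    assert_equals_within(func, True, timeout)
```

In the failing test, the lambda `len(vm0_snapshots_service.list()) == 1` kept returning False for the full 600 seconds, meaning the live-migration snapshot was never merged away.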
[oVirt Jenkins] ovirt-system-tests_performance-suite-master - Build
# 1584 - Failure!
by jenkins@jenkins.phx.ovirt.org
Project: https://jenkins.ovirt.org/job/ovirt-system-tests_performance-suite-master/
Build: https://jenkins.ovirt.org/job/ovirt-system-tests_performance-suite-master...
Build Number: 1584
Build Status: Failure
Triggered By: Started by timer
-------------------------------------
Changes Since Last Success:
-------------------------------------
Changes for Build #1584
[Sandro Bonazzola] yum-repos: cbs.centos.org repos doesn't exist anymore
[Gal Ben Haim] dockerfile-utils: Specify authfile
[Gal Ben Haim] dockerfile-utils: Skopeo change args order
-----------------
Failed Tests:
-----------------
1 test failed.
FAILED: 040_add_hosts_vms.configure_storage
Error Message:
setup_storage.sh failed. Exit code is 1
-------------------- >> begin captured logging << --------------------
lago.ssh: DEBUG: start task:2dcd97e8-880d-4e8e-8983-cfdcd528c9be:Get ssh client for lago-performance-suite-master-engine:
lago.ssh: DEBUG: end task:2dcd97e8-880d-4e8e-8983-cfdcd528c9be:Get ssh client for lago-performance-suite-master-engine:
lago.ssh: DEBUG: Running 2fa1b3a8 on lago-performance-suite-master-engine: /tmp/setup_storage.sh
lago.ssh: DEBUG: Command 2fa1b3a8 on lago-performance-suite-master-engine returned with 1
lago.ssh: DEBUG: Command 2fa1b3a8 on lago-performance-suite-master-engine output:
success
success
success
success
success
Creating filesystem with 26476544 4k blocks and 6619136 inodes
Filesystem UUID: 78b0cc0a-40b4-448e-9bbd-976f3545850c
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872
Allocating group tables: done
Writing inode tables: done
Creating journal (131072 blocks): done
Writing superblocks and filesystem accounting information: done
Relabeled /exports/nfs from system_u:object_r:unlabeled_t:s0 to system_u:object_r:nfs_t:s0
Relabeled /exports/nfs/lost+found from system_u:object_r:unlabeled_t:s0 to system_u:object_r:nfs_t:s0
Relabeled /exports/nfs/share1 from unconfined_u:object_r:unlabeled_t:s0 to unconfined_u:object_r:nfs_t:s0
Relabeled /exports/nfs/share2 from unconfined_u:object_r:unlabeled_t:s0 to unconfined_u:object_r:nfs_t:s0
Relabeled /exports/nfs/iso from unconfined_u:object_r:unlabeled_t:s0 to unconfined_u:object_r:nfs_t:s0
Relabeled /exports/nfs/exported from unconfined_u:object_r:unlabeled_t:s0 to unconfined_u:object_r:nfs_t:s0
Physical volume "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3" successfully created.
Volume group "vg1_storage" successfully created
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
Created target iqn.2014-07.org.ovirt:storage.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
Deleted network portal 0.0.0.0:3260
Using default IP port 3260
Created network portal 192.168.200.4:3260.
Logical volume "lun0_bdev" created.
Created block storage object lun0_bdev using /dev/vg1_storage/lun0_bdev.
Parameter emulate_tpu is now '1'.
Created LUN 0.
Logical volume "lun1_bdev" created.
Created block storage object lun1_bdev using /dev/vg1_storage/lun1_bdev.
Parameter emulate_tpu is now '1'.
Created LUN 1.
Logical volume "lun2_bdev" created.
Created block storage object lun2_bdev using /dev/vg1_storage/lun2_bdev.
Parameter emulate_tpu is now '1'.
Created LUN 2.
Logical volume "lun3_bdev" created.
Created block storage object lun3_bdev using /dev/vg1_storage/lun3_bdev.
Parameter emulate_tpu is now '1'.
Created LUN 3.
Logical volume "lun4_bdev" created.
Created block storage object lun4_bdev using /dev/vg1_storage/lun4_bdev.
Parameter emulate_tpu is now '1'.
Created LUN 4.
Parameter userid is now 'username'.
Parameter password is now 'password'.
Parameter demo_mode_write_protect is now '0'.
Parameter generate_node_acls is now '1'.
Parameter cache_dynamic_acls is now '1'.
Parameter default_cmdsn_depth is now '64'.
Configuration saved to /etc/target/saveconfig.json
192.168.200.4:3260,1 iqn.2014-07.org.ovirt:storage
Logging in to [iface: default, target: iqn.2014-07.org.ovirt:storage, portal: 192.168.200.4,3260]
Login to [iface: default, target: iqn.2014-07.org.ovirt:storage, portal: 192.168.200.4,3260] successful.
Scanning SCSI subsystem for new devices
Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning for device 0 0 0 0 ...
Scanning for device 0 0 0 0 ...
OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: QEMU Model: QEMU HARDDISK Rev: 2.5+
Type: Direct-Access ANSI SCSI revision: 05
Scanning for device 0 0 0 2 ...
OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 02
Vendor: QEMU Model: QEMU HARDDISK Rev: 2.5+
Type: Direct-Access ANSI SCSI revision: 05
Scanning host 1 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning host 2 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning host 3 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning for device 3 0 0 0 ...
Scanning for device 3 0 0 0 ...
OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: LIO-ORG Model: lun0_bdev Rev: 4.0
Type: Direct-Access ANSI SCSI revision: 05
Scanning for device 3 0 0 1 ...
OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 01
Vendor: LIO-ORG Model: lun1_bdev Rev: 4.0
Type: Direct-Access ANSI SCSI revision: 05
Scanning for device 3 0 0 2 ...
OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 02
Vendor: LIO-ORG Model: lun2_bdev Rev: 4.0
Type: Direct-Access ANSI SCSI revision: 05
Scanning for device 3 0 0 3 ...
OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 03
Vendor: LIO-ORG Model: lun3_bdev Rev: 4.0
Type: Direct-Access ANSI SCSI revision: 05
Scanning for device 3 0 0 4 ...
OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 04
Vendor: LIO-ORG Model: lun4_bdev Rev: 4.0
Type: Direct-Access ANSI SCSI revision: 05
0 new or changed device(s) found.
0 remapped or resized device(s) found.
0 device(s) removed.
Logging out of session [sid: 1, target: iqn.2014-07.org.ovirt:storage, portal: 192.168.200.4,3260]
Logout of [sid: 1, target: iqn.2014-07.org.ovirt:storage, portal: 192.168.200.4,3260] successful.
oVirt Master Latest Tested 3.6 MB/s | 1.1 MB 00:00
base 35 MB/s | 2.2 MB 00:00
appstream 22 MB/s | 5.7 MB 00:00
powertools 17 MB/s | 1.9 MB 00:00
opstools 5.8 MB/s | 123 kB 00:00
Extra Packages for Enterprise Linux 8 - x86_64 59 MB/s | 7.0 MB 00:00
GlusterFS 7 x86_64 302 kB/s | 22 kB 00:00
virtio-win builds roughly matching what will be 131 kB/s | 7.8 kB 00:00
Copr repo for EL8_collection owned by sbonazzo 23 MB/s | 447 kB 00:00
Copr repo for gluster-ansible owned by sac 579 kB/s | 6.1 kB 00:00
Copr repo for ovsdbapp owned by mdbarroso 200 kB/s | 1.8 kB 00:00
Copr repo for nmstate-0.2 owned by nmstate 343 kB/s | 2.9 kB 00:00
Copr repo for NetworkManager-1.22 owned by netw 2.2 MB/s | 18 kB 00:00
CentOS-8 Advanced virtualization 6.0 MB/s | 72 kB 00:00
CentOS-8 - oVirt 4.4 3.5 MB/s | 509 kB 00:00
Dependencies resolved.
Nothing to do.
Complete!
Last metadata expiration check: 0:00:04 ago on Thu 17 Sep 2020 11:01:41 PM EDT.
No match for argument: 389-ds-base-legacy-tools
lago.ssh: DEBUG: Command 2fa1b3a8 on lago-performance-suite-master-engine errors:
+ set -xe
+ MAIN_NFS_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
+ ISCSI_DEV=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
+ NUM_LUNS=5
++ awk -F. '{print $(NF-1)}'
++ uname -r
+ DIST=el8_2
+ main
++ hostname
+ [[ lago-performance-suite-master-engine == *\i\p\v\6* ]]
+ install_deps
+ systemctl disable --now kdump.service
Removed /etc/systemd/system/multi-user.target.wants/kdump.service.
+ pkgs_to_install=("nfs-utils" "rpcbind" "lvm2" "targetcli" "sg3_utils" "iscsi-initiator-utils" "policycoreutils-python-utils")
+ rpm -q nfs-utils rpcbind lvm2 targetcli sg3_utils iscsi-initiator-utils policycoreutils-python-utils
+ setup_services
+ install_firewalld
+ [[ el8_2 == \e\l\7 ]]
+ configure_firewalld
+ rpm -q firewalld
+ systemctl status firewalld
+ firewall-cmd --permanent --add-service=iscsi-target
+ firewall-cmd --permanent --add-service=ldap
+ firewall-cmd --permanent --add-service=nfs
+ firewall-cmd --permanent --add-service=ntp
Warning: ALREADY_ENABLED: ntp
+ firewall-cmd --reload
+ sed -i '/\[mountd\]/aport=892' /etc/nfs.conf
+ sed -i '/\[statd\]/aport=662' /etc/nfs.conf
+ sed -i '/\[lockd\]/aport=32803\nudp-port=32769' /etc/nfs.conf
+ systemctl restart rpcbind.service
+ systemctl enable --now rpcbind.service
+ systemctl enable --now nfs-server.service
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.
+ systemctl start nfs-idmapd.service
+ setup_main_nfs
+ setup_device disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2 /exports/nfs
+ local device=disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
+ local mountpath=/exports/nfs
+ mkdir -p /exports/nfs
+ mkfs.ext4 -E nodiscard -F /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2
mke2fs 1.45.4 (23-Sep-2019)
+ echo -e '/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2\t/exports/nfs\text4\tdefaults,discard\t0 0'
+ mount /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2 /exports/nfs
+ setup_nfs /exports/nfs/share1
+ local exportpath=/exports/nfs/share1
+ mkdir -p /exports/nfs/share1
+ chmod a+rwx /exports/nfs/share1
+ echo '/exports/nfs/share1 *(rw,sync,anonuid=36,anongid=36,all_squash)'
+ setup_export
+ setup_nfs /exports/nfs/exported
+ local exportpath=/exports/nfs/exported
+ mkdir -p /exports/nfs/exported
+ chmod a+rwx /exports/nfs/exported
+ echo '/exports/nfs/exported *(rw,sync,anonuid=36,anongid=36,all_squash)'
+ setup_iso
+ setup_nfs /exports/nfs/iso
+ local exportpath=/exports/nfs/iso
+ mkdir -p /exports/nfs/iso
+ chmod a+rwx /exports/nfs/iso
+ echo '/exports/nfs/iso *(rw,sync,anonuid=36,anongid=36,all_squash)'
+ setup_second_nfs
+ setup_nfs /exports/nfs/share2
+ local exportpath=/exports/nfs/share2
+ mkdir -p /exports/nfs/share2
+ chmod a+rwx /exports/nfs/share2
+ echo '/exports/nfs/share2 *(rw,sync,anonuid=36,anongid=36,all_squash)'
+ set_selinux_on_nfs
+ semanage fcontext -a -t nfs_t '/exports/nfs(/.*)?'
+ restorecon -Rv /exports/nfs
+ activate_nfs
+ exportfs -a
+ setup_lvm_filter
+ cat
+ setup_iscsi
++ hostname
+ [[ lago-performance-suite-master-engine == *\-\s\t\o\r\a\g\e ]]
++ grep -oP '(?<=inet ).*(?=/)'
++ ip -4 addr show eth1
+ IP=192.168.200.4
+ pvcreate --zero n /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
+ vgcreate --zero n vg1_storage /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_3
+ targetcli /iscsi create iqn.2014-07.org.ovirt:storage
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/portals delete 0.0.0.0 3260
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/portals create ip_address=192.168.200.4 ip_port=3260
++ seq 5
+ for I in $(seq $NUM_LUNS)
+ create_lun 0
+ local ID=0
+ lvcreate --zero n -L20G -n lun0_bdev vg1_storage
WARNING: Logical volume vg1_storage/lun0_bdev not zeroed.
+ targetcli /backstores/block create name=lun0_bdev dev=/dev/vg1_storage/lun0_bdev
+ targetcli /backstores/block/lun0_bdev set attribute emulate_tpu=1
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/luns/ create /backstores/block/lun0_bdev
+ for I in $(seq $NUM_LUNS)
+ create_lun 1
+ local ID=1
+ lvcreate --zero n -L20G -n lun1_bdev vg1_storage
WARNING: Logical volume vg1_storage/lun1_bdev not zeroed.
+ targetcli /backstores/block create name=lun1_bdev dev=/dev/vg1_storage/lun1_bdev
+ targetcli /backstores/block/lun1_bdev set attribute emulate_tpu=1
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/luns/ create /backstores/block/lun1_bdev
+ for I in $(seq $NUM_LUNS)
+ create_lun 2
+ local ID=2
+ lvcreate --zero n -L20G -n lun2_bdev vg1_storage
WARNING: Logical volume vg1_storage/lun2_bdev not zeroed.
+ targetcli /backstores/block create name=lun2_bdev dev=/dev/vg1_storage/lun2_bdev
+ targetcli /backstores/block/lun2_bdev set attribute emulate_tpu=1
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/luns/ create /backstores/block/lun2_bdev
+ for I in $(seq $NUM_LUNS)
+ create_lun 3
+ local ID=3
+ lvcreate --zero n -L20G -n lun3_bdev vg1_storage
WARNING: Logical volume vg1_storage/lun3_bdev not zeroed.
+ targetcli /backstores/block create name=lun3_bdev dev=/dev/vg1_storage/lun3_bdev
+ targetcli /backstores/block/lun3_bdev set attribute emulate_tpu=1
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/luns/ create /backstores/block/lun3_bdev
+ for I in $(seq $NUM_LUNS)
+ create_lun 4
+ local ID=4
+ lvcreate --zero n -L20G -n lun4_bdev vg1_storage
WARNING: Logical volume vg1_storage/lun4_bdev not zeroed.
+ targetcli /backstores/block create name=lun4_bdev dev=/dev/vg1_storage/lun4_bdev
+ targetcli /backstores/block/lun4_bdev set attribute emulate_tpu=1
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1/luns/ create /backstores/block/lun4_bdev
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1 set auth userid=username password=password
+ targetcli /iscsi/iqn.2014-07.org.ovirt:storage/tpg1 set attribute demo_mode_write_protect=0 generate_node_acls=1 cache_dynamic_acls=1 default_cmdsn_depth=64
+ targetcli saveconfig
+ systemctl enable --now target
Created symlink /etc/systemd/system/multi-user.target.wants/target.service → /usr/lib/systemd/system/target.service.
+ sed -i 's/#node.session.auth.authmethod = CHAP/node.session.auth.authmethod = CHAP/g' /etc/iscsi/iscsid.conf
+ sed -i 's/#node.session.auth.username = username/node.session.auth.username = username/g' /etc/iscsi/iscsid.conf
+ sed -i 's/#node.session.auth.password = password/node.session.auth.password = password/g' /etc/iscsi/iscsid.conf
+ sed -i 's/node.conn\[0\].timeo.noop_out_timeout = 5/node.conn\[0\].timeo.noop_out_timeout = 30/g' /etc/iscsi/iscsid.conf
+ iscsiadm -m discovery -t sendtargets -p 192.168.200.4
+ iscsiadm -m node -L all
+ rescan-scsi-bus.sh
which: no multipath in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin)
+ ls /dev/disk/by-id/scsi-360014051b18182623d649bf8a1677508 /dev/disk/by-id/scsi-3600140560dddf47a1c84d65aa1bf2da7 /dev/disk/by-id/scsi-360014057c57a5453bee43088583d8085 /dev/disk/by-id/scsi-36001405c13ab636c31448d393d93d81f /dev/disk/by-id/scsi-36001405f7a38ceb1fce49aeadd415586
+ cut -d - -f 3
+ sort
+ iscsiadm -m node -U all
+ iscsiadm -m node -o delete
+ systemctl disable --now iscsi.service
Removed /etc/systemd/system/remote-fs.target.wants/iscsi.service.
+ install_deps_389ds
+ rpm -q 389-ds-base 389-ds-base-legacy-tools
+ dnf module -y enable 389-ds
+ yum install --nogpgcheck -y 389-ds-base 389-ds-base-legacy-tools
Error: Unable to find a match: 389-ds-base-legacy-tools
--------------------- >> end captured logging << ---------------------
Stack Trace:
File "/usr/lib64/python2.7/unittest/case.py", line 369, in run
testMethod()
File "/usr/lib/python2.7/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 142, in wrapped_test
test()
File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
return func(get_test_prefix(), *args, **kwargs)
File "/home/jenkins/agent/workspace/ovirt-system-tests_performance-suite-master/ovirt-system-tests/performance-suite-master/test-scenarios/040_add_hosts_vms.py", line 745, in configure_storage
result.code, 0, 'setup_storage.sh failed. Exit code is %s' % result.code
File "/usr/lib/python2.7/site-packages/nose/tools/trivial.py", line 29, in eq_
raise AssertionError(msg or "%r != %r" % (a, b))
AssertionError: setup_storage.sh failed. Exit code is 1
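The root cause sits at the end of the captured log: `yum install` aborts with "Unable to find a match: 389-ds-base-legacy-tools", and because the script runs under `set -xe`, that nonzero exit code propagates straight back to `configure_storage`, which asserts on it. A minimal sketch of that exit-code assertion pattern, using a local subprocess as a stand-in for the remote lago SSH invocation in the trace:

```python
import subprocess


def run_script(command):
    """Run a shell command and return its exit code (stand-in for
    executing /tmp/setup_storage.sh over SSH as in the log above)."""
    return subprocess.run(['sh', '-c', command]).returncode


def assert_script_ok(command):
    code = run_script(command)
    # Mirrors the nose-style assertion in configure_storage:
    # result.code, 0, 'setup_storage.sh failed. Exit code is %s' % result.code
    assert code == 0, 'setup_storage.sh failed. Exit code is %s' % code
```

Under `set -e`, the first failing command ends the script, so the exit code seen here is that of the `yum install` line, not of any earlier step.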