
Seems like [1]; since ovirt-srv19 has a fresh FC24 installation, virtlogd is not enabled by default:

● virtlogd.service - Virtual machine log manager
   Loaded: loaded (/usr/lib/systemd/system/virtlogd.service; indirect; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:virtlogd(8)
           http://libvirt.org

We can add it to puppet for now.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1290357

On Thu, Jul 7, 2016 at 6:49 PM, Eyal Edri <eedri@redhat.com> wrote:
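A minimal sketch of the puppet change mentioned above (the surrounding class/module layout is assumed, not taken from this thread):

```puppet
# Sketch only: ensure virtlogd is enabled and running on the FC24 slaves,
# so libvirt can reach /var/run/libvirt/virtlogd-sock.
service { 'virtlogd':
  ensure => running,
  enable => true,
}
```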
Could this be a bug in libvirt? Tolik mentioned something about a socket name being too long; has anyone seen this before?
15:37:11 libvirt: XML-RPC error : Failed to connect socket to '/var/run/libvirt/virtlogd-sock': No such file or directory
15:37:11   * Starting VM lago_basic_suite_master_storage: ERROR (in 0:00:00)
15:37:11 # Start vms: ERROR (in 0:00:00)
15:37:11 # Destroy network lago_basic_suite_master_lago:
15:37:11 # Destroy network lago_basic_suite_master_lago: ERROR (in 0:00:00)
15:37:11 @ Start Prefix: ERROR (in 0:00:00)
15:37:11 Error occured, aborting
15:37:11 Traceback (most recent call last):
15:37:11   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 691, in main
15:37:11     cli_plugins[args.verb].do_run(args)
15:37:11   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 180, in do_run
15:37:11     self._do_run(**vars(args))
15:37:11   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 488, in wrapper
15:37:11     return func(*args, **kwargs)
15:37:11   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 499, in wrapper
15:37:11     return func(*args, prefix=prefix, **kwargs)
15:37:11   File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 255, in do_start
15:37:11     prefix.start(vm_names=vm_names)
15:37:11   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 958, in start
15:37:11     self.virt_env.start(vm_names=vm_names)
15:37:11   File "/usr/lib/python2.7/site-packages/lago/virt.py", line 182, in start
15:37:11     vm.start()
15:37:11   File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 247, in start
15:37:11     return self.provider.start(*args, **kwargs)
15:37:11   File "/usr/lib/python2.7/site-packages/lago/vm.py", line 93, in start
15:37:11     self.libvirt_con.createXML(self._libvirt_xml())
15:37:11   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
15:37:11     if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
15:37:11 libvirtError: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': No such file or directory
15:37:11 #########################
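The "socket name too long" theory is easy to check: AF_UNIX socket paths are capped at sizeof(sun_path), 108 bytes on Linux. The path in the traceback ('/var/run/libvirt/virtlogd-sock') is well under that limit, and the error is "No such file or directory" (ENOENT), which points at the daemon not running rather than the name length. A small illustrative check, using a deliberately over-long hypothetical path:

```python
import socket

# sun_path in <sys/un.h> is 108 bytes on Linux (including the trailing NUL),
# so paths longer than ~107 characters cannot be bound or connected to.
too_long = "/var/run/libvirt/" + "x" * 200  # hypothetical over-long path

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    sock.bind(too_long)
    print("bound OK")
except OSError as exc:
    # CPython rejects the over-long path with an OSError
    print("bind failed:", exc)
finally:
    sock.close()
```

By contrast, a short path like virtlogd's only fails with ENOENT when nothing is listening there, which matches the disabled-virtlogd diagnosis.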
On Thu, Jul 7, 2016 at 6:37 PM, <jenkins@jenkins.phx.ovirt.org> wrote:
See <http://jenkins.ovirt.org/job/ovirt_master_system-tests/243/changes>
Changes:
[Eyal Edri] add hystrix deps to yum repos include list
[Eyal Edri] refresh fedora versions and release versions for ovirt-engine
[Sandro Bonazzola] ovirt-engine_upgrade-db: drop 3.6.7 jobs
[Shirly Radco] Replacing jpackage repo for 3.6 dwh
------------------------------------------
[...truncated 485 lines...]
## rc = 1
##########################################################
##! ERROR vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
##! Last 20 log enties: logs/mocker-fedora-23-x86_64.fc23.basic_suite_master.sh/basic_suite_master.sh.log
##!
  File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 255, in do_start
    prefix.start(vm_names=vm_names)
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 958, in start
    self.virt_env.start(vm_names=vm_names)
  File "/usr/lib/python2.7/site-packages/lago/virt.py", line 182, in start
    vm.start()
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line 247, in start
    return self.provider.start(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/vm.py", line 93, in start
    self.libvirt_con.createXML(self._libvirt_xml())
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3611, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: Failed to connect socket to '/var/run/libvirt/virtlogd-sock': No such file or directory
#########################
======== Cleaning up
----------- Cleaning with lago
----------- Cleaning with lago done
======== Cleanup done
Took 197 seconds
===================================
##!
##! ERROR ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
##!########################################################
##########################################################
Build step 'Execute shell' marked build as failure
Performing Post build task...
Match found for :.* : True
Logical operation result is TRUE
Running script  : #!/bin/bash -xe
echo 'shell_scripts/system_tests.collect_logs.sh'
#
# Required jjb vars:
#    version
#
VERSION=master
SUITE_TYPE=

WORKSPACE="$PWD"
OVIRT_SUITE="$SUITE_TYPE_suite_$VERSION"
TESTS_LOGS="$WORKSPACE/ovirt-system-tests/exported-artifacts"

rm -rf "$WORKSPACE/exported-artifacts"
mkdir -p "$WORKSPACE/exported-artifacts"

if [[ -d "$TESTS_LOGS" ]]; then
    mv "$TESTS_LOGS/"* "$WORKSPACE/exported-artifacts/"
fi
[ovirt_master_system-tests] $ /bin/bash -xe /tmp/hudson703448189995999079.sh
+ echo shell_scripts/system_tests.collect_logs.sh
shell_scripts/system_tests.collect_logs.sh
+ VERSION=master
+ SUITE_TYPE=
+ WORKSPACE=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/>
+ OVIRT_SUITE=master
+ TESTS_LOGS=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-tests/exported-artifacts>
+ rm -rf <http://jenkins.ovirt.org/job/ovirt_master_system-tests/243/artifact/exported-artifacts>
+ mkdir -p <http://jenkins.ovirt.org/job/ovirt_master_system-tests/243/artifact/exported-artifacts>
+ [[ -d <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-tests/exported-artifacts> ]]
+ mv <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-tests/exported-artifacts/lago_logs> <http://jenkins.ovirt.org/job/ovirt_master_system-tests/243/artifact/exported-artifacts/>
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
Match found for :.* : True
Logical operation result is TRUE
Running script  : #!/bin/bash -xe
echo "shell-scripts/mock_cleanup.sh"
shopt -s nullglob
WORKSPACE="$PWD"
# Make clear this is the cleanup, helps reading the jenkins logs
cat <<EOC
_______________________________________________________________________
#######################################################################
#                                                                     #
#                               CLEANUP                               #
#                                                                     #
#######################################################################
EOC

# Archive the logs, we want them anyway
logs=( ./*log ./*/logs )
if [[ "$logs" ]]; then
    tar cvzf exported-artifacts/logs.tgz "${logs[@]}"
    rm -rf "${logs[@]}"
fi

# stop any processes running inside the chroot
failed=false
mock_confs=("$WORKSPACE"/*/mocker*)
# Clean current jobs mockroot if any
for mock_conf_file in "${mock_confs[@]}"; do
    [[ "$mock_conf_file" ]] || continue
    echo "Cleaning up mock $mock_conf"
    mock_root="${mock_conf_file##*/}"
    mock_root="${mock_root%.*}"
    my_mock="/usr/bin/mock"
    my_mock+=" --configdir=${mock_conf_file%/*}"
    my_mock+=" --root=${mock_root}"
    my_mock+=" --resultdir=$WORKSPACE"

    #TODO: investigate why mock --clean fails to umount certain dirs sometimes,
    #so we can use it instead of manually doing all this.
    echo "Killing all mock orphan processes, if any."
    $my_mock \
        --orphanskill \
    || {
        echo "ERROR: Failed to kill orphans on $chroot."
        failed=true
    }

    mock_root="$(\
        grep \
            -Po "(?<=config_opts\['root'\] = ')[^']*" \
            "$mock_conf_file" \
    )" || :
    [[ "$mock_root" ]] || continue
    mounts=($(mount | awk '{print $3}' | grep "$mock_root")) || :
    if [[ "$mounts" ]]; then
        echo "Found mounted dirs inside the chroot $chroot. Trying to umount."
    fi
    for mount in "${mounts[@]}"; do
        sudo umount --lazy "$mount" \
        || {
            echo "ERROR: Failed to umount $mount."
            failed=true
        }
    done
done

# Clean any leftover chroot from other jobs
for mock_root in /var/lib/mock/*; do
    this_chroot_failed=false
    mounts=($(mount | awk '{print $3}' | grep "$mock_root")) || :
    if [[ "$mounts" ]]; then
        echo "Found mounted dirs inside the chroot $mock_root." \
             "Trying to umount."
    fi
    for mount in "${mounts[@]}"; do
        sudo umount --lazy "$mount" \
        || {
            echo "ERROR: Failed to umount $mount."
            failed=true
            this_chroot_failed=true
        }
    done
    if ! $this_chroot_failed; then
        sudo rm -rf "$mock_root"
    fi
done

if $failed; then
    echo "Aborting."
    exit 1
fi

# remove mock system cache, we will setup proxies to do the caching and this
# takes lots of space between runs
shopt -u nullglob
sudo rm -Rf /var/cache/mock/*

# restore the permissions in the working dir, as sometimes it leaves files
# owned by root and then the 'cleanup workspace' from jenkins job fails to
# clean and breaks the jobs
sudo chown -R "$USER" "$WORKSPACE"
[ovirt_master_system-tests] $ /bin/bash -xe /tmp/hudson505872539550784673.sh
+ echo shell-scripts/mock_cleanup.sh
shell-scripts/mock_cleanup.sh
+ shopt -s nullglob
+ WORKSPACE=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/>
+ cat
_______________________________________________________________________
#######################################################################
#                                                                     #
#                               CLEANUP                               #
#                                                                     #
#######################################################################
+ logs=(./*log ./*/logs)
+ [[ -n ./ovirt-system-tests/logs ]]
+ tar cvzf exported-artifacts/logs.tgz ./ovirt-system-tests/logs
./ovirt-system-tests/logs/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/stdout_stderr.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/state.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/build.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/root.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/stdout_stderr.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/state.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/build.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/root.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.clean_rpmdb/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.clean_rpmdb/stdout_stderr.log
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.basic_suite_master.sh/
./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.basic_suite_master.sh/basic_suite_master.sh.log
+ rm -rf ./ovirt-system-tests/logs
+ failed=false
+ mock_confs=("$WORKSPACE"/*/mocker*)
+ for mock_conf_file in '"${mock_confs[@]}"'
+ [[ -n <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-tests/mocker-fedora-23-x86_64.fc23.cfg> ]]
+ echo 'Cleaning up mock '
Cleaning up mock
+ mock_root=mocker-fedora-23-x86_64.fc23.cfg
+ mock_root=mocker-fedora-23-x86_64.fc23
+ my_mock=/usr/bin/mock
+ my_mock+=' --configdir=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-tests'>
+ my_mock+=' --root=mocker-fedora-23-x86_64.fc23'
+ my_mock+=' --resultdir=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/'>
+ echo 'Killing all mock orphan processes, if any.'
Killing all mock orphan processes, if any.
+ /usr/bin/mock --configdir=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-tests> --root=mocker-fedora-23-x86_64.fc23 --resultdir=<http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/> --orphanskill
WARNING: Could not find required logging config file: <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-tests/logging.ini.> Using default...
INFO: mock.py version 1.2.18 starting (python version = 3.5.1)...
Start: init plugins
INFO: selinux enabled
Finish: init plugins
Start: run
Finish: run
++ grep -Po '(?<=config_opts\['\''root'\''\] = '\'')[^'\'']*' <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/ovirt-system-tests/mocker-fedora-23-x86_64.fc23.cfg>
+ mock_root=fedora-23-x86_64-4ddbe48c8f8b7d8c2c3635b52313f04a
+ [[ -n fedora-23-x86_64-4ddbe48c8f8b7d8c2c3635b52313f04a ]]
+ mounts=($(mount | awk '{print $3}' | grep "$mock_root"))
++ mount
++ awk '{print $3}'
++ grep fedora-23-x86_64-4ddbe48c8f8b7d8c2c3635b52313f04a
+ :
+ [[ -n '' ]]
+ false
+ shopt -u nullglob
+ sudo rm -Rf /var/cache/mock/fedora-23-x86_64-4ddbe48c8f8b7d8c2c3635b52313f04a
+ sudo chown -R jenkins <http://jenkins.ovirt.org/job/ovirt_master_system-tests/ws/>
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 1
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
Archiving artifacts
_______________________________________________
Infra mailing list
Infra@ovirt.org
http://lists.ovirt.org/mailman/listinfo/infra
--
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel
phone: +972-9-7692018 irc: eedri (on #tlv #rhev-dev #rhev-integ)