Can't run basic-suite-master in Lago
by Milan Zamazal
Hi, I tried to run basic-suite-master from the current master branch of
ovirt-system-tests. I ran it on CentOS 7 with the following Lago
packages installed:
lago-0.28.0-1.el7.centos.noarch
python-lago-ovirt-0.28.0-1.el7.centos.noarch
python-lago-0.28.0-1.el7.centos.noarch
lago-ovirt-0.28.0-1.el7.centos.noarch
It doesn't work for me; I receive the following error:
$ ./run_suite.sh basic-suite-master
+ CLI=lago
+ DO_CLEANUP=false
+ RECOMMENDED_RAM_IN_MB=8196
+ EXTRA_SOURCES=()
++ getopt -o ho:e:n:b:cs:r: --long help,output:,engine:,node:,boot-iso:,cleanup --long extra-rpm-source,reposync-config: -n run_suite.sh -- basic-suite-master
+ options=' -- '\''basic-suite-master'\'''
+ [[ 0 != \0 ]]
+ eval set -- ' -- '\''basic-suite-master'\'''
++ set -- -- basic-suite-master
+ true
+ case $1 in
+ shift
+ break
+ [[ -z basic-suite-master ]]
++ realpath basic-suite-master
+ export SUITE=/var/lib/lago/ovirt-system-tests/basic-suite-master
+ SUITE=/var/lib/lago/ovirt-system-tests/basic-suite-master
+ '[' -z '' ']'
+ export PREFIX=/home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ PREFIX=/home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ false
+ [[ -d /var/lib/lago/ovirt-system-tests/basic-suite-master ]]
+ echo '################# lago version'
################# lago version
+ lago --version
lago 0.28.0
+ echo '#################'
#################
+ check_ram 8196
+ local recommended=8196
++ free -m
++ grep Mem
++ awk '{print $2}'
+ local cur_ram=7822
+ [[ 7822 -lt 8196 ]]
+ echo 'It'\''s recommended to have at least 8196MB of RAM' 'installed on the system to run the system tests, if you find' 'issues while running them, consider upgrading your system.' '(only detected 7822MB installed)'
It's recommended to have at least 8196MB of RAM installed on the system to run the system tests, if you find issues while running them, consider upgrading your system. (only detected 7822MB installed)
+ echo 'Running suite found in /var/lib/lago/ovirt-system-tests/basic-suite-master'
Running suite found in /var/lib/lago/ovirt-system-tests/basic-suite-master
+ echo 'Environment will be deployed at /home/pdm/ovirt-system-tests/deployment-basic-suite-master'
Environment will be deployed at /home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ rm -rf /home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ source /var/lib/lago/ovirt-system-tests/basic-suite-master/control.sh
+ prep_suite '' '' ''
+ local suite_name=basic-suite-master
+ suite_name=basic-suite-master
+ sed -r -e s,__ENGINE__,lago-basic-suite-master-engine,g -e 's,__HOST([0-9]+)__,lago-basic-suite-master-host\1,g' -e s,__LAGO_NET__,lago-basic-suite-master-lago,g -e s,__STORAGE__,lago-basic-suite-master-storage,g
+ run_suite
+ env_init '' /var/lib/lago/ovirt-system-tests/basic-suite-master/LagoInitFile
+ echo '#########################'
#########################
+ local template_repo=/var/lib/lago/ovirt-system-tests/basic-suite-master/template-repo.json
+ local initfile=/var/lib/lago/ovirt-system-tests/basic-suite-master/LagoInitFile
+ lago init /home/pdm/ovirt-system-tests/deployment-basic-suite-master /var/lib/lago/ovirt-system-tests/basic-suite-master/LagoInitFile --template-repo-path /var/lib/lago/ovirt-system-tests/basic-suite-master/template-repo.json
@ Initialize and populate prefix:
# Initialize prefix:
* Create prefix dirs:
* Create prefix dirs: Success (in 0:00:00)
* Generate prefix uuid:
* Generate prefix uuid: Success (in 0:00:00)
* Create ssh keys:
* Create ssh keys: Success (in 0:00:00)
* Tag prefix as initialized:
* Tag prefix as initialized: Success (in 0:00:00)
# Initialize prefix: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host0:
* Create disk root:
* Create disk root: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host0: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-engine:
* Create disk root:
* Create disk root: Success (in 0:00:00)
* Create disk nfs:
* Create disk nfs: Success (in 0:00:00)
* Create disk export:
* Create disk export: Success (in 0:00:00)
* Create disk iscsi:
* Create disk iscsi: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-engine: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host1:
* Create disk root:
* Create disk root: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host1: Success (in 0:00:00)
# Copying any deploy scripts:
# Copying any deploy scripts: Success (in 0:00:00)
# [Thread-1] Bootstrapping lago-basic-suite-master-host0:
# [Thread-3] Bootstrapping lago-basic-suite-master-host1:
# [Thread-2] Bootstrapping lago-basic-suite-master-engine:
# [Thread-2] Bootstrapping lago-basic-suite-master-engine: Success (in 0:00:13)
# [Thread-3] Bootstrapping lago-basic-suite-master-host1: Success (in 0:00:14)
# [Thread-1] Bootstrapping lago-basic-suite-master-host0: Success (in 0:00:14)
# Save prefix:
* Save nets:
* Save nets: Success (in 0:00:00)
* Save VMs:
* Save VMs: Success (in 0:00:00)
* Save env:
* Save env: Success (in 0:00:00)
# Save prefix: Success (in 0:00:00)
@ Initialize and populate prefix: Success (in 0:00:15)
+ env_repo_setup
+ echo '#########################'
#########################
+ local extrasrc
+ declare -a extrasrcs
+ cd /home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ local reposync_conf=/var/lib/lago/ovirt-system-tests/basic-suite-master/reposync-config.repo
+ [[ -e '' ]]
+ echo 'using reposync config file: /var/lib/lago/ovirt-system-tests/basic-suite-master/reposync-config.repo'
using reposync config file: /var/lib/lago/ovirt-system-tests/basic-suite-master/reposync-config.repo
+ lago ovirt reposetup --reposync-yum-config /var/lib/lago/ovirt-system-tests/basic-suite-master/reposync-config.repo
@ Create prefix internal repo:
# Syncing remote repos locally (this might take some time):
* Acquiring lock for /var/lib/lago/reposync/repolock:
* Acquiring lock for /var/lib/lago/reposync/repolock: Success (in 0:00:00)
* Running reposync:
* Running reposync: Success (in 0:00:06)
# Syncing remote repos locally (this might take some time): Success (in 0:00:06)
# Running repoman:
# Running repoman: Success (in 0:00:05)
# Save prefix:
* Save nets:
* Save nets: Success (in 0:00:00)
* Save VMs:
* Save VMs: Success (in 0:00:00)
* Save env:
* Save env: Success (in 0:00:00)
# Save prefix: Success (in 0:00:00)
@ Create prefix internal repo: Success (in 0:00:12)
+ cd -
/home/pdm/ovirt-system-tests
+ env_start
+ echo '#########################'
#########################
+ cd /home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ lago start
@ Start Prefix:
# Start nets:
* Create network lago-basic-suite-master-lago:
* Create network lago-basic-suite-master-lago: Success (in 0:00:05)
# Start nets: Success (in 0:00:05)
# Start vms:
* Starting VM lago-basic-suite-master-host0:
* Starting VM lago-basic-suite-master-host0: Success (in 0:00:00)
* Starting VM lago-basic-suite-master-engine:
* Starting VM lago-basic-suite-master-engine: Success (in 0:00:00)
* Starting VM lago-basic-suite-master-host1:
* Starting VM lago-basic-suite-master-host1: Success (in 0:00:00)
# Start vms: Success (in 0:00:00)
@ Start Prefix: Success (in 0:00:06)
+ cd -
/home/pdm/ovirt-system-tests
+ env_deploy
+ echo '#########################'
#########################
+ cd /home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ lago ovirt deploy
@ Deploy oVirt environment:
# Deploy environment:
* [Thread-2] Deploy VM lago-basic-suite-master-host0:
* [Thread-3] Deploy VM lago-basic-suite-master-engine:
* [Thread-4] Deploy VM lago-basic-suite-master-host1:
- STDERR
++ uname -r
++ sed -r 's/^.*\.([^\.]+)\.[^\.]+$/\1/'
+ DIST=el7
++ ip -4 addr show scope global up
++ awk '{split($4,a,"."); print a[1] "." a[2] "." a[3] ".1"}'
++ grep -m1 inet
+ ADDR=192.168.200.1
+ cat
+ sed -i 's/var\/cache/dev\/shm/g' /etc/yum.conf
+ echo persistdir=/dev/shm
+ yum install '--disablerepo=*' --enablerepo=alocalsync -y yum-utils
Error: Package: yum-utils-1.1.31-40.el7.noarch (alocalsync)
Requires: libxml2-python
Error: Package: yum-utils-1.1.31-40.el7.noarch (alocalsync)
Requires: yum >= 3.4.3-143
Installed: yum-3.4.3-132.el7.centos.0.1.noarch (installed)
yum = 3.4.3-132.el7.centos.0.1
* [Thread-2] Deploy VM lago-basic-suite-master-host0: ERROR (in 0:00:35)
Error while running thread
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 55, in _ret_via_queue
queue.put({'return': func()})
File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1242, in _deploy_host
host.name(),
RuntimeError: /var/lib/lago/ovirt-system-tests/deployment-basic-suite-master/default/scripts/_var_lib_lago_ovirt-system-tests_basic-suite-master_.._common_deploy-scripts_add_local_repo.sh failed with status 1 on lago-basic-suite-master-host0
- STDERR
++ uname -r
++ sed -r 's/^.*\.([^\.]+)\.[^\.]+$/\1/'
+ DIST=el7
++ ip -4 addr show scope global up
++ grep -m1 inet
++ awk '{split($4,a,"."); print a[1] "." a[2] "." a[3] ".1"}'
+ ADDR=192.168.200.1
+ cat
+ sed -i 's/var\/cache/dev\/shm/g' /etc/yum.conf
+ echo persistdir=/dev/shm
+ yum install '--disablerepo=*' --enablerepo=alocalsync -y yum-utils
Error: Package: yum-utils-1.1.31-40.el7.noarch (alocalsync)
Requires: libxml2-python
Error: Package: yum-utils-1.1.31-40.el7.noarch (alocalsync)
Requires: yum >= 3.4.3-143
Installed: yum-3.4.3-132.el7.centos.0.1.noarch (installed)
yum = 3.4.3-132.el7.centos.0.1
* [Thread-3] Deploy VM lago-basic-suite-master-engine: ERROR (in 0:00:37)
Error while running thread
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 55, in _ret_via_queue
queue.put({'return': func()})
File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1242, in _deploy_host
host.name(),
RuntimeError: /var/lib/lago/ovirt-system-tests/deployment-basic-suite-master/default/scripts/_var_lib_lago_ovirt-system-tests_basic-suite-master_.._common_deploy-scripts_add_local_repo.sh failed with status 1 on lago-basic-suite-master-engine
- STDERR
++ uname -r
++ sed -r 's/^.*\.([^\.]+)\.[^\.]+$/\1/'
+ DIST=el7
++ ip -4 addr show scope global up
++ awk '{split($4,a,"."); print a[1] "." a[2] "." a[3] ".1"}'
++ grep -m1 inet
+ ADDR=192.168.200.1
+ cat
+ sed -i 's/var\/cache/dev\/shm/g' /etc/yum.conf
+ echo persistdir=/dev/shm
+ yum install '--disablerepo=*' --enablerepo=alocalsync -y yum-utils
Error: Package: yum-utils-1.1.31-40.el7.noarch (alocalsync)
Requires: libxml2-python
Error: Package: yum-utils-1.1.31-40.el7.noarch (alocalsync)
Requires: yum >= 3.4.3-143
Installed: yum-3.4.3-132.el7.centos.0.1.noarch (installed)
yum = 3.4.3-132.el7.centos.0.1
* [Thread-4] Deploy VM lago-basic-suite-master-host1: ERROR (in 0:00:37)
Error while running thread
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 55, in _ret_via_queue
queue.put({'return': func()})
File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1242, in _deploy_host
host.name(),
RuntimeError: /var/lib/lago/ovirt-system-tests/deployment-basic-suite-master/default/scripts/_var_lib_lago_ovirt-system-tests_basic-suite-master_.._common_deploy-scripts_add_local_repo.sh failed with status 1 on lago-basic-suite-master-host1
# Deploy environment: ERROR (in 0:00:37)
@ Deploy oVirt environment: ERROR (in 0:00:37)
Error occured, aborting
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 264, in do_run
self.cli_plugins[args.ovirtverb].do_run(args)
File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in do_run
self._do_run(**vars(args))
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 489, in wrapper
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 500, in wrapper
return func(*args, prefix=prefix, **kwargs)
File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line 187, in do_deploy
prefix.deploy()
File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 621, in wrapper
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovirtlago/reposetup.py", line 67, in wrapper
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ovirtlago/__init__.py", line 198, in deploy
return super(OvirtPrefix, self).deploy()
File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line 621, in wrapper
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1249, in deploy
self._deploy_host, self.virt_env.get_vms().values()
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 97, in invoke_in_parallel
vt.join_all()
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 55, in _ret_via_queue
queue.put({'return': func()})
File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1242, in _deploy_host
host.name(),
RuntimeError: /var/lib/lago/ovirt-system-tests/deployment-basic-suite-master/default/scripts/_var_lib_lago_ovirt-system-tests_basic-suite-master_.._common_deploy-scripts_add_local_repo.sh failed with status 1 on lago-basic-suite-master-host0
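The deploy failure above comes from add_local_repo.sh: yum-utils-1.1.31-40.el7 from the internal 'alocalsync' repo requires libxml2-python and yum >= 3.4.3-143, while the guests still have yum-3.4.3-132 and neither dependency is available from the synced repos. One quick check is whether those packages were synced locally at all; a minimal sketch, assuming the reposync cache path from the "Acquiring lock" line above (the layout underneath it is an assumption):

# Look for the packages the deploy script needs in the local reposync cache.
find /var/lib/lago/reposync -name 'libxml2-python-*.rpm'
find /var/lib/lago/reposync -name 'yum-3.4.3-*.rpm'   # anything >= 3.4.3-143 satisfies yum-utils

If nothing turns up, the CentOS base/updates entries in reposync-config.repo (or a stale local sync) are the likely culprit; removing the synced copies and re-running 'lago ovirt reposetup' should force a fresh sync.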
When I try to rerun the suite, I receive a different error:
$ cd deployment-basic-suite-master/
$ lagocli cleanup
@ Cleanup prefix:
# Stop prefix:
# Stop prefix:
* Stop vms:
* Stop vms: Success (in 0:00:00)
* Stop nets:
* Stop nets: Success (in 0:00:00)
# Stop prefix: Success (in 0:00:00)
# Tag prefix as uninitialized:
# Tag prefix as uninitialized: Success (in 0:00:00)
@ Cleanup prefix: Success (in 0:00:00)
$ cd ..
$ ./run_suite.sh basic-suite-master
+ CLI=lago
+ DO_CLEANUP=false
+ RECOMMENDED_RAM_IN_MB=8196
+ EXTRA_SOURCES=()
++ getopt -o ho:e:n:b:cs:r: --long help,output:,engine:,node:,boot-iso:,cleanup --long extra-rpm-source,reposync-config: -n run_suite.sh -- basic-suite-master
+ options=' -- '\''basic-suite-master'\'''
+ [[ 0 != \0 ]]
+ eval set -- ' -- '\''basic-suite-master'\'''
++ set -- -- basic-suite-master
+ true
+ case $1 in
+ shift
+ break
+ [[ -z basic-suite-master ]]
++ realpath basic-suite-master
+ export SUITE=/var/lib/lago/ovirt-system-tests/basic-suite-master
+ SUITE=/var/lib/lago/ovirt-system-tests/basic-suite-master
+ '[' -z '' ']'
+ export PREFIX=/home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ PREFIX=/home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ false
+ [[ -d /var/lib/lago/ovirt-system-tests/basic-suite-master ]]
+ echo '################# lago version'
################# lago version
+ lago --version
lago 0.28.0
+ echo '#################'
#################
+ check_ram 8196
+ local recommended=8196
++ free -m
++ grep Mem
++ awk '{print $2}'
+ local cur_ram=7822
+ [[ 7822 -lt 8196 ]]
+ echo 'It'\''s recommended to have at least 8196MB of RAM' 'installed on the system to run the system tests, if you find' 'issues while running them, consider upgrading your system.' '(only detected 7822MB installed)'
It's recommended to have at least 8196MB of RAM installed on the system to run the system tests, if you find issues while running them, consider upgrading your system. (only detected 7822MB installed)
+ echo 'Running suite found in /var/lib/lago/ovirt-system-tests/basic-suite-master'
Running suite found in /var/lib/lago/ovirt-system-tests/basic-suite-master
+ echo 'Environment will be deployed at /home/pdm/ovirt-system-tests/deployment-basic-suite-master'
Environment will be deployed at /home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ rm -rf /home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ source /var/lib/lago/ovirt-system-tests/basic-suite-master/control.sh
+ prep_suite '' '' ''
+ local suite_name=basic-suite-master
+ suite_name=basic-suite-master
+ sed -r -e s,__ENGINE__,lago-basic-suite-master-engine,g -e 's,__HOST([0-9]+)__,lago-basic-suite-master-host\1,g' -e s,__LAGO_NET__,lago-basic-suite-master-lago,g -e s,__STORAGE__,lago-basic-suite-master-storage,g
+ run_suite
+ env_init '' /var/lib/lago/ovirt-system-tests/basic-suite-master/LagoInitFile
+ echo '#########################'
#########################
+ local template_repo=/var/lib/lago/ovirt-system-tests/basic-suite-master/template-repo.json
+ local initfile=/var/lib/lago/ovirt-system-tests/basic-suite-master/LagoInitFile
+ lago init /home/pdm/ovirt-system-tests/deployment-basic-suite-master /var/lib/lago/ovirt-system-tests/basic-suite-master/LagoInitFile --template-repo-path /var/lib/lago/ovirt-system-tests/basic-suite-master/template-repo.json
@ Initialize and populate prefix:
# Initialize prefix:
* Create prefix dirs:
* Create prefix dirs: Success (in 0:00:00)
* Generate prefix uuid:
* Generate prefix uuid: Success (in 0:00:00)
* Create ssh keys:
* Create ssh keys: Success (in 0:00:00)
* Tag prefix as initialized:
* Tag prefix as initialized: Success (in 0:00:00)
# Initialize prefix: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host0:
* Create disk root:
* Create disk root: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host0: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-engine:
* Create disk root:
* Create disk root: Success (in 0:00:00)
* Create disk nfs:
* Create disk nfs: Success (in 0:00:00)
* Create disk export:
* Create disk export: Success (in 0:00:00)
* Create disk iscsi:
* Create disk iscsi: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-engine: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host1:
* Create disk root:
* Create disk root: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host1: Success (in 0:00:00)
# Copying any deploy scripts:
# Copying any deploy scripts: Success (in 0:00:00)
# [Thread-1] Bootstrapping lago-basic-suite-master-host0:
# [Thread-2] Bootstrapping lago-basic-suite-master-engine:
# [Thread-3] Bootstrapping lago-basic-suite-master-host1:
# [Thread-1] Bootstrapping lago-basic-suite-master-host0: Success (in 0:00:16)
# [Thread-3] Bootstrapping lago-basic-suite-master-host1: Success (in 0:00:17)
# [Thread-2] Bootstrapping lago-basic-suite-master-engine: Success (in 0:00:17)
# Save prefix:
* Save nets:
* Save nets: Success (in 0:00:00)
* Save VMs:
* Save VMs: Success (in 0:00:00)
* Save env:
* Save env: Success (in 0:00:00)
# Save prefix: Success (in 0:00:00)
@ Initialize and populate prefix: Success (in 0:00:18)
+ env_repo_setup
+ echo '#########################'
#########################
+ local extrasrc
+ declare -a extrasrcs
+ cd /home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ local reposync_conf=/var/lib/lago/ovirt-system-tests/basic-suite-master/reposync-config.repo
+ [[ -e '' ]]
+ echo 'using reposync config file: /var/lib/lago/ovirt-system-tests/basic-suite-master/reposync-config.repo'
using reposync config file: /var/lib/lago/ovirt-system-tests/basic-suite-master/reposync-config.repo
+ lago ovirt reposetup --reposync-yum-config /var/lib/lago/ovirt-system-tests/basic-suite-master/reposync-config.repo
@ Create prefix internal repo:
# Syncing remote repos locally (this might take some time):
* Acquiring lock for /var/lib/lago/reposync/repolock:
* Acquiring lock for /var/lib/lago/reposync/repolock: Success (in 0:00:00)
* Running reposync:
* Running reposync: Success (in 0:00:06)
# Syncing remote repos locally (this might take some time): Success (in 0:00:06)
# Running repoman:
# Running repoman: Success (in 0:00:05)
# Save prefix:
* Save nets:
* Save nets: Success (in 0:00:00)
* Save VMs:
* Save VMs: Success (in 0:00:00)
* Save env:
* Save env: Success (in 0:00:00)
# Save prefix: Success (in 0:00:00)
@ Create prefix internal repo: Success (in 0:00:12)
+ cd -
/home/pdm/ovirt-system-tests
+ env_start
+ echo '#########################'
#########################
+ cd /home/pdm/ovirt-system-tests/deployment-basic-suite-master
+ lago start
@ Start Prefix:
# Start nets:
* Create network lago-basic-suite-master-lago:
libvirt: error : internal error: Network is already in use by interface 2584-2e38ffa
* Create network lago-basic-suite-master-lago: ERROR (in 0:00:00)
# Start nets: ERROR (in 0:00:00)
@ Start Prefix: ERROR (in 0:00:00)
Error occured, aborting
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 822, in main
cli_plugins[args.verb].do_run(args)
File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py", line 184, in do_run
self._do_run(**vars(args))
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 489, in wrapper
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/lago/utils.py", line 500, in wrapper
return func(*args, prefix=prefix, **kwargs)
File "/usr/lib/python2.7/site-packages/lago/cmd.py", line 258, in do_start
prefix.start(vm_names=vm_names)
File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 949, in start
self.virt_env.start(vm_names=vm_names)
File "/usr/lib/python2.7/site-packages/lago/virt.py", line 186, in start
net.start()
File "/usr/lib/python2.7/site-packages/lago/virt.py", line 358, in start
self.libvirt_con.networkCreateXML(self._libvirt_xml())
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 4262, in networkCreateXML
if ret is None:raise libvirtError('virNetworkCreateXML() failed', conn=self)
libvirtError: internal error: Network is already in use by interface 2584-2e38ffa
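A guess at the cause of the rerun failure: run_suite.sh removed the old prefix (the 'rm -rf' above), so Lago has forgotten about the first run's network, but libvirt still holds a leftover bridge, hence "Network is already in use by interface 2584-2e38ffa". A hedged way to inspect and clear the leftover before retrying, using only stock virsh/ip commands (the names below come from the error and log above; adjust them to whatever the listing actually shows):

# List leftover libvirt networks and the conflicting interface.
virsh -c qemu:///system net-list --all
ip link show 2584-2e38ffa

# Remove whichever of the two is stale, then rerun ./run_suite.sh basic-suite-master.
virsh -c qemu:///system net-destroy lago-basic-suite-master-lago
ip link delete 2584-2e38ffa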
[JIRA] (OVIRT-933) Fix repoproxy service and logging mess
by Evgheni Dereveanchin (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-933?page=com.atlassian.jira... ]
Evgheni Dereveanchin reassigned OVIRT-933:
------------------------------------------
Assignee: Evgheni Dereveanchin (was: infra)
> Fix repoproxy service and logging mess
> --------------------------------------
>
> Key: OVIRT-933
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-933
> Project: oVirt - virtualization made easy
> Issue Type: Improvement
> Reporter: Barak Korren
> Assignee: Evgheni Dereveanchin
> Priority: High
>
> There are a couple of issues with the way repoproxy runs right now; these issues probably make it less stable than it could be:
> # repoproxy can currently be invoked only from Puppet, because a lot of the data on how to run it is stored in Puppet and nowhere else. We should make Puppet create a systemd unit file to run repoproxy instead of running it itself. This way we can get rid of the cron job that runs Puppet every 2 minutes on the proxy (a rough sketch of such a unit follows this list).
> # repoproxy is configured to run as the '{{squid}}' user and log to '{{/var/log/repoproxy.log}}'. That file is pre-created with the right permissions by Puppet so repoproxy can write to it. However, the Python logger in repoproxy is configured to try to rotate the log when it becomes too big. That, in turn, will cause it to crash because the '{{squid}}' user cannot rename files in '{{/var/log}}'. We should move the repoproxy logs somewhere else.
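For the first item, a minimal sketch of what a Puppet-deployed unit could look like; the service name, binary path and arguments are placeholders rather than the actual repoproxy setup, and sending output to the journal is one possible answer to the log-rotation problem in the second item:

# Hypothetical unit file; the real paths and arguments live in Puppet today.
cat > /etc/systemd/system/repoproxy.service <<'EOF'
[Unit]
Description=oVirt repo proxy
After=network.target

[Service]
User=squid
ExecStart=/usr/bin/repoproxy --config /etc/repoproxy.conf
# Logging to journald avoids the /var/log rename/rotation permission issue.
StandardOutput=journal
StandardError=journal
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable repoproxy.service
systemctl start repoproxy.service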
--
This message was sent by Atlassian JIRA
(v1000.620.0#100023)
[JIRA] (OVIRT-933) Fix repoproxy service and logging mess
by eyal edri [Administrator] (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-933?page=com.atlassian.jira... ]
eyal edri [Administrator] commented on OVIRT-933:
-------------------------------------------------
[~ederevea] didn't we disable or stop using the repo proxy? I remember another ticket about it.
> Fix repoproxy service and logging mess
> --------------------------------------
>
> Key: OVIRT-933
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-933
> Project: oVirt - virtualization made easy
> Issue Type: Improvement
> Reporter: Barak Korren
> Assignee: infra
> Priority: High
>
> There are a couple of issues with the way repoproxy runs right now; these issues probably make it less stable than it could be:
> # repoproxy can currently be invoked only from Puppet, because a lot of the data on how to run it is stored in Puppet and nowhere else. We should make Puppet create a systemd unit file to run repoproxy instead of running it itself. This way we can get rid of the cron job that runs Puppet every 2 minutes on the proxy.
> # repoproxy is configured to run as the '{{squid}}' user and log to '{{/var/log/repoproxy.log}}'. That file is pre-created with the right permissions by Puppet so repoproxy can write to it. However, the Python logger in repoproxy is configured to try to rotate the log when it becomes too big. That, in turn, will cause it to crash because the '{{squid}}' user cannot rename files in '{{/var/log}}'. We should move the repoproxy logs somewhere else.
--
This message was sent by Atlassian JIRA
(v1000.620.0#100023)
[JIRA] (OVIRT-933) Fix repoproxy service and logging mess
by Yaniv Kaul (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-933?page=com.atlassian.jira... ]
Yaniv Kaul updated OVIRT-933:
-----------------------------
Priority: High (was: Medium)
> Fix repoproxy service and logging mess
> --------------------------------------
>
> Key: OVIRT-933
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-933
> Project: oVirt - virtualization made easy
> Issue Type: Improvement
> Reporter: Barak Korren
> Assignee: infra
> Priority: High
>
> There are a couple of issues with the way repoproxy runs right now; these issues probably make it less stable than it could be:
> # repoproxy can currently be invoked only from Puppet, because a lot of the data on how to run it is stored in Puppet and nowhere else. We should make Puppet create a systemd unit file to run repoproxy instead of running it itself. This way we can get rid of the cron job that runs Puppet every 2 minutes on the proxy.
> # repoproxy is configured to run as the '{{squid}}' user and log to '{{/var/log/repoproxy.log}}'. That file is pre-created with the right permissions by Puppet so repoproxy can write to it. However, the Python logger in repoproxy is configured to try to rotate the log when it becomes too big. That, in turn, will cause it to crash because the '{{squid}}' user cannot rename files in '{{/var/log}}'. We should move the repoproxy logs somewhere else.
--
This message was sent by Atlassian JIRA
(v1000.620.0#100023)
[JIRA] (OVIRT-933) Fix repoproxy service and logging mess
by Yaniv Kaul (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-933?page=com.atlassian.jira... ]
Yaniv Kaul commented on OVIRT-933:
----------------------------------
Moving priority to High, based on the 'will cause it to crash' statement above.
> Fix repoproxy service and logging mess
> --------------------------------------
>
> Key: OVIRT-933
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-933
> Project: oVirt - virtualization made easy
> Issue Type: Improvement
> Reporter: Barak Korren
> Assignee: infra
> Priority: High
>
> There are a couple of issues with the way repoproxy runs right now; these issues probably make it less stable than it could be:
> # repoproxy can currently be invoked only from Puppet, because a lot of the data on how to run it is stored in Puppet and nowhere else. We should make Puppet create a systemd unit file to run repoproxy instead of running it itself. This way we can get rid of the cron job that runs Puppet every 2 minutes on the proxy.
> # repoproxy is configured to run as the '{{squid}}' user and log to '{{/var/log/repoproxy.log}}'. That file is pre-created with the right permissions by Puppet so repoproxy can write to it. However, the Python logger in repoproxy is configured to try to rotate the log when it becomes too big. That, in turn, will cause it to crash because the '{{squid}}' user cannot rename files in '{{/var/log}}'. We should move the repoproxy logs somewhere else.
--
This message was sent by Atlassian JIRA
(v1000.620.0#100023)
[JIRA] (OVIRT-933) Fix repoproxy service and logging mess
by Barak Korren (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-933?page=com.atlassian.jira... ]
Barak Korren updated OVIRT-933:
-------------------------------
Summary: Fix repoproxy service and logging mess (was: Fixe repoproxy service and logging mess)
> Fix repoproxy service and logging mess
> --------------------------------------
>
> Key: OVIRT-933
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-933
> Project: oVirt - virtualization made easy
> Issue Type: Improvement
> Reporter: Barak Korren
> Assignee: infra
>
> There are a couple of issues with the way repoproxy runs right now; these issues probably make it less stable than it could be:
> # repoproxy can currently be invoked only from Puppet, because a lot of the data on how to run it is stored in Puppet and nowhere else. We should make Puppet create a systemd unit file to run repoproxy instead of running it itself. This way we can get rid of the cron job that runs Puppet every 2 minutes on the proxy.
> # repoproxy is configured to run as the '{{squid}}' user and log to '{{/var/log/repoproxy.log}}'. That file is pre-created with the right permissions by Puppet so repoproxy can write to it. However, the Python logger in repoproxy is configured to try to rotate the log when it becomes too big. That, in turn, will cause it to crash because the '{{squid}}' user cannot rename files in '{{/var/log}}'. We should move the repoproxy logs somewhere else.
--
This message was sent by Atlassian JIRA
(v1000.620.0#100023)