Killing XMLRPC on master
by Nir Soffer
Hi all,
We are still wasting time on maintaining both xmlrpc and jsonrpc. If we kill
xmlrpc, we can greatly simplify the code base, making it easier to maintain
and add new features.
I suggest we kill xmlrpc in 4.1, and disable it *now* on master.
Currently we have 3 issues:
1. Mom is still using xmlrpc
Mom must move to jsonrpc.
Martin: can you give us an update on the progress of this work?
2. The sos plugin is still using vdsClient
We need to port it to use the jsonrpc library, or the new jsonrpc client.
New jsonrpc client: https://gerrit.ovirt.org/35181
3. Engine is using the xmlrpc server for OVF upload/download
We must support the current engine in 4.1, so we cannot remove the
upload/download feature in this version, but we can remove the
xmlrpc support from this server.
Currently we abuse the xmlrpc server by supporting PUT and GET for
upload and download (XMLRPC uses only POST). We can disable
POST requests in the protocol detector, and not register anything with
the xmlrpc server; a rough sketch of the idea follows.
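To make that concrete, here is a minimal sketch of filtering on the HTTP verb.
This is hypothetical code, not the actual protocoldetector API; it only
illustrates rejecting POST (XMLRPC) while keeping PUT/GET (upload/download)
alive:

    # Hypothetical sketch: reject XMLRPC-style POST requests while still
    # letting the OVF upload/download verbs (PUT/GET) through. The real
    # vdsm protocol detector API may differ; only the idea is illustrated.

    ALLOWED_VERBS = (b"PUT ", b"GET ")   # keep upload/download working
    REJECTED_VERBS = (b"POST ",)         # XMLRPC uses POST only

    def detect_http(first_bytes):
        """
        Return True if this connection should be handled by the HTTP
        (upload/download) server, False if it should be dropped.
        """
        if any(first_bytes.startswith(verb) for verb in REJECTED_VERBS):
            # No handler registered -> XMLRPC is effectively disabled.
            return False
        return any(first_bytes.startswith(verb) for verb in ALLOWED_VERBS)

    if __name__ == "__main__":
        assert detect_http(b"PUT /ovf HTTP/1.1\r\n")
        assert not detect_http(b"POST /RPC2 HTTP/1.1\r\n")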
Thoughts?
Nir
Multiple versions used in oVirt JAVA libraries
by Eyal Edri
Hi,
Using the latest tool we deployed for finding various issues in oVirt, I
found that one of the reports was about multiple occurrences of the same
library being used with different versions. Can someone verify whether this
is true and whether we need to align them all to the same version?
Library                            Type                        Description
commons-logging-1.1.1.jar          Multiple Library Versions   Version 1.2 is being used in project engine-server-ear
xml-apis-1.0.b2.jar                Multiple Library Versions   Version 1.3.04 is being used in project restapi-jaxrs
commons-logging-1.0.4.jar          Multiple Library Versions   Version 1.2 is being used in project engine-server-ear
commons-beanutils-1.7.0.jar        Multiple Library Versions   Version 1.8.2 is being used in project interface-common-jaxrs
commons-beanutils-core-1.8.0.jar   Multiple Library Versions   Version 1.8.3 is being used in project ovirt-checkstyle-extension
jboss-logging-3.1.1.GA.jar         Multiple Library Versions   Version 3.1.4.GA is being used in project interface-common-jaxrs
jackson-jaxrs-1.9.4.jar            Multiple Library Versions   Version 1.9.9 is being used in project restapi-jaxrs
jboss-logging-3.1.2.GA.jar         Multiple Library Versions   Version 3.1.4.GA is being used in project interface-common-jaxrs
jaxrs-api-2.1.0.GA.jar             Multiple Library Versions   Version 2.3.2.Final is being used in project engine-server-ear
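If it helps to double check the report, here is a minimal sketch of a scanner
that walks a build tree and flags artifacts packaged with more than one
version. The root path and the filename pattern are assumptions for
illustration only, not output of the tool above:

    # Minimal sketch: walk a build tree and report jars that appear with
    # more than one version. Path and regex are illustrative assumptions.
    import os
    import re
    from collections import defaultdict

    JAR_RE = re.compile(r"^(?P<name>[A-Za-z0-9_.-]+?)-(?P<version>\d[\w.]*)\.jar$")

    def find_duplicate_versions(root):
        versions = defaultdict(set)
        for dirpath, _dirnames, filenames in os.walk(root):
            for filename in filenames:
                match = JAR_RE.match(filename)
                if match:
                    versions[match.group("name")].add(match.group("version"))
        return {name: found for name, found in versions.items() if len(found) > 1}

    if __name__ == "__main__":
        for name, found in sorted(find_duplicate_versions("ovirt-engine").items()):
            print("%s: %s" % (name, ", ".join(sorted(found))))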
--
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
[VDSM] New network test failing randomly
by Nir Soffer
Hi all,
I saw this new failure yesterday:
20:44:20 ======================================================================
20:44:20 FAIL: testGetDeviceByIP (network.netinfo_test.TestNetinfo)
20:44:20 ----------------------------------------------------------------------
20:44:20 Traceback (most recent call last):
20:44:20   File "/home/jenkins/workspace/vdsm_master_check-patch-fc23-x86_64/vdsm/tests/network/netinfo_test.py", line 144, in testGetDeviceByIP
20:44:20     addresses.getDeviceByIP(addr['address'].split('/')[0]))
20:44:20 AssertionError: 'bond1' != 'eno2'
http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc23-x86_64/3451/con...
The test did not fail on the next run.
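One way to tell whether this is a real regression or just an address that
legitimately lives on more than one device (e.g. a bond and another
interface, which would explain 'bond1' != 'eno2') is a quick diagnostic like
the sketch below. It only parses 'ip -o addr show' output and is not part of
the test suite:

    # Diagnostic sketch: map each local IP address to the devices carrying
    # it, so an assertion like 'bond1' != 'eno2' can be checked against
    # the actual host configuration.
    import subprocess
    from collections import defaultdict

    def addresses_by_device():
        output = subprocess.check_output(["ip", "-o", "addr", "show"])
        mapping = defaultdict(set)
        for line in output.decode().splitlines():
            fields = line.split()
            # fields: index, device, family ('inet'/'inet6'), 'address/prefix', ...
            if len(fields) >= 4 and fields[2] in ("inet", "inet6"):
                address = fields[3].split("/")[0]
                mapping[address].add(fields[1])
        return mapping

    if __name__ == "__main__":
        for address, devices in sorted(addresses_by_device().items()):
            if len(devices) > 1:
                print("%s is configured on: %s" % (address, ", ".join(sorted(devices))))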
Nir
[VDSM] Failing tests on fresh install
by Edward Haas
Hello,
I got the error below when running make check after a fresh install.
Has anyone seen this before?
Thanks,
Edy.
======================================================================
ERROR: testAlignedAppend (miscTests.DdWatchCopy)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/root/workspace/vdsm/tests/miscTests.py", line 373, in testAlignedAppend
    "/dev/zero", path, None, misc.MEGA, os.stat(path).st_size)
  File "/root/workspace/vdsm/lib/vdsm/storage/misc.py", line 308, in ddWatchCopy
    deathSignal=signal.SIGKILL)
  File "/root/workspace/vdsm/lib/vdsm/commands.py", line 71, in execCmd
    deathSignal=deathSignal)
  File "/usr/lib64/python2.7/site-packages/cpopen/__init__.py", line 63, in __init__
    **kw)
  File "/usr/lib64/python2.7/subprocess.py", line 711, in __init__
    errread, errwrite)
  File "/usr/lib64/python2.7/site-packages/cpopen/__init__.py", line 82, in _execute_child_v276
    restore_sigpipe=restore_sigpipe
  File "/usr/lib64/python2.7/site-packages/cpopen/__init__.py", line 107, in _execute_child_v275
    restore_sigpipe
OSError: [Errno 0] Error
-------------------- >> begin captured logging << --------------------
2016-06-28 03:51:05,481 DEBUG [Storage.Misc.excCmd] (MainThread) /usr/bin/taskset --cpu-list 0-0 /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd if=/dev/zero of=/var/tmp/tmp_HJ3St bs=1048576 seek=1 skip=1 conv=notrunc count=1 oflag=direct (cwd None)
--------------------- >> end captured logging << ---------------------
----------------------------------------------------------------------
Ran 2468 tests in 126.028s
FAILED (SKIP=133, errors=1)
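The captured logging shows the exact dd command line, so one way to narrow
this down (dd itself vs. cpopen's fork/exec path) is to re-run the same
command with plain subprocess. A sketch, recreating the target file since
/var/tmp/tmp_HJ3St was a temp file created by the test:

    # Sketch: re-run the dd command from the captured log with plain
    # subprocess to see whether the OSError comes from dd or from cpopen.
    import subprocess
    import tempfile

    # Recreate a target file similar to the one the test used.
    with tempfile.NamedTemporaryFile(dir="/var/tmp", delete=False) as target:
        path = target.name

    cmd = [
        "/usr/bin/taskset", "--cpu-list", "0-0",
        "/usr/bin/nice", "-n", "19",
        "/usr/bin/ionice", "-c", "3",
        "/usr/bin/dd", "if=/dev/zero", "of=%s" % path,
        "bs=1048576", "seek=1", "skip=1", "conv=notrunc", "count=1", "oflag=direct",
    ]
    print(subprocess.call(cmd))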
Re: [ovirt-devel] Build failed in Jenkins: ovirt_3.6_system-tests #168
by Nir Soffer
I think this was sent by mistake to devel(a)ovirt.org; please do not
send such messages to this list.
On Tue, Jun 28, 2016 at 7:49 PM, <jenkins(a)jenkins.phx.ovirt.org> wrote:
> See <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/168/changes>
>
> Changes:
>
> [Ondra Machacek] Deploy FreeIPA to storage machine
>
> ------------------------------------------
> [...truncated 554 lines...]
> ## took 2194 seconds
> ## rc = 1
> ##########################################################
> ##! ERROR vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
> ##! Last 20 log enties: logs/mocker-fedora-23-x86_64.fc23.basic_suite_3.6.sh/basic_suite_3.6.sh.log
> ##!
> @ Collect artifacts:
> # ovirt-role metadata entry will be soon deprecated, instead you should use the vm-provider entry in the domain definiton and set it no one of: ovirt-node, ovirt-engine, ovirt-host
> # [Thread-1] lago_basic_suite_3_6_engine:
> # [Thread-2] lago_basic_suite_3_6_host1:
> # [Thread-3] lago_basic_suite_3_6_host0:
> # [Thread-4] lago_basic_suite_3_6_storage:
> # [Thread-3] lago_basic_suite_3_6_host0: Success (in 0:00:16)
> # [Thread-2] lago_basic_suite_3_6_host1: Success (in 0:00:16)
> # [Thread-1] lago_basic_suite_3_6_engine: Success (in 0:00:16)
> # [Thread-4] lago_basic_suite_3_6_storage: Success (in 0:00:16)
> @ Collect artifacts: Success (in 0:00:16)
> <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests>
> @@@@ ERROR: Failed running <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests...>
> #########################
> ======== Cleaning up
> ----------- Cleaning with lago
> ----------- Cleaning with lago done
> ======== Cleanup done
> Took 2033 seconds
> ===================================
> ##!
> ##! ERROR ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> ##!########################################################
> ##########################################################
> Build step 'Execute shell' marked build as failure
> Performing Post build task...
> Match found for :.* : True
> Logical operation result is TRUE
> Running script : #!/bin/bash -xe
> echo 'shell_scripts/system_tests.collect_logs.sh'
>
> #
> # Required jjb vars:
> # version
> #
> VERSION=3.6
> SUITE_TYPE=
>
> WORKSPACE="$PWD"
> OVIRT_SUITE="$SUITE_TYPE_suite_$VERSION"
> TESTS_LOGS="$WORKSPACE/ovirt-system-tests/exported-artifacts"
>
> rm -rf "$WORKSPACE/exported-artifacts"
> mkdir -p "$WORKSPACE/exported-artifacts"
>
> if [[ -d "$TESTS_LOGS" ]]; then
> mv "$TESTS_LOGS/"* "$WORKSPACE/exported-artifacts/"
> fi
>
> [ovirt_3.6_system-tests] $ /bin/bash -xe /tmp/hudson3155110657872379488.sh
> + echo shell_scripts/system_tests.collect_logs.sh
> shell_scripts/system_tests.collect_logs.sh
> + VERSION=3.6
> + SUITE_TYPE=
> + WORKSPACE=<http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/>
> + OVIRT_SUITE=3.6
> + TESTS_LOGS=<http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests...>
> + rm -rf <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/168/artifact/exported...>
> + mkdir -p <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/168/artifact/exported...>
> + [[ -d <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests...> ]]
> + mv <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests...> <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests...> <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests...> <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests...> <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/168/artifact/exported...>
> POST BUILD TASK : SUCCESS
> END OF POST BUILD TASK : 0
> Match found for :.* : True
> Logical operation result is TRUE
> Running script : #!/bin/bash -xe
> echo "shell-scripts/mock_cleanup.sh"
>
> shopt -s nullglob
>
>
> WORKSPACE="$PWD"
>
> # Make clear this is the cleanup, helps reading the jenkins logs
> cat <<EOC
> _______________________________________________________________________
> #######################################################################
> # #
> # CLEANUP #
> # #
> #######################################################################
> EOC
>
>
> # Archive the logs, we want them anyway
> logs=(
> ./*log
> ./*/logs
> )
> if [[ "$logs" ]]; then
> tar cvzf exported-artifacts/logs.tgz "${logs[@]}"
> rm -rf "${logs[@]}"
> fi
>
> # stop any processes running inside the chroot
> failed=false
> mock_confs=("$WORKSPACE"/*/mocker*)
> # Clean current jobs mockroot if any
> for mock_conf_file in "${mock_confs[@]}"; do
> [[ "$mock_conf_file" ]] || continue
> echo "Cleaning up mock $mock_conf"
> mock_root="${mock_conf_file##*/}"
> mock_root="${mock_root%.*}"
> my_mock="/usr/bin/mock"
> my_mock+=" --configdir=${mock_conf_file%/*}"
> my_mock+=" --root=${mock_root}"
> my_mock+=" --resultdir=$WORKSPACE"
>
> #TODO: investigate why mock --clean fails to umount certain dirs sometimes,
> #so we can use it instead of manually doing all this.
> echo "Killing all mock orphan processes, if any."
> $my_mock \
> --orphanskill \
> || {
> echo "ERROR: Failed to kill orphans on $chroot."
> failed=true
> }
>
> mock_root="$(\
> grep \
> -Po "(?<=config_opts\['root'\] = ')[^']*" \
> "$mock_conf_file" \
> )" || :
> [[ "$mock_root" ]] || continue
> mounts=($(mount | awk '{print $3}' | grep "$mock_root")) || :
> if [[ "$mounts" ]]; then
> echo "Found mounted dirs inside the chroot $chroot. Trying to umount."
> fi
> for mount in "${mounts[@]}"; do
> sudo umount --lazy "$mount" \
> || {
> echo "ERROR: Failed to umount $mount."
> failed=true
> }
> done
> done
>
> # Clean any leftover chroot from other jobs
> for mock_root in /var/lib/mock/*; do
> this_chroot_failed=false
> mounts=($(mount | awk '{print $3}' | grep "$mock_root")) || :
> if [[ "$mounts" ]]; then
> echo "Found mounted dirs inside the chroot $mock_root." \
> "Trying to umount."
> fi
> for mount in "${mounts[@]}"; do
> sudo umount --lazy "$mount" \
> || {
> echo "ERROR: Failed to umount $mount."
> failed=true
> this_chroot_failed=true
> }
> done
> if ! $this_chroot_failed; then
> sudo rm -rf "$mock_root"
> fi
> done
>
> if $failed; then
> echo "Aborting."
> exit 1
> fi
>
> # remove mock system cache, we will setup proxies to do the caching and this
> # takes lots of space between runs
> shopt -u nullglob
> sudo rm -Rf /var/cache/mock/*
>
> # restore the permissions in the working dir, as sometimes it leaves files
> # owned by root and then the 'cleanup workspace' from jenkins job fails to
> # clean and breaks the jobs
> sudo chown -R "$USER" "$WORKSPACE"
>
> [ovirt_3.6_system-tests] $ /bin/bash -xe /tmp/hudson1974597178250713835.sh
> + echo shell-scripts/mock_cleanup.sh
> shell-scripts/mock_cleanup.sh
> + shopt -s nullglob
> + WORKSPACE=<http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/>
> + cat
> _______________________________________________________________________
> #######################################################################
> # #
> # CLEANUP #
> # #
> #######################################################################
> + logs=(./*log ./*/logs)
> + [[ -n ./ovirt-system-tests/logs ]]
> + tar cvzf exported-artifacts/logs.tgz ./ovirt-system-tests/logs
> ./ovirt-system-tests/logs/
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/state.log
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/build.log
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/stdout_stderr.log
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/root.log
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/state.log
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/build.log
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/stdout_stderr.log
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/root.log
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.basic_suite_3.6.sh/
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.basic_suite_3.6.sh/basic_suite_3.6.sh.log
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.clean_rpmdb/
> ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.clean_rpmdb/stdout_stderr.log
> + rm -rf ./ovirt-system-tests/logs
> + failed=false
> + mock_confs=("$WORKSPACE"/*/mocker*)
> + for mock_conf_file in '"${mock_confs[@]}"'
> + [[ -n <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests...> ]]
> + echo 'Cleaning up mock '
> Cleaning up mock
> + mock_root=mocker-fedora-23-x86_64.fc23.cfg
> + mock_root=mocker-fedora-23-x86_64.fc23
> + my_mock=/usr/bin/mock
> + my_mock+=' --configdir=<http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests'>
> + my_mock+=' --root=mocker-fedora-23-x86_64.fc23'
> + my_mock+=' --resultdir=<http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/'>
> + echo 'Killing all mock orphan processes, if any.'
> Killing all mock orphan processes, if any.
> + /usr/bin/mock --configdir=<http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests> --root=mocker-fedora-23-x86_64.fc23 --resultdir=<http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/> --orphanskill
> WARNING: Could not find required logging config file: <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests....> Using default...
> INFO: mock.py version 1.2.17 starting (python version = 3.4.3)...
> Start: init plugins
> INFO: selinux enabled
> Finish: init plugins
> Start: run
> Finish: run
> ++ grep -Po '(?<=config_opts\['\''root'\''\] = '\'')[^'\'']*' <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/ovirt-system-tests...>
> + mock_root=fedora-23-x86_64-ecdfd9c866cbf1a2d41342199fdb011f
> + [[ -n fedora-23-x86_64-ecdfd9c866cbf1a2d41342199fdb011f ]]
> + mounts=($(mount | awk '{print $3}' | grep "$mock_root"))
> ++ mount
> ++ grep fedora-23-x86_64-ecdfd9c866cbf1a2d41342199fdb011f
> ++ awk '{print $3}'
> + :
> + [[ -n '' ]]
> + false
> + shopt -u nullglob
> + sudo rm -Rf /var/cache/mock/fedora-23-x86_64-ecdfd9c866cbf1a2d41342199fdb011f
> + sudo chown -R jenkins <http://jenkins.ovirt.org/job/ovirt_3.6_system-tests/ws/>
> POST BUILD TASK : SUCCESS
> END OF POST BUILD TASK : 1
> Recording test results
> Archiving artifacts
> _______________________________________________
> Infra mailing list
> Infra(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/infra
Re: [ovirt-devel] Storage in VDSM questions
by Liron Aravot
>
> ---------- Forwarded message ----------
> From: Shmuel Melamud <smelamud(a)redhat.com>
> Date: Wed, Jun 22, 2016 at 4:51 PM
> Subject: [ovirt-devel] Storage in VDSM questions
> To: devel <devel(a)ovirt.org>
>
>
> Hi!
>
> I have a bunch of questions related to storage in VDSM. I would be happy
> if anybody familiar with the subject can help me with them.
>
> 1. What is HSM? IRS?
>
IRS = SPM
HSM = non-SPM host
> 2. What is SDM?
>
SDM = Storage Domain Management,
a feature currently under development that provides an alternative method to
the SPM for managing shared storage.
> 3. What hsm.spmSchedule() method does?
>
It schedules a task to be executed on the SPM (currently used to perform
operations that alter the shared storage, or data operations).
> 4. What does domainMonitor.getHostId(vol_info.sd_id) method return?
>
I'd recommend doing some sanlock reading (see below).
Basically, it is the id the host uses to acquire locks on the shared storage.
http://linux.die.net/man/8/sanlock
http://old.ovirt.org/SANLock
>
> Storage domain
>
> 5. What is StorageDomainManifest? Why there are separate StorageDomain and
> StorageDomainManifest classes? (And the same with Volume and
> VolumeManifest.)
>
Those are changes made in order to share "SPM"-related code with "SDM"-related
code (as not all of the code is relevant to both).
alitke/nsoffer could share more information on the decisions behind adding
those.
> 6. What does sd_manifest.domain_lock(host_id) guard? Why do we need the
> host_id parameter?
>
see answer to question 4.
> 7. Why in HSM we are using vars.task.getSharedLock(STORAGE, sdUUID)
> sometimes instead of domain lock?
>
The task lock (which uses the resource manager) is used to acquire an
in-memory logical lock within the current host.
A theoretical example: you wouldn't want to attempt to delete a domain while
you create a volume on it.
> 8. What does sd_manifest.get_volume_artifacts(img_id, vol_id) return?
>
> Resource manager
>
> 9. What is ResourceManager?
>
The resource manager is basically an in-memory lock manager in vdsm.
> 10. What rmanager.acquireResource(image_res_ns, img_id, lockType) does?
> What types of resources it may be used with?
>
Well, the best answer is the simple one: it is used to acquire a resource.
You specify a namespace and an id, and the lock is taken on namespace_id,
so you can acquire a lock on the same id under different namespaces; see the
toy sketch below.
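A toy illustration of that namespace_id idea (this is not vdsm's
ResourceManager, it ignores shared vs. exclusive lock types, and the
namespace names are made up; it only shows that the same id is independent
across namespaces):

    # Toy sketch of the namespace_id locking idea described above.
    import threading
    from collections import defaultdict

    class ToyResourceManager(object):
        def __init__(self):
            self._locks = defaultdict(threading.Lock)
            self._mutex = threading.Lock()

        def acquire(self, namespace, resource_id):
            # One lock per "namespace_id" key, created on first use.
            with self._mutex:
                lock = self._locks["%s_%s" % (namespace, resource_id)]
            lock.acquire()
            return lock

    if __name__ == "__main__":
        manager = ToyResourceManager()
        first = manager.acquire("imageNS", "volume-uuid")
        # Same id, different namespace: does not block.
        second = manager.acquire("domainNS", "volume-uuid")
        first.release()
        second.release()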
>
> Images and volumes
>
> 11. What methods dom.linkBCImage(imgPath, imgUUID) and
> dom.unlinkBCImage(imgUUID) do?
>
Creates/Removes a symlink to a given image.
> 12. What methods dom.activateVolumes(imgUUID, imgVolumes) and
> dom.deactivateImage(imgUUID) do?
>
Can you be a bit more specific in the question? I don't have much to add
over the method names.
>
>
> 13. What method dom.getVolumeLease(imgUUID, volUUID) does?
>
See the related sanlock question above.
> 14. What methods volume.prepare() and volume.teardown() do?
>
> Thanks in advance for your answers!
>
> Shmuel
>
> _______________________________________________
> Devel mailing list
> Devel(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
>
Jenkins.ovirt.org outage
by Eyal Edri
Following up on the previous message, we are still suffering issues on
jenkins.ovirt.org,
and some jobs are failing on network connectivity.
We'll update when the problem is resolved, though it might be resolved only on
Sunday,
due to the unavailability of most people during the weekend.
--
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
Node updates broken
by Fabian Deutsch
Hey,
the node updates on 4.0, 4.0-pre, and 4.0-snapshot are currently broken
because of two things:
1. We need an ovirt-release release in 4.0 to pull in some node fixes
2. We need regular updates of 4.0-snapshot to build the updates
- fabian
--
Fabian Deutsch <fdeutsch(a)redhat.com>
RHEV Hypervisor
Red Hat