
On Tue, Mar 05, 2013 at 02:30:05PM +0800, Zhou Zheng Sheng wrote:
Hi all,
Recently I got an oVirt Jenkins power user account. First, on my own machine, I set up Jenkins on Fedora 18 to run the VDSM functional tests; now I would like to configure the oVirt Jenkins to run those tests.
VDSM manages hosts, so the tests need root privileges, and the system has to be properly configured to provide some dependencies. This cannot be done solely from within Jenkins. I list the dependencies below, in the hope that they can be set up with the help of the server admin.
Dependency packages for building VDSM:
yum -y install git autoconf automake gcc python-devel python-nose libvirt libvirt-python python-pthreading m2crypto python-pep8 pyflakes rpm-build python-rtslib
Some configuration for the environment:
    systemctl stop ksmtuned.service     # mom tests conflict with ksmtuned
    systemctl disable ksmtuned.service
    chmod a+r /boot/initramfs*
    mkdir /rhev
    restorecon -v -R /rhev
    yum -y downgrade pyflakes
visudo:
    jenkins ALL= NOPASSWD: /usr/bin/make install, /usr/bin/yum, /usr/bin/systemctl, /usr/share/vdsm/tests/run_tests.sh
    jenkins ALL= NOPASSWD: /usr/bin/rm -rf /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/builder
    jenkins ALL= NOPASSWD: /usr/bin/rm -rf /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/rpmbuild
    jenkins ALL= NOPASSWD: /usr/bin/rm -f /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/nosetests.xml
On my machine, Jenkins runs as user "jenkins", its home is /home/jenkins/.jenkins, and the job name is vdsmFunctionalTest. We will have to adapt the sudo configuration to fit the server.
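If editing the main sudoers file is undesirable, the same rules could be shipped as a drop-in and syntax-checked before installing (just a sketch; the drop-in file name is my choice, the rule text is the first one from above):

```shell
# Hypothetical drop-in name; adjust to the server's conventions.
cat > /tmp/jenkins-vdsm <<'EOF'
jenkins ALL= NOPASSWD: /usr/bin/make install, /usr/bin/yum, /usr/bin/systemctl, /usr/share/vdsm/tests/run_tests.sh
EOF
visudo -c -f /tmp/jenkins-vdsm      # syntax check before installing
install -m 0440 /tmp/jenkins-vdsm /etc/sudoers.d/jenkins-vdsm
```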
To run the glusterfs storage domain test cases, SELinux must be switched to permissive mode, because the latest glusterfs triggers some violations. If we cannot give up SELinux enforcement for security reasons, I will skip those tests when configuring the Jenkins job. The following gluster configuration is from Deepak and works on my machine. Turn off SELinux:
    vim /etc/selinux/config:    SELINUX=permissive
    setenforce 0
Install glusterfs and setup the brick:
    wget http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0alpha/Fed...
    yum -y install glusterfs
    systemctl start glusterd.service
    mkdir /var/lib/vdsm/myglusterbrick
    chmod 777 /var/lib/vdsm/myglusterbrick
Now start the gluster shell and issue the following commands:
    gluster
    gluster> volume create testvol 192.168.X.X:/var/lib/vdsm/myglusterbrick
    gluster> volume start testvol
    gluster> volume set testvol server.allow-insecure on
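As a quick check (my addition, not from Deepak's notes), the volume state can be confirmed before moving on:

```shell
# Both commands should show testvol as Started, with the one brick above.
gluster volume info testvol
gluster volume status testvol
```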
vim /etc/glusterfs/glusterd.vol and enable the below option:
option rpc-auth-allow-insecure on
So glusterd.vol should look something like this...
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type socket,rdma
        option transport.socket.keepalive-time 10
        option transport.socket.keepalive-interval 2
        option rpc-auth-allow-insecure on
    end-volume
If it succeeds, we should see the glusterfsd process that owns the brick:
    ps -ef | grep testvol
    root 2551 1 0 23:16 ? 00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id testvol.192.168.X.X.var-lib-vdsm-myglusterbrick -p /var/lib/glusterd/vols/testvol/run/192.168.X.X-var-lib-vdsm-myglusterbrick.pid -S /var/run/d5dd385ecebfdfc05ef54fa0b4d28960.socket --brick-name /var/lib/vdsm/myglusterbrick -l /var/log/glusterfs/bricks/var-lib-vdsm-myglusterbrick.log --xlator-option *-posix.glusterd-uuid=11bd6f47-a3ff-4969-a06b-e91e0f91a0e8 --brick-port 49152 --xlator-option testvol-server.listen-port=49152
I will run the following shell script as a build step in the Jenkins job. The script works fine in my Jenkins on Fedora 18, but please comment if you spot problems with running it in the oVirt Jenkins.
    set -e
    set -v
    # BUILD #

    # Make things clean.
    sudo -n rm -f "$(pwd)/nosetests.xml"
    BUILDERDIR="$(pwd)/builder"
    RPMTOPDIR="$(pwd)/rpmbuild"
    sudo -n rm -rf "$BUILDERDIR"
    sudo -n rm -rf "$RPMTOPDIR"
    test -f Makefile && make -k distclean || :
    find . -name '*.pyc' | xargs rm -f
    find . -name '*.pyo' | xargs rm -f
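As a small aside (my sketch, not part of the proposed script): `find | xargs` silently mishandles filenames containing whitespace, while `-print0`/`-0` handles them correctly:

```shell
# Demo: a .pyc name with a space is removed reliably with -print0/-0;
# plain xargs would split it into two bogus arguments.
mkdir -p /tmp/pyc-demo
touch "/tmp/pyc-demo/a b.pyc"
find /tmp/pyc-demo -name '*.pyc' -print0 | xargs -0 rm -f
ls /tmp/pyc-demo    # now empty
```

Probably irrelevant here since the workspace paths contain no spaces, but it costs nothing.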
    ./autogen.sh --prefix="$BUILDERDIR"
    # If the MAKEFLAGS envvar does not yet include a -j option,
    # add -jN where N depends on the number of processors.
    case $MAKEFLAGS in
You know what I feel about unquoted shell variables..
    *-j*) ;;
    *)
        n=$(getconf _NPROCESSORS_ONLN 2> /dev/null)
        test "$n" -gt 0 || n=1
        n=$(expr $n + 1)
        MAKEFLAGS="$MAKEFLAGS -j$n"
        export MAKEFLAGS
        ;;
    esac
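For what it's worth, the processor-count computation can be sanity-checked on its own (the number depends on the host):

```shell
# Same logic as the case block above: CPU count plus one, falling back to 1.
n=$(getconf _NPROCESSORS_ONLN 2> /dev/null)
test "$n" -gt 0 || n=1
echo "would use -j$(expr $n + 1)"
```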
    make
    sudo -n make install
    rm -f *.tar.gz
    make dist
    if [ -n "$AUTOBUILD_COUNTER" ]; then
        EXTRA_RELEASE=".auto$AUTOBUILD_COUNTER"
    else
        NOW=`date +"%s"`
        EXTRA_RELEASE=".$USER$NOW"
    fi
    NOSE_EXCLUDE=.* rpmbuild --nodeps \
        --define "extra_release $EXTRA_RELEASE" \
        --define "_sourcedir `pwd`" \
        --define "_topdir $RPMTOPDIR" \
        -ba --clean vdsm.spec
    # INSTALL #
    joinlines() {
        local lines="$1"
        local sep="$2"
        local line=""
        local joined=""
        # $lines is deliberately left unquoted so each whitespace-separated
        # entry becomes its own word; quoting it would run the loop only once.
        for line in $lines; do
            joined="${joined}${sep}${line}"
        done
        printf "%s" "$joined"
    }
    (
        cd "$RPMTOPDIR"
        packages=$(find . -name "*.rpm" | grep -v "\.src\.rpm")
        sudo -n yum -y remove "vdsm*"
        sudo -n yum -y localinstall $(joinlines "$packages" " ")
    )
Is "joinlines" really needed? I think $() does this on its own.
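A quick standalone demonstration of that point: an unquoted `$(...)` expansion is word-split, so the newlines from `find` already collapse into single spaces:

```shell
# Simulated "find" output, one package per line.
packages=$(printf 'a.rpm\nb.rpm\nc.rpm')
set -- $packages    # intentionally unquoted, as in "yum localinstall $packages"
echo "$# words: $*" # prints: 3 words: a.rpm b.rpm c.rpm
```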
    # START SERVICE #
    sudo -n systemctl start vdsmd.service
    sleep 20
    sudo -n systemctl status vdsmd.service
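The fixed `sleep 20` works but is fragile; if it turns out to be flaky on the oVirt Jenkins, a small polling helper (my sketch, not part of the script above) could replace it:

```shell
# wait_for CMD TRIES: re-run CMD once per second until it succeeds,
# giving up (status 1) after TRIES attempts.
wait_for() {
    local cmd="$1"
    local tries="$2"
    while ! eval "$cmd"; do
        tries=$((tries - 1))
        test "$tries" -gt 0 || return 1
        sleep 1
    done
}
# Usage idea: wait_for "sudo -n systemctl is-active --quiet vdsmd.service" 30
```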
    # RUN TESTS #
    OLDDIR="$(pwd)"
    (
        cd /usr/share/vdsm/tests
        sudo -n ./run_tests.sh --with-xunit --xunit-file "$OLDDIR/nosetests.xml" functional/*.py
    )
    # STOP SERVICE #
    sudo -n systemctl stop vdsmd.service
The above setup may not be acceptable due to security concerns, so I have another plan.
yes, this is quite frightening. `sudo make install` is just begging for an exploit by a lazy attacker.
I set up a virtual machine running in QEMU snapshot mode, then add it to Jenkins as a slave. There is a Jenkins plugin that starts and stops slaves using libvirt, so I configure Jenkins to start the VM slave, build, install and run the VDSM tests on it, then shut it down. The Jenkins slave gets root privileges in the guest OS and does whatever it needs to. Since the VM slave runs in snapshot mode, it restores its original state after shutdown. I also made a small script to switch snapshot mode on and off when I need to manage the configuration and packages of the guest OS.
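For reference, one way to get the same snapshot-mode semantics under libvirt (a sketch; the image paths and domain layout are assumptions, not my actual script) is a throwaway qcow2 overlay:

```shell
# Create a disposable overlay on top of a pristine base image.
qemu-img create -f qcow2 \
    -b /var/lib/libvirt/images/slave-base.qcow2 \
    /var/lib/libvirt/images/slave-overlay.qcow2
# ...point the slave domain's disk at the overlay and run the Jenkins job;
# afterwards, discard every change the guest made:
rm -f /var/lib/libvirt/images/slave-overlay.qcow2
```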
This approach would make it easier to clean up after a botched test (say, a vdsm version with a broken /etc/sudoers.d/vdsm, which renders "sudo" useless).
Which plan do you prefer? Could someone help me set up this environment? (I only have access to Jenkins.)
I vote for the virtualized approach.