Configuring Fedora 18 to run VDSM functional tests

Hi all,

Recently I got an oVirt Jenkins power user account. I have already set up Jenkins on my own Fedora 18 machine to run the VDSM functional tests, and now I would like to configure the oVirt Jenkins to run those tests.

VDSM manages the hosts, so the tests need some root privileges, and the system has to be properly configured to provide some dependencies. This cannot be done solely in Jenkins. I would like to list these dependencies below and see if they can be set up with the help of the server admin.

Dependency packages for building VDSM:

  yum -y install git autoconf automake gcc python-devel python-nose libvirt \
      libvirt-python python-pthreading m2crypto python-pep8 pyflakes \
      rpm-build python-rtslib

Some configuration for the environment:

  systemctl stop ksmtuned.service    # mom tests conflict with ksmtuned
  systemctl disable ksmtuned.service
  chmod a+r /boot/initramfs*
  mkdir /rhev
  restorecon -v -R /rhev
  yum -y downgrade pyflakes

visudo:

  jenkins ALL= NOPASSWD: /usr/bin/make install, /usr/bin/yum, /usr/bin/systemctl, /usr/share/vdsm/tests/run_tests.sh
  jenkins ALL= NOPASSWD: /usr/bin/rm -rf /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/builder
  jenkins ALL= NOPASSWD: /usr/bin/rm -rf /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/rpmbuild
  jenkins ALL= NOPASSWD: /usr/bin/rm -f /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/nosetests.xml

On my machine, Jenkins runs as the user "jenkins", its home is /home/jenkins/.jenkins, and the job name is vdsmFunctionalTest. We would have to adapt the sudo configuration to fit the server.

To run the glusterfs storage domain related test cases, SELinux must be switched to permissive mode, because the latest glusterfs still triggers some violations. If we cannot give up SELinux enforcement for security reasons, I will skip those tests when configuring the job in Jenkins. The following gluster configuration is from Deepak and it works on my machine.

Turn off SELinux:

  vim /etc/selinux/config:  SELINUX=permissive
  setenforce 0

Install glusterfs and set up the brick:

  wget http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0alpha/Fed...
  yum -y install glusterfs
  systemctl start glusterd.service
  mkdir /var/lib/vdsm/myglusterbrick
  chmod 777 /var/lib/vdsm/myglusterbrick

Now start the gluster shell and issue the following commands:

  gluster
  gluster> volume create testvol 192.168.X.X:/var/lib/vdsm/myglusterbrick
  gluster> volume start testvol
  gluster> volume set testvol server.allow-insecure on

Edit /etc/glusterfs/glusterd.vol and enable the option below:

  option rpc-auth-allow-insecure on

So glusterd.vol should look something like this:

  volume management
      type mgmt/glusterd
      option working-directory /var/lib/glusterd
      option transport-type socket,rdma
      option transport.socket.keepalive-time 10
      option transport.socket.keepalive-interval 2
      option rpc-auth-allow-insecure on
  end-volume

If this is successful, we should be able to see the glusterfsd process that owns the brick:

  ps -ef | grep testvol
  root 2551 1 0 23:16 ? 00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id testvol.192.168.X.X.var-lib-vdsm-myglusterbrick -p /var/lib/glusterd/vols/testvol/run/192.168.X.X-var-lib-vdsm-myglusterbrick.pid -S /var/run/d5dd385ecebfdfc05ef54fa0b4d28960.socket --brick-name /var/lib/vdsm/myglusterbrick -l /var/log/glusterfs/bricks/var-lib-vdsm-myglusterbrick.log --xlator-option *-posix.glusterd-uuid=11bd6f47-a3ff-4969-a06b-e91e0f91a0e8 --brick-port 49152 --xlator-option testvol-server.listen-port=49152

I will run the following shell script as a build step in the Jenkins job. The script works fine in my Jenkins on Fedora 18, but please comment if you spot problems with running it in the oVirt Jenkins.

  set -e
  set -v

  # BUILD #
  # Make things clean.
  sudo -n rm -f "$(pwd)/nosetests.xml"

  BUILDERDIR="$(pwd)/builder"
  RPMTOPDIR="$(pwd)/rpmbuild"

  sudo -n rm -rf "$BUILDERDIR"
  sudo -n rm -rf "$RPMTOPDIR"

  test -f Makefile && make -k distclean || :
  find . -name '*.pyc' | xargs rm -f
  find . -name '*.pyo' | xargs rm -f

  ./autogen.sh --prefix="$BUILDERDIR"

  # If the MAKEFLAGS envvar does not yet include a -j option,
  # add -jN where N depends on the number of processors.
  case $MAKEFLAGS in
    *-j*) ;;
    *)
      n=$(getconf _NPROCESSORS_ONLN 2> /dev/null)
      test "$n" -gt 0 || n=1
      n=$(expr $n + 1)
      MAKEFLAGS="$MAKEFLAGS -j$n"
      export MAKEFLAGS
      ;;
  esac

  make
  sudo -n make install

  rm -f *.tar.gz
  make dist

  if [ -n "$AUTOBUILD_COUNTER" ]; then
    EXTRA_RELEASE=".auto$AUTOBUILD_COUNTER"
  else
    NOW=`date +"%s"`
    EXTRA_RELEASE=".$USER$NOW"
  fi

  NOSE_EXCLUDE=.* rpmbuild --nodeps \
    --define "extra_release $EXTRA_RELEASE" \
    --define "_sourcedir `pwd`" \
    --define "_topdir $RPMTOPDIR" \
    -ba --clean vdsm.spec

  # INSTALL #
  joinlines() {
    local lines="$1"
    local line=""
    local sep="$2"
    local joined=""
    for line in "$lines"; do
      joined="${joined}${sep}${line}"
    done
    printf "$joined"
  }

  (
    cd "$RPMTOPDIR"
    packages=$(find . -name "*.rpm" | grep -v "\.src\.rpm")
    sudo -n yum -y remove "vdsm*"
    sudo -n yum -y localinstall $(joinlines "$packages" " ")
  )

  # START SERVICE #
  sudo -n systemctl start vdsmd.service
  sleep 20
  sudo -n systemctl status vdsmd.service

  # RUN TESTS #
  OLDDIR="$(pwd)"
  (
    cd /usr/share/vdsm/tests
    sudo -n ./run_tests.sh --with-xunit --xunit-file "$OLDDIR/nosetests.xml" functional/*.py
  )

  # STOP SERVICE #
  sudo -n systemctl stop vdsmd.service

The above setup may not be acceptable for security reasons, so I have another plan: I set up a virtual machine running in QEMU snapshot mode and add it to Jenkins as a slave. There is a Jenkins plugin that starts and stops slaves using libvirt, so I configure Jenkins to start the VM slave, build, install and run the VDSM tests there, then shut it down. The Jenkins slave gets root privileges inside the guest OS and can do whatever it needs to. Since the VM slave runs in snapshot mode, it returns to its original state after shutdown. I also have a small script to switch snapshot mode on and off for when I need to manage the configuration and packages of the guest OS.

Which plan do you prefer? Could someone help me set up this environment? (I only have access to Jenkins.)

--
Thanks and best regards!

Zhou Zheng Sheng / 周征晟
E-mail: zhshzhou@linux.vnet.ibm.com
Telephone: 86-10-82454397
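For what it's worth, here is a rough sketch of what such an on/off switch could look like if the throwaway state were kept in a qcow2 overlay on top of a pristine base image rather than via QEMU's -snapshot flag. The paths, file names and mode names below are made up for illustration; this is not the author's actual script:

  #!/bin/bash
  # Hypothetical toggle between throwaway (test) and persistent (maintenance) mode.
  BASE=/var/lib/libvirt/images/vdsm-slave-base.img
  OVERLAY=/var/lib/libvirt/images/vdsm-slave.qcow2

  case "$1" in
      snapshot)
          # Fresh overlay backed by the pristine base image; all writes made
          # during a test run land in the overlay and are discarded next time.
          rm -f "$OVERLAY"
          qemu-img create -f qcow2 -b "$BASE" "$OVERLAY"
          ;;
      persistent)
          # Fold the overlay back into the base image so configuration and
          # package changes survive, then drop the overlay.
          qemu-img commit "$OVERLAY"
          rm -f "$OVERLAY"
          ;;
      *)
          echo "usage: $0 {snapshot|persistent}" >&2
          exit 1
          ;;
  esac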

On Tuesday, 2013-03-05 at 14:30 +0800, Zhou Zheng Sheng wrote:
I set up a virtual machine running in QEMU snapshot mode and add it to Jenkins as a slave. There is a Jenkins plugin that starts and stops slaves using libvirt, so I configure Jenkins to start the VM slave, build, install and run the VDSM tests there, then shut it down. The Jenkins slave gets root privileges inside the guest OS and can do whatever it needs to. Since the VM slave runs in snapshot mode, it returns to its original state after shutdown. I also have a small script to switch snapshot mode on and off for when I need to manage the configuration and packages of the guest OS.

Which plan do you prefer? Could someone help me set up this environment? (I only have access to Jenkins.)
Hey,

with oVirt Node we've got a similar problem of how to do the functional tests. That's why we came up with igor [0]. Igord integrates nicely with Jenkins and does all the VM management for you. What igor does is create and set up an OS (by passing kernel arguments - currently only oVirt Node is tested) and run a testsuite on the OS itself - so maybe this is interesting for you too! You could even run vdsm on a Node build (a Node ISO build is triggered by every vdsm Fedora package build).

Currently igor uses libvirt to run the VMs and requires cobbler to set up the VMs, but we might drop this dependency.

This document describes how to set up an igor env on Fedora 18 [1].

Besides that, we are also looking at how we can integrate igor into the oVirt infrastructure.

Greetings
fabian

--
[0] https://gitorious.org/ovirt/igord
[1] https://etherpad.mozilla.org/VXrJkQBNhq

Thanks Fabian. I had a look at igor. It's very powerful. From what I understand, Jenkins invokes the igor client to upload the test plan to igord, igord starts the VMs and lets cobbler set up the guest OS, and then the igor guest agent inside the guest OS downloads the tests from igord and runs them. The guest agent uploads the test result after each test step. The communication is over HTTP; igord implements a REST-like web interface. Extra parameters can be passed to the guest agent by appending them to the kernel arguments of the guest OS.

If I want to set up several VMs and run a cluster test plan on them, igor will be very helpful. For the current VDSM functional tests, though, I think igor is a little too heavy. For now we just run the tests on a single node, and I think it's easier to do this using the libvirt-slave plugin for Jenkins. It lets Jenkins start a VM as a slave and shut it down after use (a manual virsh equivalent of that cycle is sketched below, after the quoted text), and the slave can sit behind NAT on the default libvirt network. I have already configured this environment and tested it in my lab; it seems good enough for this task.

We have some proposals for a cluster test plan in our team. I will learn more about igor and see if we can leverage it. It will be very useful if igor is integrated into the oVirt infrastructure; we could re-use the service for more high-level tests.

On 03/05/2013 18:20, Fabian Deutsch wrote:
On Tuesday, 2013-03-05 at 14:30 +0800, Zhou Zheng Sheng wrote:
I set up a virtual machine running in QEMU snapshot mode and add it to Jenkins as a slave. There is a Jenkins plugin that starts and stops slaves using libvirt, so I configure Jenkins to start the VM slave, build, install and run the VDSM tests there, then shut it down. The Jenkins slave gets root privileges inside the guest OS and can do whatever it needs to. Since the VM slave runs in snapshot mode, it returns to its original state after shutdown. I also have a small script to switch snapshot mode on and off for when I need to manage the configuration and packages of the guest OS.
Which plan do you prefer? Could someone help me set up this environment? (I only have access to Jenkins.)

Hey,
with oVirt Node we've got a similar problem of how to do the functional tests. That's why we came up with igor [0]. Igord integrates nicely with Jenkins and does all the VM management for you. What igor does is create and set up an OS (by passing kernel arguments - currently only oVirt Node is tested) and run a testsuite on the OS itself - so maybe this is interesting for you too! You could even run vdsm on a Node build (a Node ISO build is triggered by every vdsm Fedora package build).
Currently igor uses libvirt to run the VMs and requires cobbler to set up the VMs, but we might drop this dependency. This document describes how to set up an igor env on Fedora 18 [1].
Besides that, we are also looking at how we can integrate igor into the oVirt infrastructure.
Greetings fabian
--
[0] https://gitorious.org/ovirt/igord
[1] https://etherpad.mozilla.org/VXrJkQBNhq
--
Thanks and best regards!

Zhou Zheng Sheng / 周征晟
E-mail: zhshzhou@linux.vnet.ibm.com
Telephone: 86-10-82454397
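As mentioned above, here is a rough manual equivalent of the start/stop cycle the libvirt-slave plugin automates, for anyone who wants to try it by hand. The connection URI is the usual local system one and the domain name vdsm-slave is made up for illustration:

  # List the domains the plugin could manage, then drive one start/stop cycle.
  virsh -c qemu:///system list --all
  virsh -c qemu:///system start vdsm-slave
  # ... Jenkins would run the build and the tests on the slave here ...
  virsh -c qemu:///system shutdown vdsm-slave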

On Tue, Mar 05, 2013 at 02:30:05PM +0800, Zhou Zheng Sheng wrote:
Hi all,
Recently I got an oVirt Jenkins power user account. I have already set up Jenkins on my own Fedora 18 machine to run the VDSM functional tests, and now I would like to configure the oVirt Jenkins to run those tests.
VDSM manages the hosts, so the tests need some root privileges, and the system has to be properly configured to provide some dependencies. This cannot be done solely in Jenkins. I would like to list these dependencies below and see if they can be set up with the help of the server admin.
Dependency packages for building VDSM:
yum -y install git autoconf automake gcc python-devel python-nose libvirt libvirt-python python-pthreading m2crypto python-pep8 pyflakes rpm-build python-rtslib
Some configuration for the environment:
systemctl stop ksmtuned.service    # mom tests conflict with ksmtuned
systemctl disable ksmtuned.service
chmod a+r /boot/initramfs*
mkdir /rhev
restorecon -v -R /rhev
yum -y downgrade pyflakes
visudo:
jenkins ALL= NOPASSWD: /usr/bin/make install, /usr/bin/yum, /usr/bin/systemctl, /usr/share/vdsm/tests/run_tests.sh
jenkins ALL= NOPASSWD: /usr/bin/rm -rf /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/builder
jenkins ALL= NOPASSWD: /usr/bin/rm -rf /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/rpmbuild
jenkins ALL= NOPASSWD: /usr/bin/rm -f /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/nosetests.xml
On my machine, Jenkins runs as the user "jenkins", its home is /home/jenkins/.jenkins, and the job name is vdsmFunctionalTest. We would have to adapt the sudo configuration to fit the server.
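On the server, one option might be to keep these rules in a sudoers drop-in instead of editing /etc/sudoers directly. A sketch, with a file name chosen purely for illustration:

  # /etc/sudoers.d/jenkins-vdsm  (hypothetical file name)
  # Limit the jenkins user to exactly the commands the job needs.
  jenkins ALL= NOPASSWD: /usr/bin/make install, /usr/bin/yum, /usr/bin/systemctl, /usr/share/vdsm/tests/run_tests.sh
  jenkins ALL= NOPASSWD: /usr/bin/rm -rf /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/builder
  jenkins ALL= NOPASSWD: /usr/bin/rm -rf /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/rpmbuild
  jenkins ALL= NOPASSWD: /usr/bin/rm -f /home/jenkins/.jenkins/jobs/vdsmFunctionalTest/workspace/nosetests.xml

The drop-in can be checked before it takes effect with "visudo -c -f /etc/sudoers.d/jenkins-vdsm".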
To run the glusterfs storage domain related test cases, SELinux must be switched to permissive mode, because the latest glusterfs still triggers some violations. If we cannot give up SELinux enforcement for security reasons, I will skip those tests when configuring the job in Jenkins. The following gluster configuration is from Deepak and it works on my machine. Turn off SELinux:
vim /etc/selinux/config:  SELINUX=permissive
setenforce 0
Install glusterfs and setup the brick:
wget http://download.gluster.org/pub/gluster/glusterfs/qa-releases/3.4.0alpha/Fed...
yum -y install glusterfs
systemctl start glusterd.service
mkdir /var/lib/vdsm/myglusterbrick
chmod 777 /var/lib/vdsm/myglusterbrick
Now start the gluster shell and issue the following commands:
gluster
gluster> volume create testvol 192.168.X.X:/var/lib/vdsm/myglusterbrick
gluster> volume start testvol
gluster> volume set testvol server.allow-insecure on
vim /etc/glusterfs/glusterd.vol and enable the below option:
option rpc-auth-allow-insecure on
So glusterd.vol should look something like this:
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option rpc-auth-allow-insecure on
end-volume
If this is successful, we should be able to see the glusterfsd process that owns the brick:
ps -ef | grep testvol
root 2551 1 0 23:16 ? 00:00:00 /usr/sbin/glusterfsd -s localhost --volfile-id testvol.192.168.X.X.var-lib-vdsm-myglusterbrick -p /var/lib/glusterd/vols/testvol/run/192.168.X.X-var-lib-vdsm-myglusterbrick.pid -S /var/run/d5dd385ecebfdfc05ef54fa0b4d28960.socket --brick-name /var/lib/vdsm/myglusterbrick -l /var/log/glusterfs/bricks/var-lib-vdsm-myglusterbrick.log --xlator-option *-posix.glusterd-uuid=11bd6f47-a3ff-4969-a06b-e91e0f91a0e8 --brick-port 49152 --xlator-option testvol-server.listen-port=49152
I will run the following shell script as a build step in the Jenkins job. The script works fine in my Jenkins on Fedora 18, but please comment if you spot problems with running it in the oVirt Jenkins.
set -e
set -v
# BUILD #
# Make things clean.
sudo -n rm -f "$(pwd)/nosetests.xml"
BUILDERDIR="$(pwd)/builder"
RPMTOPDIR="$(pwd)/rpmbuild"
sudo -n rm -rf "$BUILDERDIR"
sudo -n rm -rf "$RPMTOPDIR"
test -f Makefile && make -k distclean || :
find . -name '*.pyc' | xargs rm -f
find . -name '*.pyo' | xargs rm -f
./autogen.sh --prefix="$BUILDERDIR"
# If the MAKEFLAGS envvar does not yet include a -j option,
# add -jN where N depends on the number of processors.
case $MAKEFLAGS in
You know what I feel about unquoted shell variables..
  *-j*) ;;
  *)
    n=$(getconf _NPROCESSORS_ONLN 2> /dev/null)
    test "$n" -gt 0 || n=1
    n=$(expr $n + 1)
    MAKEFLAGS="$MAKEFLAGS -j$n"
    export MAKEFLAGS
    ;;
esac
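For reference, a quoted variant in the spirit of that remark (strictly, the word of a case statement is not subject to field splitting, so the quoting here is purely defensive style; the logic is unchanged):

  case "$MAKEFLAGS" in
    *-j*) ;;
    *)
      n=$(getconf _NPROCESSORS_ONLN 2> /dev/null)
      test "$n" -gt 0 || n=1
      n=$(expr "$n" + 1)
      MAKEFLAGS="$MAKEFLAGS -j$n"
      export MAKEFLAGS
      ;;
  esac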
make
sudo -n make install
rm -f *.tar.gz
make dist
if [ -n "$AUTOBUILD_COUNTER" ]; then
  EXTRA_RELEASE=".auto$AUTOBUILD_COUNTER"
else
  NOW=`date +"%s"`
  EXTRA_RELEASE=".$USER$NOW"
fi
NOSE_EXCLUDE=.* rpmbuild --nodeps \
  --define "extra_release $EXTRA_RELEASE" \
  --define "_sourcedir `pwd`" \
  --define "_topdir $RPMTOPDIR" \
  -ba --clean vdsm.spec
# INSTALL #
joinlines() {
  local lines="$1"
  local line=""
  local sep="$2"
  local joined=""
  for line in "$lines"; do
    joined="${joined}${sep}${line}"
  done
  printf "$joined"
}
(
  cd "$RPMTOPDIR"
  packages=$(find . -name "*.rpm" | grep -v "\.src\.rpm")
  sudo -n yum -y remove "vdsm*"
  sudo -n yum -y localinstall $(joinlines "$packages" " ")
)
Is "joinlines" really needed? I think $() does this on its own.
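That seems right: $packages is expanded unquoted, so word splitting already turns the newline-separated find output into separate arguments. A sketch of the install step without the helper:

  (
    cd "$RPMTOPDIR"
    packages=$(find . -name "*.rpm" | grep -v "\.src\.rpm")
    sudo -n yum -y remove "vdsm*"
    # Unquoted on purpose: the shell splits the list into one argument per file.
    sudo -n yum -y localinstall $packages
  )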
# START SERVICE #
sudo -n systemctl start vdsmd.service
sleep 20
sudo -n systemctl status vdsmd.service
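One small suggestion, not part of the original script: the fixed sleep could be replaced by polling the unit state, for example:

  sudo -n systemctl start vdsmd.service
  # Wait up to about a minute for vdsmd to report active instead of sleeping blindly.
  for i in $(seq 1 30); do
      sudo -n systemctl is-active vdsmd.service && break
      sleep 2
  done
  sudo -n systemctl status vdsmd.service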
# RUN TESTS #
OLDDIR="$(pwd)"
(
  cd /usr/share/vdsm/tests
  sudo -n ./run_tests.sh --with-xunit --xunit-file "$OLDDIR/nosetests.xml" functional/*.py
)
# STOP SERVICE #
sudo -n systemctl stop vdsmd.service
The above setup may not be acceptable for security reasons, so I have another plan.
yes, this is quite frightening. `sudo make install` is just begging for an exploit by a lazy attacker.
I set up a virtual machine running in QEMU snapshot mode and add it to Jenkins as a slave. There is a Jenkins plugin that starts and stops slaves using libvirt, so I configure Jenkins to start the VM slave, build, install and run the VDSM tests there, then shut it down. The Jenkins slave gets root privileges inside the guest OS and can do whatever it needs to. Since the VM slave runs in snapshot mode, it returns to its original state after shutdown. I also have a small script to switch snapshot mode on and off for when I need to manage the configuration and packages of the guest OS.
This approach would make it easier to clean up after a botched test (say, a vdsm version with a broken /etc/sudoers.d/vdsm, which renders "sudo" useless).
Which plan do you prefer? Could someone help me set up this environment? (I only have access to Jenkins.)
I vote for the virtualized approach.