Gerrit emails - Returned mail: see transcript for details
by Martin Sivak
Hi,
there might be something bad in our SPF config. Please check.
Martin
---------- Forwarded message ----------
From: Mail Delivery Subsystem <MAILER-DAEMON(a)gerrit.ovirt.org>
Date: Mon, Oct 31, 2016 at 2:05 PM
Subject: Returned mail: see transcript for details
To: msivak(a)redhat.com
The original message was received at Mon, 31 Oct 2016 09:05:24 -0400
from gerrit.ovirt.org [127.0.0.1]
----- The following addresses had permanent fatal errors -----
<laravot(a)redhat.com>
     (reason: 551 5.7.1 SPF fail: 'gerrit.ovirt.org'[107.22.212.69],
'redhat.com'; REJECT)
<akrejcir(a)redhat.com>
(reason: 551 5.7.1 SPF fail: 'gerrit.ovirt.org'[107.22.212.69],
'redhat.com'; REJECT)
<ahadas(a)redhat.com>
(reason: 551 5.7.1 SPF fail: 'gerrit.ovirt.org'[107.22.212.69],
'redhat.com'; REJECT)
----- Transcript of session follows -----
... while talking to mx1.redhat.com.:
>>> DATA
<<< 551 5.7.1 SPF fail: 'gerrit.ovirt.org'[107.22.212.69], 'redhat.com'; REJECT
550 5.1.1 <ahadas(a)redhat.com>... User unknown
<<< 551 5.7.1 SPF fail: 'gerrit.ovirt.org'[107.22.212.69], 'redhat.com'; REJECT
550 5.1.1 <akrejcir(a)redhat.com>... User unknown
<<< 551 5.7.1 SPF fail: 'gerrit.ovirt.org'[107.22.212.69], 'redhat.com'; REJECT
550 5.1.1 <laravot(a)redhat.com>... User unknown
<<< 554 5.5.1 Error: no valid recipients
Final-Recipient: RFC822; laravot(a)redhat.com
Action: failed
Status: 5.7.1
Remote-MTA: DNS; mx1.redhat.com
Diagnostic-Code: SMTP; 551 5.7.1 SPF fail:
'gerrit.ovirt.org'[107.22.212.69], 'redhat.com'; REJECT
Last-Attempt-Date: Mon, 31 Oct 2016 09:05:25 -0400
Final-Recipient: RFC822; akrejcir(a)redhat.com
Action: failed
Status: 5.7.1
Remote-MTA: DNS; mx1.redhat.com
Diagnostic-Code: SMTP; 551 5.7.1 SPF fail:
'gerrit.ovirt.org'[107.22.212.69], 'redhat.com'; REJECT
Last-Attempt-Date: Mon, 31 Oct 2016 09:05:25 -0400
Final-Recipient: RFC822; ahadas(a)redhat.com
Action: failed
Status: 5.7.1
Remote-MTA: DNS; mx1.redhat.com
Diagnostic-Code: SMTP; 551 5.7.1 SPF fail:
'gerrit.ovirt.org'[107.22.212.69], 'redhat.com'; REJECT
Last-Attempt-Date: Mon, 31 Oct 2016 09:05:25 -0400
---------- Forwarded message ----------
From: msivak(a)redhat.com
To: Arik Hadas <ahadas(a)redhat.com>
Cc: Andrej Krejcir <akrejcir(a)redhat.com>, Liron Aravot <laravot(a)redhat.com>
Date: Mon, 31 Oct 2016 09:05:24 -0400
Subject: Change in ovirt-engine[master]: core: Update OVF in storage
domains for HE VM edit
Hello Arik Hadas,
I'd like you to do a code review. Please visit
https://gerrit.ovirt.org/65085
to review the following change.
Change subject: core: Update OVF in storage domains for HE VM edit
......................................................................
core: Update OVF in storage domains for HE VM edit
Updating hosted engine VM saves OVF to storage domain as well as
to the DB.
This patch is complementary to: https://gerrit.ovirt.org/#/c/51842
Change-Id: Ib02d41136458677399428c62c8e475b4cb2fcb79
Bug-Url: https://bugzilla.redhat.com/1372000
Signed-off-by: Andrej Krejcir <akrejcir(a)redhat.com>
---
M backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/UpdateVmCommand.java
M backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/storage/ovfstore/OvfDataUpdater.java
2 files changed, 22 insertions(+), 22 deletions(-)
git pull ssh://gerrit.ovirt.org:29418/ovirt-engine refs/changes/85/65085/3
diff --git a/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/UpdateVmCommand.java b/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/UpdateVmCommand.java
index 3c0bada..9c64003 100644
--- a/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/UpdateVmCommand.java
+++ b/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/UpdateVmCommand.java
@@ -22,6 +22,7 @@
import org.ovirt.engine.core.bll.quota.QuotaConsumptionParameter;
import org.ovirt.engine.core.bll.quota.QuotaSanityParameter;
import org.ovirt.engine.core.bll.quota.QuotaVdsDependent;
+import org.ovirt.engine.core.bll.storage.ovfstore.OvfDataUpdater;
import org.ovirt.engine.core.bll.utils.IconUtils;
import org.ovirt.engine.core.bll.utils.PermissionSubject;
import org.ovirt.engine.core.bll.validator.IconValidator;
@@ -37,7 +38,6 @@
import org.ovirt.engine.core.common.action.LockProperties;
import org.ovirt.engine.core.common.action.LockProperties.Scope;
import org.ovirt.engine.core.common.action.PlugAction;
-import org.ovirt.engine.core.common.action.ProcessOvfUpdateForStoragePoolParameters;
import org.ovirt.engine.core.common.action.RngDeviceParameters;
import org.ovirt.engine.core.common.action.UpdateVmVersionParameters;
import org.ovirt.engine.core.common.action.VdcActionType;
@@ -178,7 +178,17 @@
// Trigger OVF update for hosted engine VM only
if (getVm().isHostedEngine()) {
- registerRollbackHandler(new HostedEngineEditNotifier(getVm()));
+ registerRollbackHandler(new TransactionCompletionListener() {
+ @Override
+ public void onSuccess() {
+ OvfDataUpdater.getInstance().triggerNow();
+ }
+
+ @Override
+ public void onRollback() {
+ // No notification is needed
+ }
+ });
}
// save user selected value for hotplug before overriding with db values (when updating running vm)
@@ -1150,25 +1160,6 @@
public VmValidator createVmValidator(VM vm) {
return new VmValidator(vm);
- }
-
- private static class HostedEngineEditNotifier implements TransactionCompletionListener {
- final VM vm;
-
- public HostedEngineEditNotifier(VM vm) {
- this.vm = vm;
- }
-
- @Override
- public void onSuccess() {
- Backend.getInstance().runInternalAction(VdcActionType.ProcessOvfUpdateForStoragePool,
- new ProcessOvfUpdateForStoragePoolParameters(vm.getStoragePoolId()));
- }
-
- @Override
- public void onRollback() {
- // No notification is needed
- }
}
protected InClusterUpgradeValidator getClusterUpgradeValidator() {
diff --git a/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/storage/ovfstore/OvfDataUpdater.java b/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/storage/ovfstore/OvfDataUpdater.java
index 6dc3881..788e046 100644
--- a/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/storage/ovfstore/OvfDataUpdater.java
+++ b/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/storage/ovfstore/OvfDataUpdater.java
@@ -27,6 +27,8 @@
private static final Logger log = LoggerFactory.getLogger(OvfDataUpdater.class);
private static final OvfDataUpdater INSTANCE = new OvfDataUpdater();
+ private volatile String updateTimerJobId;
+
private OvfDataUpdater() {
}
@@ -40,7 +42,7 @@
public void initOvfDataUpdater() {
SchedulerUtil scheduler = Injector.get(SchedulerUtilQuartzImpl.class);
- scheduler.scheduleAFixedDelayJob(this, "ovfUpdateTimer", new Class[] {},
+ updateTimerJobId = scheduler.scheduleAFixedDelayJob(this, "ovfUpdateTimer", new Class[] {},
new Object[] {}, Config.<Integer> getValue(ConfigValues.OvfUpdateIntervalInMinutes),
Config.<Integer> getValue(ConfigValues.OvfUpdateIntervalInMinutes), TimeUnit.MINUTES);
log.info("Initialization of OvfDataUpdater completed successfully.");
@@ -84,4 +86,11 @@
}
}
}
+
+ public void triggerNow() {
+ if (updateTimerJobId != null) {
+ SchedulerUtil scheduler = Injector.get(SchedulerUtilQuartzImpl.class);
+ scheduler.triggerJob(updateTimerJobId);
+ }
+ }
}
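The change above keeps the id returned by scheduleAFixedDelayJob in updateTimerJobId so that triggerNow() can fire the same recurring Quartz job outside its fixed schedule. The same idea as a standalone Python sketch (hypothetical names only; this is not the oVirt SchedulerUtil API, and trigger_now() simply runs the action directly):

```python
import sched
import threading
import time

class PeriodicJob:
    """Sketch of the pattern in the patch above: remember the handle
    returned when the periodic job is scheduled, so the same job can
    also be fired on demand."""

    def __init__(self, interval_seconds, action):
        self.interval = interval_seconds
        self.action = action
        self._scheduler = sched.scheduler(time.monotonic, time.sleep)
        self.job_id = None  # filled in by start(), like updateTimerJobId

    def start(self):
        # Schedule the recurring run and keep the handle.
        self.job_id = self._scheduler.enter(self.interval, 1, self._run)
        threading.Thread(target=self._scheduler.run, daemon=True).start()

    def _run(self):
        self.action()
        self.job_id = self._scheduler.enter(self.interval, 1, self._run)

    def trigger_now(self):
        # Analogue of triggerJob(updateTimerJobId): run immediately,
        # without disturbing the periodic schedule.
        if self.job_id is not None:
            self.action()

job = PeriodicJob(60, lambda: print("OVF update triggered"))
job.start()
job.trigger_now()  # fires immediately instead of waiting a minute
```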
--
To view, visit https://gerrit.ovirt.org/65085
To unsubscribe, visit https://gerrit.ovirt.org/settings
Gerrit-MessageType: newchange
Gerrit-Change-Id: Ib02d41136458677399428c62c8e475b4cb2fcb79
Gerrit-PatchSet: 3
Gerrit-Project: ovirt-engine
Gerrit-Branch: master
Gerrit-Owner: Andrej Krejcir <akrejcir(a)redhat.com>
Gerrit-Reviewer: Andrej Krejcir <akrejcir(a)redhat.com>
Gerrit-Reviewer: Arik Hadas <ahadas(a)redhat.com>
Gerrit-Reviewer: Jenkins CI
Gerrit-Reviewer: Jenny Tokar <jtokar(a)redhat.com>
Gerrit-Reviewer: Liron Aravot <laravot(a)redhat.com>
Gerrit-Reviewer: Martin Sivák <msivak(a)redhat.com>
Gerrit-Reviewer: Phillip Bailey <phbailey(a)redhat.com>
Gerrit-Reviewer: Roman Mohr <rmohr(a)redhat.com>
Gerrit-Reviewer: Yanir Quinn <yquinn(a)redhat.com>
Gerrit-Reviewer: gerrit-hooks <automation(a)ovirt.org>
[JIRA] (OVIRT-799) Gerrit emails - Returned mail: see transcript for details
by Evgheni Dereveanchin (oVirt JIRA)
Evgheni Dereveanchin created OVIRT-799:
------------------------------------------
Summary: Gerrit emails - Returned mail: see transcript for details
Key: OVIRT-799
URL: https://ovirt-jira.atlassian.net/browse/OVIRT-799
Project: oVirt - virtualization made easy
Issue Type: Bug
Reporter: Evgheni Dereveanchin
Assignee: infra
Mails coming from Gerrit are rejected by Red Hat due to the change in SPF policy enforcement:
<<< 551 5.7.1 SPF fail: 'gerrit.ovirt.org'[107.22.212.69], 'redhat.com'; REJECT
This likely happens because Gerrit puts the patch author's address in the From: field, so mail claiming to come from redhat.com is actually sent by gerrit.ovirt.org, a host that redhat.com's SPF policy does not authorize, and the receiving MTA rejects it. We should use a sender address in the ovirt.org domain for all mail sent by Gerrit to avoid this.
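The rejection means redhat.com's SPF policy does not list gerrit.ovirt.org's address (107.22.212.69) as a permitted sender for redhat.com mail. A toy sketch of how a receiving MTA evaluates such a policy (simplified to ip4 mechanisms only; the record shown is illustrative, not redhat.com's real one):

```python
import ipaddress

# Outcome for the terminating "all" mechanism, keyed by its qualifier.
_ALL = {"-all": "fail", "~all": "softfail", "?all": "neutral",
        "all": "pass", "+all": "pass"}

def spf_check(record, sender_ip):
    """Toy SPF evaluation: only ip4 mechanisms and a trailing 'all'
    are handled (no include/a/mx/redirect, unlike a real resolver)."""
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split()[1:]:  # skip the "v=spf1" version tag
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return "pass"
        elif term in _ALL:
            return _ALL[term]
    return "neutral"

# Hypothetical policy for illustration -- not redhat.com's real record.
record = "v=spf1 ip4:209.132.183.0/24 -all"
print(spf_check(record, "209.132.183.28"))  # -> pass (listed network)
print(spf_check(record, "107.22.212.69"))   # -> fail (gerrit.ovirt.org)
```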
--
This message was sent by Atlassian JIRA
(v1000.482.3#100017)
[JIRA] (OVIRT-776) Re: Build failed in Jenkins: ovirt_4.0_he-system-tests #402
by Evgheni Dereveanchin (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-776?page=com.atlassian.jira... ]
Evgheni Dereveanchin reassigned OVIRT-776:
------------------------------------------
Assignee: Evgheni Dereveanchin (was: infra)
> Re: Build failed in Jenkins: ovirt_4.0_he-system-tests #402
> -----------------------------------------------------------
>
> Key: OVIRT-776
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-776
> Project: oVirt - virtualization made easy
> Issue Type: By-EMAIL
> Reporter: sbonazzo
> Assignee: Evgheni Dereveanchin
>
> Looks like a network issue: scp fails with a timeout on a socket.
> Can you please have a look?
> On Mon, Oct 17, 2016 at 10:22 PM, <jenkins(a)jenkins.phx.ovirt.org> wrote:
> > See <http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/402/changes>
> >
> > Changes:
> >
> > [Gal Ben Haim] Fix check-patch to collect logs on failure
> >
> > ------------------------------------------
> > [...truncated 1897 lines...]
> > ## took 2809 seconds
> > ## rc = 1
> > ##########################################################
> > ##! ERROR vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
> > ##! Last 20 log enties: logs/mocker-fedora-23-x86_64.
> > fc23.he_basic_suite_4.0.sh/he_basic_suite_4.0.sh.log
> > ##!
> > + true
> > + env_cleanup
> > + echo '#########################'
> > #########################
> > + local res=0
> > + local uuid
> > + echo '======== Cleaning up'
> > ======== Cleaning up
> > + [[ -e <http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/
> > ovirt-system-tests/deployment-he_basic_suite_4.0> ]]
> > + echo '----------- Cleaning with lago'
> > ----------- Cleaning with lago
> > + lago --workdir <http://jenkins.ovirt.org/job/
> > ovirt_4.0_he-system-tests/ws/ovirt-system-tests/deployment-
> > he_basic_suite_4.0> destroy --yes --all-prefixes
> > + echo '----------- Cleaning with lago done'
> > ----------- Cleaning with lago done
> > + [[ 0 != \0 ]]
> > + echo '======== Cleanup done'
> > ======== Cleanup done
> > + exit 0
> > Took 2642 seconds
> > ===================================
> > ##!
> > ##! ERROR ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > ##!########################################################
> > ##########################################################
> > Build step 'Execute shell' marked build as failure
> > Performing Post build task...
> > Match found for :.* : True
> > Logical operation result is TRUE
> > Running script : #!/bin/bash -xe
> > echo 'shell_scripts/system_tests.collect_logs.sh'
> >
> > #
> > # Required jjb vars:
> > # version
> > #
> > VERSION=4.0
> > SUITE_TYPE=
> >
> > WORKSPACE="$PWD"
> > OVIRT_SUITE="$SUITE_TYPE_suite_$VERSION"
> > TESTS_LOGS="$WORKSPACE/ovirt-system-tests/exported-artifacts"
> >
> > rm -rf "$WORKSPACE/exported-artifacts"
> > mkdir -p "$WORKSPACE/exported-artifacts"
> >
> > if [[ -d "$TESTS_LOGS" ]]; then
> > mv "$TESTS_LOGS/"* "$WORKSPACE/exported-artifacts/"
> > fi
> >
> > [ovirt_4.0_he-system-tests] $ /bin/bash -xe /tmp/
> > hudson4967817357021364058.sh
> > + echo shell_scripts/system_tests.collect_logs.sh
> > shell_scripts/system_tests.collect_logs.sh
> > + VERSION=4.0
> > + SUITE_TYPE=
> > + WORKSPACE=<http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/>
> > + OVIRT_SUITE=4.0
> > + TESTS_LOGS=<http://jenkins.ovirt.org/job/ovirt_4.0_he-
> > system-tests/ws/ovirt-system-tests/exported-artifacts>
> > + rm -rf <http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/402/
> > artifact/exported-artifacts>
> > + mkdir -p <http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/402/
> > artifact/exported-artifacts>
> > + [[ -d <http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/
> > ovirt-system-tests/exported-artifacts> ]]
> > + mv <http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/
> > ovirt-system-tests/exported-artifacts/lago_logs> <
> > http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/
> > ovirt-system-tests/exported-artifacts/nosetests-002_bootstrap.py.xml> <
> > http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/
> > ovirt-system-tests/exported-artifacts/nosetests-004_basic_sanity.py.xml> <
> > http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/
> > ovirt-system-tests/exported-artifacts/test_logs> <
> > http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/402/
> > artifact/exported-artifacts/>
> > POST BUILD TASK : SUCCESS
> > END OF POST BUILD TASK : 0
> > Match found for :.* : True
> > Logical operation result is TRUE
> > Running script : #!/bin/bash -xe
> > echo "shell-scripts/mock_cleanup.sh"
> >
> > shopt -s nullglob
> >
> >
> > WORKSPACE="$PWD"
> >
> > # Make clear this is the cleanup, helps reading the jenkins logs
> > cat <<EOC
> > _______________________________________________________________________
> > #######################################################################
> > # #
> > # CLEANUP #
> > # #
> > #######################################################################
> > EOC
> >
> >
> > # Archive the logs, we want them anyway
> > logs=(
> > ./*log
> > ./*/logs
> > )
> > if [[ "$logs" ]]; then
> > tar cvzf exported-artifacts/logs.tgz "${logs[@]}"
> > rm -rf "${logs[@]}"
> > fi
> >
> > # stop any processes running inside the chroot
> > failed=false
> > mock_confs=("$WORKSPACE"/*/mocker*)
> > # Clean current jobs mockroot if any
> > for mock_conf_file in "${mock_confs[@]}"; do
> > [[ "$mock_conf_file" ]] || continue
> > echo "Cleaning up mock $mock_conf"
> > mock_root="${mock_conf_file##*/}"
> > mock_root="${mock_root%.*}"
> > my_mock="/usr/bin/mock"
> > my_mock+=" --configdir=${mock_conf_file%/*}"
> > my_mock+=" --root=${mock_root}"
> > my_mock+=" --resultdir=$WORKSPACE"
> >
> > #TODO: investigate why mock --clean fails to umount certain dirs
> > sometimes,
> > #so we can use it instead of manually doing all this.
> > echo "Killing all mock orphan processes, if any."
> > $my_mock \
> > --orphanskill \
> > || {
> > echo "ERROR: Failed to kill orphans on $chroot."
> > failed=true
> > }
> >
> > mock_root="$(\
> > grep \
> > -Po "(?<=config_opts\['root'\] = ')[^']*" \
> > "$mock_conf_file" \
> > )" || :
> > [[ "$mock_root" ]] || continue
> > mounts=($(mount | awk '{print $3}' | grep "$mock_root")) || :
> > if [[ "$mounts" ]]; then
> > echo "Found mounted dirs inside the chroot $chroot. Trying to
> > umount."
> > fi
> > for mount in "${mounts[@]}"; do
> > sudo umount --lazy "$mount" \
> > || {
> > echo "ERROR: Failed to umount $mount."
> > failed=true
> > }
> > done
> > done
> >
> > # Clean any leftover chroot from other jobs
> > for mock_root in /var/lib/mock/*; do
> > this_chroot_failed=false
> > mounts=($(mount | awk '{print $3}' | grep "$mock_root")) || :
> > if [[ "$mounts" ]]; then
> > echo "Found mounted dirs inside the chroot $mock_root." \
> > "Trying to umount."
> > fi
> > for mount in "${mounts[@]}"; do
> > sudo umount --lazy "$mount" \
> > || {
> > echo "ERROR: Failed to umount $mount."
> > failed=true
> > this_chroot_failed=true
> > }
> > done
> > if ! $this_chroot_failed; then
> > sudo rm -rf "$mock_root"
> > fi
> > done
> >
> > if $failed; then
> > echo "Aborting."
> > exit 1
> > fi
> >
> > # remove mock system cache, we will setup proxies to do the caching and
> > this
> > # takes lots of space between runs
> > shopt -u nullglob
> > sudo rm -Rf /var/cache/mock/*
> >
> > # restore the permissions in the working dir, as sometimes it leaves files
> > # owned by root and then the 'cleanup workspace' from jenkins job fails to
> > # clean and breaks the jobs
> > sudo chown -R "$USER" "$WORKSPACE"
> >
> > [ovirt_4.0_he-system-tests] $ /bin/bash -xe /tmp/
> > hudson2195449912416509862.sh
> > + echo shell-scripts/mock_cleanup.sh
> > shell-scripts/mock_cleanup.sh
> > + shopt -s nullglob
> > + WORKSPACE=<http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/>
> > + cat
> > _______________________________________________________________________
> > #######################################################################
> > # #
> > # CLEANUP #
> > # #
> > #######################################################################
> > + logs=(./*log ./*/logs)
> > + [[ -n ./ovirt-system-tests/logs ]]
> > + tar cvzf exported-artifacts/logs.tgz ./ovirt-system-tests/logs
> > ./ovirt-system-tests/logs/
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.he_
> > basic_suite_4.0.sh/
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.
> > he_basic_suite_4.0.sh/he_basic_suite_4.0.sh.log
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.install_packages/
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.
> > install_packages/state.log
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.
> > install_packages/build.log
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.
> > install_packages/stdout_stderr.log
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.
> > install_packages/root.log
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/state.log
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/build.log
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.
> > init/stdout_stderr.log
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.init/root.log
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.clean_rpmdb/
> > ./ovirt-system-tests/logs/mocker-fedora-23-x86_64.fc23.
> > clean_rpmdb/stdout_stderr.log
> > + rm -rf ./ovirt-system-tests/logs
> > + failed=false
> > + mock_confs=("$WORKSPACE"/*/mocker*)
> > + for mock_conf_file in '"${mock_confs[@]}"'
> > + [[ -n <http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/
> > ovirt-system-tests/mocker-fedora-23-x86_64.fc23.cfg> ]]
> > + echo 'Cleaning up mock '
> > Cleaning up mock
> > + mock_root=mocker-fedora-23-x86_64.fc23.cfg
> > + mock_root=mocker-fedora-23-x86_64.fc23
> > + my_mock=/usr/bin/mock
> > + my_mock+=' --configdir=<http://jenkins.ovirt.org/job/ovirt_4.0_he-
> > system-tests/ws/ovirt-system-tests'>
> > + my_mock+=' --root=mocker-fedora-23-x86_64.fc23'
> > + my_mock+=' --resultdir=<http://jenkins.ovirt.org/job/ovirt_4.0_he-
> > system-tests/ws/'>
> > + echo 'Killing all mock orphan processes, if any.'
> > Killing all mock orphan processes, if any.
> > + /usr/bin/mock --configdir=<http://jenkins.ovirt.org/job/ovirt_4.0_he-
> > system-tests/ws/ovirt-system-tests> --root=mocker-fedora-23-x86_64.fc23
> > --resultdir=<http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/>
> > --orphanskill
> > WARNING: Could not find required logging config file: <
> > http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/
> > ovirt-system-tests/logging.ini.> Using default...
> > INFO: mock.py version 1.2.18 starting (python version = 3.4.3)...
> > Start: init plugins
> > INFO: selinux enabled
> > Finish: init plugins
> > Start: run
> > Finish: run
> > ++ grep -Po '(?<=config_opts\['\''root'\''\] = '\'')[^'\'']*' <
> > http://jenkins.ovirt.org/job/ovirt_4.0_he-system-tests/ws/
> > ovirt-system-tests/mocker-fedora-23-x86_64.fc23.cfg>
> > + mock_root=fedora-23-x86_64-0c362156a2fa4a935ea8b988eb73b2ad
> > + [[ -n fedora-23-x86_64-0c362156a2fa4a935ea8b988eb73b2ad ]]
> > + mounts=($(mount | awk '{print $3}' | grep "$mock_root"))
> > ++ mount
> > ++ awk '{print $3}'
> > ++ grep fedora-23-x86_64-0c362156a2fa4a935ea8b988eb73b2ad
> > + :
> > + [[ -n '' ]]
> > + false
> > + shopt -u nullglob
> > + sudo rm -Rf /var/cache/mock/fedora-23-x86_64-
> > 0c362156a2fa4a935ea8b988eb73b2ad
> > + sudo chown -R jenkins <http://jenkins.ovirt.org/job/
> > ovirt_4.0_he-system-tests/ws/>
> > POST BUILD TASK : SUCCESS
> > END OF POST BUILD TASK : 1
> > Recording test results
> > Archiving artifacts
> >
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> <https://www.redhat.com/it/about/events/red-hat-open-source-day-2016>
[JIRA] (OVIRT-757) [VDSM] Add Travis CI to vdsm github project
by danken (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-757?page=com.atlassian.jira... ]
danken commented on OVIRT-757:
------------------------------
Thanks.
https://travis-ci.org/oVirt/vdsm/jobs/171955938 was triggered automatically right after I merged a patch in vdsm's gerrit.
> [VDSM] Add Travis CI to vdsm github project
> -------------------------------------------
>
> Key: OVIRT-757
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-757
> Project: oVirt - virtualization made easy
> Issue Type: By-EMAIL
> Reporter: Nir Soffer
> Assignee: Barak Korren
>
> Hi all,
> Vdsm includes now a travis configuration running vdsm tests
> on both Fedora 24 and Centos 7.
> You can see example builds here:
> https://travis-ci.org/nirs/vdsm/builds/166263228
> These builds are currently configured in my private vdsm
> fork on github.
> We want to enable Travis CI service on vdsm github project,
> so each time a patch is merged (sync from gerrit) or each time
> a pull request is submitted, a build is started.
> Developers can use these builds in 2 ways:
> - fork vdsm and enable travis on their fork
> - submit pull requests to get them tested
> When a build fails, travis sends email to the author about the
> failure, this works fine for developers forks without any
> additional configuration.
> You can configure github for us, or give Dan and/or me the
> permissions to configure the github project.
> Cheers,
> Nir
[JIRA] (OVIRT-757) [VDSM] Add Travis CI to vdsm github project
by Barak Korren (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-757?page=com.atlassian.jira... ]
Barak Korren commented on OVIRT-757:
------------------------------------
Enabled Travis on the vdsm repo:
https://travis-ci.org/oVirt/vdsm
Please check if it works for you
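For reference, enabling the service means Travis reads a .travis.yml from the repository root on every push and pull request. A minimal sketch of the docker-based Fedora/CentOS matrix described earlier in the thread (illustrative only, not vdsm's actual file; the run-tests.sh entry point is a placeholder):

```yaml
# Illustrative sketch only -- not vdsm's real .travis.yml.
sudo: required
services:
  - docker
env:
  - DISTRO=fedora:24
  - DISTRO=centos:7
script:
  - docker run --rm -v "$PWD":/src -w /src "$DISTRO" ./run-tests.sh
```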
> [VDSM] Add Travis CI to vdsm github project
> -------------------------------------------
>
> Key: OVIRT-757
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-757
> Project: oVirt - virtualization made easy
> Issue Type: By-EMAIL
> Reporter: Nir Soffer
> Assignee: Barak Korren
>
> Hi all,
> Vdsm includes now a travis configuration running vdsm tests
> on both Fedora 24 and Centos 7.
> > You can see example builds here:
> https://travis-ci.org/nirs/vdsm/builds/166263228
> These builds are currently configured in my private vdsm
> fork on github.
> We want to enable Travis CI service on vdsm github project,
> so each time a patch is merged (sync from gerrit) or each time
> a pull request is submitted, a build is started.
> Developers can use these builds in 2 ways:
> - fork vdsm and enable travis on their fork
> - submit pull requests to get them tested
> When a build fails, travis sends email to the author about the
> failure, this works fine for developers forks without any
> additional configuration.
> You can configure github for us, or give Dan and/or me the
> permissions to configure the github project.
> Cheers,
> Nir
[JIRA] (OVIRT-761) Re: Do we or can we have Mac OS slaves in Jenkins?
by Juan Hernández (oVirt JIRA)
[ https://ovirt-jira.atlassian.net/browse/OVIRT-761?page=com.atlassian.jira... ]
Juan Hernández commented on OVIRT-761:
--------------------------------------
CC [~omachace(a)redhat.com] Travis CI integration for the Python SDK is now enabled. For future releases it would be good to make sure that the build in Mac OS X works before actually releasing the new version.
> Re: Do we or can we have Mac OS slaves in Jenkins?
> --------------------------------------------------
>
> Key: OVIRT-761
> URL: https://ovirt-jira.atlassian.net/browse/OVIRT-761
> Project: oVirt - virtualization made easy
> Issue Type: By-EMAIL
> Reporter: sbonazzo
> Assignee: Barak Korren
>
> Opening a ticket.
> On Mon, Oct 10, 2016 at 11:44 AM, Juan Hernández <jhernand(a)redhat.com>
> wrote:
> > Hello,
> >
> > Part of the Ruby SDK uses native code that needs to be compiled during
> > installation of the gem. The Ruby SDK is a requirement of ManageIQ, and
> > many ManageIQ developers/users use Mac OS. In the past I had some issues
> > with this environment, as the C compiler there behaves in an slightly
> > different way than GCC. Those issues were discovered only when the SDK
> > was already released. To avoid that I would like to have Jenkins jobs
> > building/testing the SDK for Mac OS. Is that possible? Do we have Mac OS
> > slaves? If not, can we have them?
> >
> > Regards,
> > Juan Hernandez
> >
> > --
> > Dirección Comercial: C/Jose Bardasano Baos, 9, Edif. Gorbea 3, planta
> > 3ºD, 28016 Madrid, Spain
> > Inscrita en el Reg. Mercantil de Madrid – C.I.F. B82657941 - Red Hat S.L.
> > _______________________________________________
> > Infra mailing list
> > Infra(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> --
> Sandro Bonazzola
> Better technology. Faster innovation. Powered by community collaboration.
> See how it works at redhat.com
> <https://www.redhat.com/it/about/events/red-hat-open-source-day-2016>