Tolik is already taking care of internal Jenkins-related tasks, and therefore
I'd like to propose him as a co-maintainer for the Node-related Jenkins tasks.
This will help the Node team keep its Jenkins jobs working.
If he is accepted, please grant him the necessary rights on the Node-related Jenkins jobs.
Itamar, by proxy of Brian, asked me to look into the bounce issue
we have on the users list. So after a few hours of careful log reading,
here are my findings.
The bounce situation
We (the ML admins) regularly get people who are unsubscribed automatically,
plus messages about bounces. People being unsubscribed automatically is
bad(tm), and the bounces are annoying.
A first look shows that our mails bounce because they are marked as spam
by Google. Google's documentation on the matter does not give much; some people point
to using DKIM, SPF, etc. But SPF is not for us, it is for the sender, and
DKIM is not mailing-list friendly, AFAIK, and requires upstream support, if I
understand correctly.
Not all mails are bounced, which is good. That means the IP itself is not
blacklisted.
So I took a few hours to look at every bounce, and roughly there are two
groups.
The first group is all mail from the same poster on the users list having
bounced at Google. Out of the 16 mails he sent, 16 were rejected by
Google. I have no idea why; I suspected the SPF policy, but it looked
OK. None of the mails in reply had an issue, so that's likely not a
content problem.
However, the IP address of the sender is in the SORBS blacklist, so
that's likely what triggers Google's spam filter.
Not much we can do, besides contacting him, which I will do.
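For reference, checking an address against SORBS is just a reverse-octet DNS
lookup. A minimal sketch (assuming IPv4 and the `dnsbl.sorbs.net` aggregate
zone; an NXDOMAIN answer means "not listed"):

```python
import socket

def dnsbl_name(ip, zone):
    """Build the DNSBL query name: reverse the IPv4 octets, append the zone."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ip, zone="dnsbl.sorbs.net"):
    """True if the blacklist publishes an A record for the reversed IP.
    A socket.gaierror (NXDOMAIN) means the IP is not listed."""
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True
    except socket.gaierror:
        return False
```

So for 192.0.2.1 the query goes to 1.2.0.192.dnsbl.sorbs.net.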
Roughly, that's the mail in this thread:
and the mails from Sandro :
The common point: use of goo.gl and ur1.ca. It turns out that both domains are
flagged as URI spam, since they're used by spammers to hide their links. So
I suspect that Gmail started to "learn" them as spam, as the rest
of the world did:
Again, not much we can do, besides asking people not to use these
services (which is not gonna work, I think).
If the core issue is "people are kicked out due to bounces", we can look
at raising the threshold in Mailman (as proposed by Brian), while at
the same time trying to reduce the number of bounces (i.e., a root-cause
investigation on each bounce when we see an issue).
The first part is easy (I think); the second is not hard, but we need
someone to look at the logs on a regular basis, so it takes some time.
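To make the threshold knob concrete, here is a toy model of Mailman 2's
bounce scoring (assumption: a hard bounce adds 1.0, a soft bounce 0.5, and a
member is disabled once the score reaches the per-list
`bounce_score_threshold`, which defaults to 5.0 and is what we would raise):

```python
def would_disable(bounces, threshold=5.0):
    """Toy model of Mailman 2 bounce scoring (assumed weights: hard=1.0,
    soft=0.5). `bounces` is an iterable of "hard"/"soft" strings; returns
    True if the member would be disabled under the given
    bounce_score_threshold."""
    score = sum(1.0 if b == "hard" else 0.5 for b in bounces)
    return score >= threshold
```

So five hard bounces disable a member at the default threshold, but not at a
threshold of 7.0 — which is the whole point of raising it.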
As a side note, our SpamAssassin setup was blacklisted by the DNSBL we
used (due to our use of linode.com's DNS:
http://uribl.com/refused.shtml), thus reducing its efficiency. I
fixed that by setting up a local cache, following the page I linked. If
anything weird happens, please tell us :)
Does anyone have an opinion or an idea?
Open Source and Standards, Sysadmin
15:50:12 Failed tests:
15:50:12 scheduleARecurringJob(org.ovirt.engine.core.utils.timer.DBSchedulerUtilQuartzImplTest): Unexpected exception occured -Failed to obtain DB
connection from data source 'EngineDS': java.sql.SQLException: Connections could not be acquired from the underlying database!
15:50:12 scheduleAJob(org.ovirt.engine.core.utils.timer.DBSchedulerUtilQuartzImplTest): Unexpected exception occured -Failed to obtain DB connection
from data source 'EngineDS': java.sql.SQLException: Connections could not be acquired from the underlying database!
Not sure if it's an infra or a devel issue.
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
I wanted to suggest that we automate the change of oVirt 3.6 bug status
from MODIFIED to ON_QA based on nightly snapshot builds.
The motivation should be clear: when bugs are fixed on the master
branch, the fix becomes readily available for verification as soon as
the next snapshot is built (and there's indeed someone to verify on the
master branch - the same person who was working on the master branch and
opened the bug!).
We currently only move them to ON_QA on milestone builds (e.g. alpha,
beta), but I don't think that's right for an open source project - the
status of bugs (targeted to the nearest release) should be up-to-date
with the actual state of the master branch.
We've encountered this problem while testing features for 3.6 over the last couple
of weeks, and since it's gonna be a long version, this situation will
likely occur often. So far I've been moving bugs to ON_QA myself, but I
don't really want to follow the snapshot builds manually (nor move the
bugs to ON_QA as soon as they're merged, in case the snapshot build
fails).
The downside would be that bugs could be VERIFIED at an early point in
the development, and later regressions could occur that would render the
verification obsolete. But this could happen just the same between
milestone builds.
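For what it's worth, the mechanical part is small. A sketch of what the
automation could look like with the python-bugzilla library (the product
name, milestone value, and comment text here are assumptions, and the
network-touching function is shown but not run — it needs a real API key):

```python
def pick_bugs(bugs, milestone):
    """Pure helper: from (id, status, target_milestone) tuples, pick the
    ids that are MODIFIED and targeted at the given milestone."""
    return [bug_id for bug_id, status, tm in bugs
            if status == "MODIFIED" and tm == milestone]

def move_to_onqa(milestone, api_key):
    """Move the selected bugs to ON_QA via python-bugzilla.
    Not called here: needs network access and a real Bugzilla API key."""
    import bugzilla  # pip install python-bugzilla
    bz = bugzilla.Bugzilla("bugzilla.redhat.com", api_key=api_key)
    found = bz.query(bz.build_query(product="oVirt",  # assumed product name
                                    status="MODIFIED",
                                    target_milestone=milestone))
    ids = pick_bugs([(b.id, b.status, b.target_milestone) for b in found],
                    milestone)
    if ids:
        bz.update_bugs(ids, bz.build_update(
            status="ON_QA",
            comment="Fix is included in the latest nightly snapshot."))
    return ids
```

Hooked to the nightly snapshot job, this would keep bug status in sync with
the master branch without anyone following the builds by hand.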
Build Number: 1361
Build Status: Failure
Triggered By: Triggered by Gerrit: http://gerrit.ovirt.org/37321
Changes Since Last Success:
Changes for Build #1338
[Yair Zaslavsky] setup: Changing task cleaner utility to also handle removal of commands
[Tolik Litovsky] Adds mail notifications on build failures
Changes for Build #1339
[Tomas Jelinek] userportal: Removed "os type" from list of basic user fields
Changes for Build #1340
[Maor Lipchuk] core: Add snapshot permission for live migration
Changes for Build #1341
[Gilad Chaplik] webadmin: display of template disk's disk profile
Changes for Build #1342
[Sandro Bonazzola] build: post ovirt-engine-3.5.1 branching
Changes for Build #1343
[Alon Bar-Lev] extapi: add default constructors for ExtKey, ExtUUID
Changes for Build #1344
[Tal Nisan] webadmin: Determine whether to fetch ISO domain images list correctly
Changes for Build #1345
[Yaniv Bronhaim] Due to commit a276f142 foreman id is not provided in vds static table
Changes for Build #1346
[Juan Hernandez] core: Support search disk by name
Changes for Build #1347
[Vered Volansky] core: Fine-tune Storage space alerts policy
Changes for Build #1348
[Liron Aravot] core: domain might remain locked on deactivation
Changes for Build #1349
[Alon Bar-Lev] aaa: normalize extension name within ENGINE_EXTENSION_ENABLED_
[David Caro] Added 3.5 to sdk-java jobs
[David Caro] Fixed triggers for ovirt-node
Changes for Build #1350
[Oved Ourfali] core: override SSL protocol to TLSv1
Changes for Build #1351
[Vered Volansky] core: Fix pom for 3.5.2
Changes for Build #1352
[Yedidyah Bar David] packaging: setup: sharpen the test for engine enabled
Changes for Build #1353
[Moti Asayag] engine: Bump openstack-java-sdk version to 3.0.6
Changes for Build #1354
[Gilad Chaplik] core: ovf_store isn't added
Changes for Build #1355
[Vered Volansky] core: Remove redundant threshold call
Changes for Build #1356
[pkliczewski] jsonrpc: version bump
[Tolik Litovsky] Adding HE plugin and VDSM plugin to be build in by default
Changes for Build #1357
[Amit Aviram] webadmin: Preventing moving a shareable disk to a Gluster domain
Changes for Build #1358
[Fred Rolland] webadmin : Change message on import data domain
Changes for Build #1359
[Maor Lipchuk] core: Use new permission for LSM
Changes for Build #1360
[Maor Lipchuk] restapi: Add API to support warning for attached Storage Domains
Changes for Build #1361
[Maor Lipchuk] core: Validate the return value of an internal query.
No tests ran.
I'm running 'yum update vdsm' from ovirt-master-snapshot every morning, and this morning I got the following error:
Error: Package: vdsm-4.17.0-333.gitbb706aa.el7.x86_64 (rhevm)
Requires: glusterfs >= 3.6
Installed: glusterfs-3.5.3-1.el7.x86_64 (@gluster)
glusterfs = 3.5.3-1.el7
Available: glusterfs-184.108.40.206rhs-1.el7.x86_64 (rhel)
glusterfs = 220.127.116.11rhs-1.el7
vdsm installed before trying to update:
Am I missing something?
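For reference, yum's complaint is just a version comparison: vdsm requires
glusterfs >= 3.6 and the installed package is 3.5.3, so a glusterfs 3.6
repository likely needs to be enabled. A simplified sketch of the
segment-wise comparison (real RPM comparison, e.g. rpm's labelCompare, also
handles epochs, releases, and alphanumeric segments):

```python
def version_tuple(v):
    """Split a dotted version into integer components ('3.5.3' -> (3, 5, 3))."""
    return tuple(int(part) for part in v.split("."))

def satisfies_ge(installed, required):
    """True if `installed` >= `required`, comparing segment by segment
    (missing segments count as 0, so '3.6' equals '3.6.0')."""
    a, b = version_tuple(installed), version_tuple(required)
    width = max(len(a), len(b))
    a += (0,) * (width - len(a))
    b += (0,) * (width - len(b))
    return a >= b
```

Here satisfies_ge("3.5.3", "3.6") is False, which is exactly why the
dependency fails.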
Intern at Red Hat Israel, RHEV-M QE Network Team