[Ovirt] [CQ weekly status] [07-12-2018]
by Dafna Ron
Hi,
This mail is to provide the current status of CQ and allow people to review
status before and after the weekend.
Please refer to the colour map below for further information on the meaning
of the colours.
*CQ-4.2*: YELLOW (#1)
The main issues were packaging related:
1. We needed to replace the 7.6 image due to segmentation faults during
add_host.
2. Missing dependencies for qemu-kvm-ev - fixed with patch:
https://gerrit.ovirt.org/#/c/96055/
*CQ-Master:* YELLOW (#1)
1. We needed to replace the 7.6 image due to segmentation faults during
add_host (same as 4.2).
2. The vdsm project moved to STDCI V2, which caused failures due to a
check-patch job that had been failing for a while. Although this created
failures, vdsm still continued running the V1 jobs, which meant the tested
repo still had updated vdsm packages.
Happy week!
Dafna
-------------------------------------------------------------------------------------------------------------------
COLOUR MAP
Green = job has been passing successfully
** green for more than 3 days may suggest we need a review of our test
coverage
1. 1-3 days GREEN (#1)
2. 4-7 days GREEN (#2)
3. Over 7 days GREEN (#3)
Yellow = intermittent failures for different projects but no lasting or
current regressions
** intermittent failures are a sign of a healthy project, as we expect a
number of failures during the week
** I will not report any of the solved failures or regressions.
1. Solved job failures YELLOW (#1)
2. Solved regressions YELLOW (#2)
Red = job has been failing
** Active failures. The colour will change based on how long the
project(s) have been broken. Only active regressions are reported.
1. 1-3 days RED (#1)
2. 4-7 days RED (#2)
3. Over 7 days RED (#3)
Fwd: [CentOS-announce] Announcing the release of Gluster 5 on CentOS Linux 7 x86_64
by Sandro Bonazzola
fyi
---------- Forwarded message ---------
From: Niels de Vos <ndevos(a)redhat.com>
Date: Thu, 6 Dec 2018, 20:00
Subject: [CentOS-announce] Announcing the release of Gluster 5 on CentOS
Linux 7 x86_64
To: <centos-announce(a)centos.org>
I am happy to announce the General Availability of Gluster 5 for CentOS
7 on x86_64. These packages are following the upstream Gluster Community
releases, and will receive monthly bugfix updates.
Gluster 5 is expected to receive updates until the end of October 2019.
The maintenance and release schedule can be found at:
https://www.gluster.org/community/release-schedule/
Users of CentOS 7 can now simply install Gluster 5 with only these two
commands:
# yum install centos-release-gluster
# yum install glusterfs-server
The centos-release-gluster package is delivered via CentOS Extras repos.
This contains all the metadata and dependency information needed to
install Gluster 5. The actual package that will get installed is
centos-release-gluster5. Users of Gluster 4.1 can stay on that release
until June 2019.
Users of Gluster 4.0 and earlier will need to manually upgrade by
uninstalling the centos-release-gluster-legacy package, and replacing it
with either the Gluster 5 or 4.1 version. Additional details about the
upgrade process are linked in the announcement from the Gluster
Community:
https://lists.gluster.org/pipermail/announce/2018-October/000115.html
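As a rough sketch of that swap on CentOS 7 (the centos-release-gluster41
package name is an assumption following the repository naming above; use it
instead of centos-release-gluster if you want to stay on 4.1):
# yum remove centos-release-gluster-legacy
# yum install centos-release-gluster
# yum update glusterfs-server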
We have a quickstart guide built specifically around these packages; it
makes for a good introduction to Gluster and will help get you started in
just a few simple steps. The quickstart is available at
https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart
More details about the packages that the Gluster project provides in the
Storage SIG are available in the documentation:
https://wiki.centos.org/SpecialInterestGroup/Storage/Gluster
The centos-release-gluster* repositories offer additional packages that
enhance the usability of Gluster itself. Utilities and tools that worked
with previous versions of Gluster are expected to keep working. If there
are any problems, or requests for additional tools and applications to be
provided, just send us an email with your suggestions. The current list of
packages that are available (or planned to become available) can be found
here:
https://wiki.centos.org/SpecialInterestGroup/Storage/Gluster/Ecosystem-pkgs
We welcome all feedback, comments and contributions. You can get in
touch with the CentOS Storage SIG on the centos-devel mailing list
(https://lists.centos.org ) and with the Gluster developer and user
communities at https://www.gluster.org/mailman/listinfo . We are also
available on IRC in #gluster on irc.freenode.net, and on Twitter at
@gluster.
Cheers,
Niels de Vos
Storage SIG member & Gluster maintainer
_______________________________________________
CentOS-announce mailing list
CentOS-announce(a)centos.org
https://lists.centos.org/mailman/listinfo/centos-announce
[ANN] oVirt 4.3.0 Second Alpha Release is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the Second
Alpha Release of oVirt 4.3.0, as of December 6th, 2018
This is pre-release software. This pre-release should not be used in
production.
Please take a look at our community page[1] to learn how to ask questions
and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This update is the second alpha release of the 4.3.0 version.
This release brings more than 80 enhancements and more than 330 bug fixes
on top of the oVirt 4.2 series.
What's new in oVirt 4.3.0?
* Q35 chipset, with support for booting via UEFI and Secure Boot
* Skylake-server and AMD EPYC support
* New SMBus driver in Windows guest tools
* Improved support for v2v
* Tech preview for oVirt on Fedora 28
* Hundreds of bug fixes on top of the oVirt 4.2 series
* New VM portal details page (see a preview here:
https://imgur.com/a/ExINpci)
* New Cluster upgrade UI
* OVN security groups
* IPv6 (static host addresses)
* LLDP Labeler
* Support for 3.6 and 4.0 data centers, clusters and hosts has been removed
* Now using PostgreSQL 10
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
* oVirt Node 4.3 (available for x86_64 only)
Experimental tech preview for x86_64 and s390x architectures for Fedora 28
is also included.
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
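As a minimal sketch of getting started (the ovirt-release43-pre.rpm package
name and URL are assumptions based on the pre-release repository layout
under [4]; the release notes draft [3] is authoritative):
# yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release43-pre.rpm
# yum install ovirt-engine
# engine-setup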
Notes:
- oVirt Appliance is already available for both CentOS 7 and Fedora 28
(tech preview).
- oVirt Node NG is already available for CentOS 7.
- oVirt Node NG for Fedora 28 (tech preview) is facing build issues on
Jenkins and will be released as soon as possible.
Additional Resources:
* Read more about the oVirt 4.3.0 release highlights:
http://www.ovirt.org/release/4.3.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.3.0/
[4] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://red.ht/sig>
More logs from VNC/qemu/SASL
by Tomasz Barański
Hello
I'm working on supporting the VNC console on FIPS-enabled hosts[1]. I made
qemu use SASL as the authentication method instead of regular passwords.
However, no matter what I do, I can't get it to accept the credentials
provided by a VNC client.
Is there a way to get some qemu/SASL logs? I need to understand why the
credentials are not accepted.
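For context, a minimal sketch of the setup being described (the mechanism,
database path and log level here are assumptions based on stock
libvirt/cyrus-sasl defaults, not taken from this bug):
# /etc/libvirt/qemu.conf - have libvirt start qemu with SASL auth for VNC
vnc_sasl = 1
# /etc/sasl2/qemu.conf - cyrus-sasl configuration read by qemu
mech_list: scram-sha-1
sasldb_path: /etc/qemu/passwd.db
log_level: 7
The log_level option raises cyrus-sasl's own verbosity; its messages should
end up in syslog under the auth facility. Users would then be added with
saslpasswd2 -f /etc/qemu/passwd.db <user> (the database path again being a
hypothetical example).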
Tomo
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1595536
planned Jenkins restart
by Evgheni Dereveanchin
Hi everyone,
I'll be performing a planned Jenkins restart within the next hour.
No new jobs will be scheduled during this maintenance period.
I will inform you once it is over.
Regards,
Evgheni Dereveanchin