python-cpopen
by Sandro Bonazzola
Hi,
I'm looking at the python-cpopen package as a missing dependency for building VDSM within the CentOS Virt SIG.
I see that Fedora and EPEL have version 1.3, but our git repository has the following tags:
$ git remote -v
origin git://gerrit.ovirt.org/cpopen (fetch)
origin git://gerrit.ovirt.org/cpopen (push)
$ git tag --list
1.3.0
1.3.1
v1.4
We don't have a Jenkins build for cpopen; we rely on Fedora and EPEL releases.
I also see that the tar.gz used for building the Fedora RPM doesn't match any of the above tags:
http://bronhaim.fedorapeople.org/cpopen-1.3.tar.gz
and it doesn't exist anymore.
I suggest building cpopen-1.4 from the v1.4 tag and publishing the .tar.gz on the oVirt site.
Then bump the version on Fedora and EPEL to 1.4 accordingly.
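For reference, one way to produce such a tarball straight from the tag (just a sketch; the exact archive name is up to whoever publishes it):
$ git clone git://gerrit.ovirt.org/cpopen && cd cpopen
$ git archive --prefix=cpopen-1.4/ --output=cpopen-1.4.tar.gz v1.4
This way the tarball content always matches the tag exactly.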
Please keep tags and tarball versions aligned, or it will become a nightmare to track issues between them.
That said, is v1.4 to be considered formally released? Should I build from that tag for the CentOS Virt SIG?
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
9 years, 9 months
oVirt Node weekly meeting
by Fabian Deutsch
The following meeting has been modified:
Subject: oVirt Node weekly meeting
Organiser: "Fabian Deutsch" <fdeutsch(a)redhat.com>
Location: irc://irc.oftc.net#ovirt
Time: 4:00:00 PM - 4:30:00 PM GMT +01:00 Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna [MODIFIED]
Recurrence: Every Monday, no end date, effective 16 Feb, 2015
Invitees: devel(a)ovirt.org
*~*~*~*~*~*~*~*~*~*
Hey,
this is an invitation to the weekly oVirt Node devel meetings.
Anyone interested in or working on Node development is welcome.
9 years, 9 months
custom exit codes from engine-setup/cleanup
by Yedidyah Bar David
Hi all,
The patch [1] for [2] is mostly ready.
The patch [3] for otopi was already merged (by Alon) to master.
It includes the following codes:
0 - success
1 - general error - the default if there was an error
2 - initialization error - returned if there was an error while loading
plugins, before actually starting to run them
These apply to all otopi-based tools.
Patch [1], not yet merged, currently adds this code, affecting engine-cleanup:
11 - cleanup called without setup.
Returned in addition to the existing message:
Could not detect a completed product setup
Please use the cleanup utility only after a setup or after an upgrade from an older installation.
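As a rough illustration, an automated test could branch on these codes like this (a minimal sketch; the exit codes are the ones above, but the script itself is hypothetical):
import subprocess

rc = subprocess.call(['engine-cleanup'])
if rc == 0:
    print('success')
elif rc == 2:
    print('initialization error: otopi failed while loading plugins')
elif rc == 11:
    print('cleanup was called without a completed setup')
else:
    print('general error (exit code %d)' % rc)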
Comments are welcome:
What other codes do you want to see?
For automated testing or any other use?
Do you think we should group them and allocate ranges somehow?
Or just add a code whenever needed?
If the answer to the last question is 'Yes', there's no need to think very hard about the previous ones...
[1] http://gerrit.ovirt.org/35503
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1079726
[3] http://gerrit.ovirt.org/35632
Thanks,
--
Didi
9 years, 9 months
why is it called vdsm?
by Greg Sheremeta
Why is it called vdsm and not something a little more descriptive and project-related, like "ovirt-agent"?
Greg Sheremeta
Red Hat, Inc.
Sr. Software Engineer, RHEV
Cell: 919-807-1086
gshereme(a)redhat.com
9 years, 9 months
[HC] Weird issue while deploying hosted-engine
by Sandro Bonazzola
While deploying Hosted Engine on hyper-converged Gluster storage, I hit the following issue.
ovirt-hosted-engine-setup:
[ INFO ] Engine replied: DB Up!Welcome to Health Status!
Enter the name of the cluster to which you want to add the host (Default) [Default]:
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO ] Still waiting for VDSM host to become operational...
I accessed the engine; the VM and the engine were working correctly, installed using the 3.5 snapshot repositories.
host-deploy started on the host (see attached logs) and got stuck at:
2015-02-12 13:08:33 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:215 DIALOG:SEND ### Response is VALUE TIME=type:value or ABORT TIME
until several minutes later when:
2015-02-12 13:24:31 DEBUG otopi.context context._executeMethod:152 method exception
Traceback (most recent call last):
File "/tmp/ovirt-ePMfjwrlpe/pythonlib/otopi/context.py", line 142, in _executeMethod
method['method']()
File "/tmp/ovirt-ePMfjwrlpe/otopi-plugins/otopi/system/clock.py", line 111, in _set_clock
format="YYYYmmddHHMMSS+0000"
File "/tmp/ovirt-ePMfjwrlpe/otopi-plugins/otopi/dialog/machine.py", line 235, in queryValue
opcode, variable = self._readline().split(' ', 1)
File "/tmp/ovirt-ePMfjwrlpe/pythonlib/otopi/dialog.py", line 259, in _readline
raise IOError(_('End of file'))
IOError: End of file
resumed the process.
Around the time host-deploy got stuck, I see in the sanlock logs:
2015-02-12 13:08:25+0100 7254 [683]: open error -1
/rhev/data-center/mnt/glusterSD/minidell.home:_hosted__engine__glusterfs/a7dba6e3-09ac-430a-8d6e-eea17cf18b8f/images/51156c24-5ea1-42df-bf33-daebf2c4780c/3719b006-68a2-4742-964c-413a3f77c0b4.lease
2015-02-12 13:08:25+0100 7254 [683]: r4 release_token open error -2
2015-02-12 13:08:31+0100 7260 [16910]: a7dba6e3 aio collect 0 0x7f1d640008c0:0x7f1d640008d0:0x7f1d7ec52000 result -107:0 match res
2015-02-12 13:08:31+0100 7260 [16910]: s3 delta_renew read rv -107 offset 0
/rhev/data-center/mnt/glusterSD/minidell.home:_hosted__engine__glusterfs/a7dba6e3-09ac-430a-8d6e-eea17cf18b8f/dom_md/ids
2015-02-12 13:08:31+0100 7260 [16910]: s3 renewal error -107 delta_length 0 last_success 7239
2015-02-12 13:08:31+0100 7260 [16910]: a7dba6e3 aio collect 0 0x7f1d640008c0:0x7f1d640008d0:0x7f1d7ec52000 result -107:0 match res
2015-02-12 13:08:31+0100 7260 [16910]: s3 delta_renew read rv -107 offset 0
/rhev/data-center/mnt/glusterSD/minidell.home:_hosted__engine__glusterfs/a7dba6e3-09ac-430a-8d6e-eea17cf18b8f/dom_md/ids
2015-02-12 13:08:31+0100 7260 [16910]: s3 renewal error -107 delta_length 0 last_success 7239
2015-02-12 13:08:32+0100 7261 [16910]: a7dba6e3 aio collect 0 0x7f1d640008c0:0x7f1d640008d0:0x7f1d7ec52000 result -107:0 match res
2015-02-12 13:08:32+0100 7261 [16910]: s3 delta_renew read rv -107 offset 0
/rhev/data-center/mnt/glusterSD/minidell.home:_hosted__engine__glusterfs/a7dba6e3-09ac-430a-8d6e-eea17cf18b8f/dom_md/ids
2015-02-12 13:08:32+0100 7261 [16910]: s3 renewal error -107 delta_length 0 last_success 7239
And in VDSM logs:
MainThread::DEBUG::2015-02-12 13:08:25,529::vdsm::58::vds::(sigtermHandler) Received signal 15
MainThread::DEBUG::2015-02-12 13:08:25,529::protocoldetector::144::vds.MultiProtocolAcceptor::(stop) Stopping Acceptor
MainThread::INFO::2015-02-12 13:08:25,530::__init__::565::jsonrpc.JsonRpcServer::(stop) Stopping JsonRPC Server
Detector thread::DEBUG::2015-02-12 13:08:25,530::protocoldetector::115::vds.MultiProtocolAcceptor::(_cleanup) Cleaning Acceptor
MainThread::INFO::2015-02-12 13:08:25,531::vmchannels::188::vds::(stop) VM channels listener was stopped.
MainThread::INFO::2015-02-12 13:08:25,535::momIF::91::MOM::(stop) Shutting down MOM
MainThread::DEBUG::2015-02-12 13:08:25,535::task::595::Storage.TaskManager.Task::(_updateState) Task=`c52bd13a-24f0-4d8a-97d8-f6c53ca8bddc`::moving
from state init -> state preparing
MainThread::INFO::2015-02-12 13:08:25,535::logUtils::44::dispatcher::(wrapper) Run and protect: prepareForShutdown(options=None)
Thread-11::DEBUG::2015-02-12 13:08:25,535::storageServer::706::Storage.ConnectionMonitor::(_monitorConnections) Monitoring stopped
ioprocess communication (16795)::ERROR::2015-02-12 13:08:25,530::__init__::152::IOProcessClient::(_communicate) IOProcess failure
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", line 107, in _communicate
raise Exception("FD closed")
Exception: FD closed
ioprocess communication (16795)::DEBUG::2015-02-12 13:08:25,537::__init__::298::IOProcessClient::(_run) Starting IOProcess...
Followed by:
MainThread::INFO::2015-02-12 13:08:25,548::domainMonitor::137::Storage.DomainMonitor::(close) Stopping domain monitors
MainThread::INFO::2015-02-12 13:08:25,549::domainMonitor::114::Storage.DomainMonitor::(stopMonitoring) Stop monitoring
a7dba6e3-09ac-430a-8d6e-eea17cf18b8f
Thread-94::DEBUG::2015-02-12 13:08:25,549::domainMonitor::192::Storage.DomainMonitorThread::(_monitorLoop) Stopping domain monitor for
a7dba6e3-09ac-430a-8d6e-eea17cf18b8f
Thread-94::INFO::2015-02-12 13:08:25,549::clusterlock::245::Storage.SANLock::(releaseHostId) Releasing host id for domain
a7dba6e3-09ac-430a-8d6e-eea17cf18b8f (id: 1)
Thread-94::DEBUG::2015-02-12 13:08:25,549::domainMonitor::201::Storage.DomainMonitorThread::(_monitorLoop) Unable to release the host id 1 for domain
a7dba6e3-09ac-430a-8d6e-eea17cf18b8f
Traceback (most recent call last):
File "/usr/share/vdsm/storage/domainMonitor.py", line 198, in _monitorLoop
self.domain.releaseHostId(self.hostId, unused=True)
File "/usr/share/vdsm/storage/sd.py", line 480, in releaseHostId
self._clusterLock.releaseHostId(hostId, async, unused)
File "/usr/share/vdsm/storage/clusterlock.py", line 252, in releaseHostId
raise se.ReleaseHostIdFailure(self._sdUUID, e)
ReleaseHostIdFailure: Cannot release host id: (u'a7dba6e3-09ac-430a-8d6e-eea17cf18b8f', SanlockException(16, 'Sanlock lockspace remove failure',
'Device or resource busy'))
The supervdsm logs don't show anything interesting, but:
# service supervdsmd status
Redirecting to /bin/systemctl status supervdsmd.service
supervdsmd.service - "Auxiliary vdsm service for running helper functions as root"
Loaded: loaded (/usr/lib/systemd/system/supervdsmd.service; static)
Active: inactive (dead) since gio 2015-02-12 13:08:30 CET; 27min ago
Main PID: 15859 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/supervdsmd.service
feb 12 13:08:30 minidell.home systemd[1]: Stopping "Auxiliary vdsm service for running helper functions as root"...
feb 12 13:08:30 minidell.home daemonAdapter[15859]: Traceback (most recent call last):
feb 12 13:08:30 minidell.home daemonAdapter[15859]: File "/usr/lib64/python2.7/multiprocessing/util.py", line 268, in _run_finalizers
feb 12 13:08:30 minidell.home daemonAdapter[15859]: finalizer()
feb 12 13:08:30 minidell.home daemonAdapter[15859]: File "/usr/lib64/python2.7/multiprocessing/util.py", line 201, in __call__
feb 12 13:08:30 minidell.home daemonAdapter[15859]: res = self._callback(*self._args, **self._kwargs)
feb 12 13:08:30 minidell.home daemonAdapter[15859]: OSError: [Errno 2] No such file or directory: '/var/run/vdsm/svdsm.sock'
feb 12 13:08:30 minidell.home python[15859]: DIGEST-MD5 client mech dispose
feb 12 13:08:30 minidell.home python[15859]: DIGEST-MD5 common mech dispose
feb 12 13:08:30 minidell.home systemd[1]: Stopped "Auxiliary vdsm service for running helper functions as root".
After a while the VM became unreachable, as ovirt-hosted-engine-setup shows:
2015-02-12 13:10:33 DEBUG otopi.plugins.ovirt_hosted_engine_setup.engine.add_host add_host._wait_host_ready:183 Error fetching host state:
[ERROR]::oVirt API connection failure, [Errno 110] Connection timed out
2015-02-12 13:10:33 DEBUG otopi.plugins.ovirt_hosted_engine_setup.engine.add_host add_host._wait_host_ready:189 VDSM host in state
Gluster logs don't show anything special.
I'm guessing that host-deploy being stuck for 15 minutes caused the issue here, but I don't see any relevant change that may have caused it.
I'm attaching the tar.gz with the code used for ovirt-hosted-engine-setup; the rest is 3.5-snapshot over RHEL 7.1.
Any hint is welcome.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
9 years, 9 months
Intro
by ilya musayev
Hi oVirt Devel Team,
Thought I would introduce myself, being a newbie here. My name is Ilya, and
I work for one of the larger IT companies in the Bay Area.
I'm interested in learning more about what oVirt does, especially from a
developer's point of view, as we may have some integration initiatives.
I'm also actively involved in the Apache CloudStack project as a PMC member
and committer.
Thanks
ilya
9 years, 9 months
Column and Cell moves and renames
by Greg Sheremeta
Hi,
These two patches were just merged [1] [2]. I cleaned up the table columns and cells a bit.
Anything abstract was renamed with an "Abstract" prefix. For example, SortableColumn was renamed to AbstractSortableColumn, following the usual Java convention.
I split the *.column packages into *.column and *.cell. Anything that is a Cell now lives in *.cell.
Please keep it maintained this way :)
Thanks,
Greg
[1] http://gerrit.ovirt.org/#/c/36742/
[2] http://gerrit.ovirt.org/#/c/36745/
Greg Sheremeta
Red Hat, Inc.
Sr. Software Engineer, RHEV
Cell: 919-807-1086
gshereme(a)redhat.com
9 years, 9 months
Re: [ovirt-devel] Can't join vdsm(4.17.0-390.gita3163f3.el7.x86_64) to Cluster 3.6
by Oved Ourfali
On Feb 11, 2015 10:49 PM, Michael Burman <mburman(a)redhat.com> wrote:
>
> Hello Yaniv and all!
>
> So i did this:
> I tried to install clean rhel7.1 host with the right libvirt version(libvirt-1.2.8-16.el7.x86_64,vdsm-4.17.0-394.gitb237397.el7.x86_64) and failed to join this host under 3.6 cluster.
> Host red-vds2.qa.lab.tlv.redhat.com is compatible with versions (3.0,3.1,3.2,3.3,3.4,3.5) and cannot join Cluster mb3.6 which is set to version 3.6.
> After the failure, I checked /usr/share/vdsm/dsaversion.py and saw that:
> version_info = {
> 'version_name': version_name,
> 'software_version': software_version,
> 'software_revision': software_revision,
> 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'],
> 'supportedProtocols': ['2.2', '2.3'],
> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'],
>
> So I added 3.6 to the cluster levels and engine support lines, restarted VDSM, and finally managed to join my rhel7.1 host to a 3.6 cluster.
>
> All of this means that rhel7.1 hosts with vdsm 4.17.0 and libvirt version libvirt-1.2.8-16.el7.x86_64 are still not supported by default for 3.6 clusters, and the dsaversion.py file must be edited manually in order to join a 3.6 cluster.
> I guess this must be changed.
>
+1 on that one.
> Best regards,
> Michael B
>
>
> ----- Original Message -----
> From: "Michael Burman" <mburman(a)redhat.com>
> To: "ybronhei" <ybronhei(a)redhat.com>
> Cc: devel(a)ovirt.org, infra(a)ovirt.org
> Sent: Monday, February 9, 2015 10:50:28 AM
> Subject: Re: [ovirt-devel] Can't join vdsm(4.17.0-390.gita3163f3.el7.x86_64) to Cluster 3.6
>
> Thank you all!
>
> OK, so how can I get the right libvirt version for RHEL 7 (not 7.1) without any dependency issues?
> I want to join a RHEL 7 host, with the right libvirt version, to a 3.6 cluster. Is it possible?
>
> Michael B
>
>
> ----- Original Message -----
> From: "ybronhei" <ybronhei(a)redhat.com>
> To: "Oved Ourfali" <oourfali(a)redhat.com>, "Sandro Bonazzola" <sbonazzo(a)redhat.com>, "Dan Kenigsberg" <danken(a)redhat.com>
> Cc: infra(a)ovirt.org, devel(a)ovirt.org, "Michael Burman" <mburman(a)redhat.com>
> Sent: Monday, February 9, 2015 10:03:56 AM
> Subject: Re: [ovirt-devel] Can't join vdsm(4.17.0-390.gita3163f3.el7.x86_64) to Cluster 3.6
>
> On 02/09/2015 08:35 AM, Oved Ourfali wrote:
> >
> > On Feb 9, 2015 9:31 AM, Sandro Bonazzola <sbonazzo(a)redhat.com> wrote:
> >>
> >> On 08/02/2015 16:15, Michael Burman wrote:
> >>> Hi everyone!
> >>>
> >>> Does anyone know when it will be possible to join vdsm-4.17.0 to a 3.6 cluster?
> >>> I know that the libvirt version is not supported, but shouldn't it be, even if it's RHEL 7?
> >>>
> >>> I changed the /usr/share/vdsm/dsaversion.py file to:
> >>> 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5', '3.6'],
> >>> 'supportedProtocols': ['2.2', '2.3'],
> >>> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5', '3.6'],
> >>>
> >>> But caps.py is still:
> >>> if not hasattr(libvirt, 'VIR_MIGRATE_AUTO_CONVERGE'):
> >>> return _dropVersion('3.6',
> >>> 'VIR_MIGRATE_AUTO_CONVERGE not found in libvirt,'
> >>> ' support for clusterLevel >= 3.6 is disabled.'
> >>> ' For Fedora 20 users, please consider upgrading'
> >>> ' libvirt from the virt-preview repository')
> >>>
> >>> if not hasattr(libvirt, 'VIR_MIGRATE_COMPRESSED'):
> >>> return _dropVersion('3.6',
> >>> 'VIR_MIGRATE_COMPRESSED not found in libvirt,'
> >>> ' support for clusterLevel >= 3.6 is disabled.'
> >>> ' For Fedora 20 users, please consider upgrading'
> >>> ' libvirt from the virt-preview repository')
> >>>
> >>> libvirt is still disabled for 3.6 clusters...
> >>>
> >>>
> >>> I already tried to rebuild from Fedora (with Sandro's appreciated help), but had a lot of dependency issues.
> >>>
> >>> I'm running on-
> >>> ovirt-engine-3.6.0-0.0.master.20150206122208.gitd429375.el6.noarch
> >>> vdsm-4.17.0-390.gita3163f3.el7.x86_64
> >>> libvirt-1.1.1-29.el7_0.7.x86_64
> >>>
> >>> I would like to know if and when it will be possible to join a rhel7 host (vdsm-4.17.0) to a 3.6 cluster.
> >>
> >> Moving to devel list, not really an infra issue.
> >>
> >
> > Yaniv, you had a patch for that, but it was abandoned. Why?
> Because we don't need it; I explained there.
> The cluster check happens only after the supportedVDSMVersion check does not catch.
> We failed to add the host to the cluster because of the libvirt version.
> As it is now, with the right versions, you should be able to add vdsm
> 4.17 to the latest engine regardless of the cluster level.
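> Roughly, the check order is (illustrative pseudocode only, not the actual engine code):
>
>     # hypothetical sketch of the compatibility check order described above
>     if host_vdsm_version in engine_supported_vdsm_versions:
>         add_host()      # this check normally catches first
>     elif cluster_level in host_reported_cluster_levels:
>         add_host()      # clusterLevels as reported by VDSM caps
>     else:
>         reject_host()   # here: the libvirt checks dropped 3.6 from clusterLevels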
> >
> >>>
> >>> Best regards,
> >>>
> >>> Michael B
> >>>
> >>>
> >>>
> >>
> >>
> >> --
> >> Sandro Bonazzola
> >> Better technology. Faster innovation. Powered by community collaboration.
> >> See how it works at redhat.com
>
>
> --
> Yaniv Bronhaim.
>
> --
> Michael Burman
> Intern in RedHat Israel, RHEV-M QE Network Team
>
> Mobile: 054-5355725
> IRC: mburman
9 years, 9 months
[QE][ACTION REQUIRED] oVirt 3.6.0 status
by Sandro Bonazzola
Hi, here's an update on 3.6 status on the integration / rel-eng side.
ACTION: Features proposed for 3.6.0 must now be collected in the 3.6 Google doc [1] and reviewed by maintainers.
Once the review process is finished, the remaining key milestones for this release will be scheduled.
For reference, external project schedules we're tracking are:
Fedora 22: 2015-05-19
Fedora 20 End Of Life: 2015-06-19 (1 month after Fedora 22 release)
Foreman 1.8.0: 2015-04-01
GlusterFS 3.7: 2015-04-29
OpenStack Kilo: 2015-04-30
QEMU 2.3.0: 2015-03-27
The tracker bug for 3.6.0 [2] currently shows no blockers.
There are 545 bugs [3] targeted to 3.6.0.
Excluding node and documentation bugs, we have 503 bugs [4] targeted to 3.6.0.
              NEW  ASSIGNED  POST  Total
                1         0     0      1
docs           11         0     0     11
gluster        25         2     1     28
i18n            2         0     0      2
infra          80        10     3     93
integration    59         4     2     65
network        29         2     8     39
node           26         2     3     31
ppc             0         0     1      1
sla            56         3     1     60
spice           1         0     0      1
storage        76         4     5     85
ux             35         1     3     39
virt           74         6     9     89
Total         475        34    36    545
oVirt Live for 3.6 has been rebased on EL7. Nightly builds are available here: http://jenkins.ovirt.org/job/ovirt_live_create_iso/
[1] http://goo.gl/9X3G49
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1155425
[3] https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_release%3A3.6....
[4] http://goo.gl/ZbUiMc
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
9 years, 9 months