[Users] [QE] oVirt 3.4.0 status
by Sandro Bonazzola
Hi,
oVirt 3.4.0 RC has been released and is currently in QA.
While we're preparing for this week's Test Day on 2014-03-06,
a few blockers have been opened.
The bug tracker [1] shows the following bugs blocking the release:
Whiteboard   Bug ID   Status    Summary
infra        1070742  POST      [database] support postgres user length within schema version
infra        1071536  POST      Notifier doesn't send any notifications via email
integration  1067058  POST      [database] old psycopg2 does not accept unicode string as port name
integration  1069193  POST      Release maven artifacts with correct version numbers
integration  1072307  POST      remote database cannot be used
virt         1069201  ASSIGNED  [REST]: Missing domain field on VM\Template object.
virt         1071997  POST      VM is not locked on run once
All remaining bugs have been re-targeted to 3.4.1.
Maintainers / Assignees:
- Please remember to rebuild your packages before 2014-03-11 09:00 UTC if you want them to be included in 3.4.0 GA.
- Please add bugs to the tracker if you think that 3.4.0 should not be released without them being fixed.
- Please provide an ETA for blocker bugs and fix them as soon as possible.
- Please fill in the release notes; the page has been created here [2].
- Please update http://www.ovirt.org/OVirt_3.4_TestDay before 2014-02-19
Be prepared for the upcoming oVirt 3.4.0 Test Day on 2014-03-06!
Thanks to everyone already testing 3.4.0 RC!
[1] https://bugzilla.redhat.com/1024889
[2] http://www.ovirt.org/OVirt_3.4.0_release_notes
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[Users] Recommended way to disconnect and remove iSCSI direct LUNs
by Boyan Tabakov
Hello,
I have ovirt 3.3.2 running with FC19 nodes. I have several virtual
machines that use directly attached iSCSI LUNs. Discovering, attaching
and using new LUNs works without issues (vdsm needed some patching to
work with Dell Equallogic, as described here
https://sites.google.com/a/keele.ac.uk/partlycloudy/ovirt, but that's a
separate issue). Also live migration works well between hosts and the
LUNs get properly attached to the migration target host.
However, I don't see any way to disconnect/remove LUNs that are no
longer needed (e.g. when a VM is removed). What is the recommended way to
remove old LUNs, so that the underlying iSCSI sessions are disconnected?
Especially if a VM has been migrated between hosts, it leaves the LUNs
connected on multiple nodes.
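(To illustrate what I mean by disconnecting the underlying sessions: the
manual cleanup I'd expect to have to script on each host looks roughly like
the following; the target IQN and portal below are just examples.)
# iscsiadm -m session
# iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol -p 10.0.0.10:3260 -u
# iscsiadm -m node -T iqn.2001-05.com.equallogic:example-vol -p 10.0.0.10:3260 -o delete
The first command lists the active sessions, the second logs the session out,
and the third removes the node record so it is not reconnected later.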
Thank you in advance!
Best regards,
Boyan Tabakov
[Users] [QE] oVirt 3.3.5 status
by Sandro Bonazzola
Hi,
now that 3.3.4 has been released, it's time to look at 3.3.5.
Here is the tentative timeline:
RC build: 2014-04-02
General availability: 2014-04-09
Nightly builds are available by enabling the oVirt 3.3 snapshot repositories:
# yum-config-manager --enable ovirt-3.3-snapshot
# yum-config-manager --enable ovirt-3.3-snapshot-static
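Once the snapshot repositories are enabled, picking up the latest nightly
is a plain yum update, for example (on an engine host; the exact package
set depends on what you have installed):
# yum update ovirt-engine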
As you can see, there won't be any beta release before the RC for 3.3.z, and
the same will apply to 3.4.z.
We now have nightly builds for the stable branches as well, so you can test
them whenever you want. If you're going to test, please add yourself
as a tester on [3].
Note to maintainers:
* For release candidate builds, we'll send all maintainers
a reminder on Thursday morning (UTC) of the week before the build.
Packages that aren't ready before the announced compose time won't be
added to the release candidate.
Please remember to build your packages the day before repository
composition if you want them included.
* Release notes must be filled in [1]
* A tracker bug has been created [2] and shows 1 bug blocking the release:
Whiteboard  Bug ID   Summary
virt        1071997  VM is not locked on run once
* Please add bugs to the tracker if you think that 3.3.5 should not be released without them being fixed.
[1] http://www.ovirt.org/OVirt_3.3.5_release_notes
[2] http://bugzilla.redhat.com/1071867
[3] http://www.ovirt.org/Testing/oVirt_3.3.5_testing
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[Users] [ANN] oVirt 3.3.4 release
by Sandro Bonazzola
The oVirt development team is pleased to announce the general
availability of oVirt 3.3.4 as of March 4th 2014. This release
solidifies oVirt as a leading KVM management application and open
source alternative to VMware vSphere.
oVirt is available now for Fedora 19 and Red Hat Enterprise Linux 6.5
(or similar).
This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed.
The existing ovirt-stable repository has been updated to deliver this
release; there is no need to enable any other repository.
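For an existing 3.3.x engine installed from ovirt-stable, updating should
therefore be the usual procedure, roughly:
# yum update "ovirt-engine-setup*"
# engine-setup
(This is a sketch of the standard flow; check the release notes for any
version-specific steps.)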
A new oVirt Node build is also available [2].
[1] http://www.ovirt.org/OVirt_3.3.4_release_notes
[2] http://resources.ovirt.org/releases/3.3.4/iso/ovirt-node-iso-3.0.4-1.0.20...
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[Users] Setting up an ovirt-node
by Andy Michielsen
Hello,
I did a clean install of an oVirt Node with the ISO provided by oVirt.
Everything went fine until I logged on with the admin user and configured
the oVirt engine's address.
Now I don't have any network connection any more.
I have two NICs available and configured only the first one, with a static IP.
When I check the network settings in the admin menu, it tells me I have
several bond devices.
If I log on as the root user, I see that under
/etc/sysconfig/network-scripts there is an ifcfg-em1, an ifcfg-ovirtmgmt
and an ifcfg-brem1.
The last two devices use the same static IP that I defined in ifcfg-em1.
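(For reference, my understanding is that a working setup would carry the IP
only on the ovirtmgmt bridge, with the NIC enslaved to it and no address of
its own; the device name and address below are just examples.)
/etc/sysconfig/network-scripts/ifcfg-em1:
DEVICE=em1
ONBOOT=yes
BRIDGE=ovirtmgmt
/etc/sysconfig/network-scripts/ifcfg-ovirtmgmt:
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0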
How can I get my network back up and running? I will need it to connect
to the engine, which is running on another server.
Kind regards.
[Users] Which iso to use when installing an ovirt-node
by Andy Michielsen
Hello,
Which ISO should I use to install an oVirt Node?
What is the difference between the el and fc versions? (I'm installing the
engine on a CentOS 6.5 minimal server.)
Which version should I use?
Kind regards.
Re: [Users] SPICE causes migration failure?
by Dafna Ron
Thanks Ted,
Please send the logs to the users list, since others may be able to help if I am offline.
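The usual places to collect from on a 3.x setup are roughly the following
(the per-VM log name is an example):
/var/log/vdsm/vdsm.log (on both source and destination hosts)
/var/log/libvirt/libvirtd.log (on both hosts)
/var/log/libvirt/qemu/web1.log (per-VM qemu log)
/var/log/ovirt-engine/engine.log (on the engine host)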
Thanks,
Dafna
On 03/03/2014 11:48 PM, Ted Miller wrote:
> Dafna, I will get the logs to you when I get a chance. I have an
> intern to keep busy this week, and that gets higher priority than
> oVirt (unfortunately). Ted Miller
>
> On 3/3/2014 12:26 PM, Dafna Ron wrote:
>> I don't see a reason why an open monitor would make migration fail - at
>> most, if there is a problem, I would expect the SPICE session to be
>> closed on the source and restarted on the destination.
>> Can you please attach the vdsm/libvirt/qemu logs from both hosts and the
>> engine logs, so that we can see the reason for the migration failure?
>>
>> Thanks,
>> Dafna
>>
>>
>>
>> On 03/03/2014 05:16 PM, Ted Miller wrote:
>>> I just got my Data Center running again, and am proceeding with some
>>> setup & testing.
>>>
>>> I created a VM (not doing anything useful)
>>> I clicked on the "Console" and had a SPICE console up (viewed in Win7).
>>> I had it printing the time on the screen once per second (while
>>> date; do sleep 1; done).
>>> I tried to migrate the VM to another host and got in the GUI:
>>>
>>> Migration started (VM: web1, Source: s1, Destination: s3, User:
>>> admin@internal).
>>>
>>> Migration failed due to Error: Fatal error during migration (VM:
>>> web1, Source: s1, Destination: s3).
>>>
>>> As I started the migration I happened to think "I wonder how they
>>> handle the SPICE console, since I think that is a link from the host
>>> to my machine, letting me see the VM's screen."
>>>
>>> After the failure, I tried shutting down the SPICE console, and
>>> found that the migration succeeded. I again opened SPICE and had a
>>> migration fail. Closed SPICE, and the migration succeeded.
>>>
>>> I can understand that migrating SPICE is a problem, but could we at
>>> least give the victim of this condition a meaningful error message?
>>> I have seen a lot of questions about failed migrations
>>> (mostly due to attached CDs), but I have never seen this discussed.
>>> If I had not had that particular thought cross my brain at that
>>> particular time, I doubt that SPICE would have been where I went
>>> looking for a solution.
>>>
>>> If this is the first time this issue has been raised, I am willing
>>> to file a bug.
>>>
>>> Ted Miller
>>> Elkhart, IN, USA
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
>
--
Dafna Ron