[Users] [ANN] oVirt 3.4.0 Release Candidate is now available

The oVirt team is pleased to announce that the 3.4.0 Release Candidate is now available for testing.

Release notes and information on the changes in this update are still being worked on and will be available soon on the wiki [1]. Please make sure to follow the install instructions from the release notes if you're going to test it. The existing ovirt-3.4.0-prerelease repository has been updated to deliver this release candidate and future refreshes until the final release.

An oVirt Node ISO is already available, unchanged from the third beta.

You're welcome to join us in testing this release candidate on next week's test day [2], scheduled for 2014-03-06!

[1] http://www.ovirt.org/OVirt_3.4.0_release_notes
[2] http://www.ovirt.org/OVirt_3.4_Test_Day

--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

You're welcome to join us testing this release candidate in next week test day [2] scheduled for 2014-03-06!
[1] http://www.ovirt.org/OVirt_3.4.0_release_notes [2] http://www.ovirt.org/OVirt_3.4_Test_Day
I think Known Issues should list some information about Gluster, such as the fact that libgfapi is not currently being used even when choosing GlusterFS instead of POSIXFS; instead it creates a POSIX mount and uses that. This was an advertised 3.3 feature, so this would be considered a regression or known issue, right?

I was told it was due to BZ #1017289. This has been observed in Fedora 19, though that BZ lists RHEL6.

Thanks!
-Brad

On 02/28/2014 06:24 PM, Brad House wrote:
You're welcome to join us testing this release candidate in next week test day [2] scheduled for 2014-03-06!
[1] http://www.ovirt.org/OVirt_3.4.0_release_notes [2] http://www.ovirt.org/OVirt_3.4_Test_Day
Known issues should list some information about Gluster I think. Such as the fact that libgfapi is not currently being used even when choosing GlusterFS instead of POSIXFS, instead it creates a Posix mount and uses that. This was an advertised 3.3 feature, so this would be considered a regression or known issue, right?
It got disabled for 3.3.1 as well once we found the gap around snapshots.
I was told it was due to BZ #1017289
This has been observed in Fedora 19, though that BZ lists RHEL6.
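For anyone reproducing this on a test host, one quick way to tell which access path a running VM actually got is to look at the qemu command line: with libgfapi the disk shows up as a gluster:// URL, while with a FUSE/POSIX mount it is an ordinary local path. A minimal sketch (the helper function name is an assumption for illustration, not part of oVirt):

```shell
#!/bin/sh
# Hypothetical helper: given a qemu-kvm command line (e.g. from
# `ps -o args= -C qemu-kvm`), report whether a disk is accessed via
# libgfapi (gluster:// URL) or via a FUSE/POSIX mount path.
disk_access_mode() {
  case "$1" in
    *file=gluster://*) echo "libgfapi" ;;
    *)                 echo "fuse-mount" ;;
  esac
}
```

Usage on a host would be something like `disk_access_mode "$(ps -o args= -C qemu-kvm)"`.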

Started testing this on two self-hosted clusters, with mixed results. There were updates from 3.4.0 beta 3.

On both, I got informed the system was going to reboot in 2 minutes while it was still installing yum updates. On the faster system, the whole update process finished before the 2 minutes were up, the VM restarted, and all appears normal.

On the other, slower cluster, the 2 minutes hit while the yum updates were still being installed, and the system rebooted. It continued rebooting every 3 minutes or so, and the engine console web pages were not available because the engine wasn't starting. It did this at least 3 times before I went ahead and reran engine-setup, which completed successfully. The system stopped restarting and the web interface was available again. A quick perusal of system logs and engine-setup logs didn't reveal what requested the reboot.

That was rather impolite of something to do that without warning :) At least it was recoverable. Scheduling the reboot while the yum updates were still running seems like a poor idea as well.

-Darrell

On Feb 28, 2014, at 10:11 AM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:
The oVirt team is pleased to announce that the 3.4.0 Release Candidate is now available for testing.
Release notes and information on the changes for this update are still being worked on and will be available soon on the wiki[1]. Please ensure to follow install instruction from release notes if you're going to test it. The existing repository ovirt-3.4.0-prerelease has been updated for delivering this release candidate and future refreshes until final release.
An oVirt Node iso is already available, unchanged from third beta.
You're welcome to join us testing this release candidate in next week test day [2] scheduled for 2014-03-06!
[1] http://www.ovirt.org/OVirt_3.4.0_release_notes [2] http://www.ovirt.org/OVirt_3.4_Test_Day
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

----- Original Message -----
From: "Darrell Budic" <darrell.budic@zenfire.com>
To: "Sandro Bonazzola" <sbonazzo@redhat.com>
Cc: announce@ovirt.org, "engine-devel" <engine-devel@ovirt.org>, "arch" <arch@ovirt.org>, Users@ovirt.org, "VDSM Project Development" <vdsm-devel@lists.fedorahosted.org>
Sent: Saturday, March 1, 2014 1:56:23 AM
Subject: Re: [vdsm] [Users] [ANN] oVirt 3.4.0 Release Candidate is now available
Started testing this on two self-hosted clusters, with mixed results. There were updates from 3.4.0 beta 3.
On both, got informed the system was going to reboot in 2 minutes while it was still installing yum updates.
On the faster system, the whole update process finished before the 2 minutes were up, the VM restarted, and all appears normal.
On the other, slower cluster, the 2 minutes hit while the yum updates were still being installed, and the system rebooted. It continued rebooting every 3 minutes or so, and the engine console web pages were not available because the engine wasn't starting. It did this at least 3 times before I went ahead and reran engine-setup, which completed successfully. The system stopped restarting and the web interface was available again. A quick perusal of system logs and engine-setup logs didn't reveal what requested the reboot.
That was rather impolite of something to do that without warning :) At least it was recoverable. Seems like scheduling the reboot while the yum updates were still running seems like a poor idea as well.
Can you please post relevant logs?
hosts: /var/log/ovirt-hosted-engine-setup/*, /var/log/ovirt-hosted-engine-ha/*, /var/log/vdsm/*
engine: /var/log/ovirt-engine/setup/*, /var/log/ovirt-engine/*

You can of course open a bug on bugzilla and attach the logs there if you want.

Thanks, and thanks for the report!
--
Didi
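As a convenience, the requested directories can be bundled into one tarball per machine before attaching them to a bug. A small sketch (the collect_logs helper and archive names are assumptions, not oVirt tooling); it skips directories that don't exist on a given machine:

```shell
#!/bin/sh
# collect_logs OUTPUT.tar.gz DIR...
# Tar up whichever of the given directories exist, silently skipping
# the ones that don't (e.g. hosted-engine dirs on a non-HA host).
collect_logs() {
  out="$1"; shift
  existing=""
  for d in "$@"; do
    [ -d "$d" ] && existing="$existing $d"
  done
  # word-splitting of $existing is intentional here
  [ -n "$existing" ] && tar czf "$out" $existing
}

# On a host:
# collect_logs "ovirt-host-logs-$(hostname).tar.gz" \
#   /var/log/ovirt-hosted-engine-setup /var/log/ovirt-hosted-engine-ha /var/log/vdsm
# On the engine VM:
# collect_logs "ovirt-engine-logs-$(hostname).tar.gz" \
#   /var/log/ovirt-engine/setup /var/log/ovirt-engine
```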

Sounds like your hosts were not in maintenance mode while you were upgrading the engine, which explains the 2 min reboot. This should be revealed by the logs.

Regards,
Liviu

On Sun, Mar 2, 2014 at 10:32 PM, Yedidyah Bar David <didi@redhat.com> wrote:
----- Original Message -----
From: "Darrell Budic" <darrell.budic@zenfire.com>
To: "Sandro Bonazzola" <sbonazzo@redhat.com>
Cc: announce@ovirt.org, "engine-devel" <engine-devel@ovirt.org>, "arch" <arch@ovirt.org>, Users@ovirt.org, "VDSM Project Development" <vdsm-devel@lists.fedorahosted.org>
Sent: Saturday, March 1, 2014 1:56:23 AM
Subject: Re: [vdsm] [Users] [ANN] oVirt 3.4.0 Release Candidate is now available
Started testing this on two self-hosted clusters, with mixed results. There were updates from 3.4.0 beta 3.
On both, got informed the system was going to reboot in 2 minutes while it was still installing yum updates.
On the faster system, the whole update process finished before the 2 minutes were up, the VM restarted, and all appears normal.
On the other, slower cluster, the 2 minutes hit while the yum updates were still being installed, and the system rebooted. It continued rebooting every 3 minutes or so, and the engine console web pages were not available because the engine wasn't starting. It did this at least 3 times before I went ahead and reran engine-setup, which completed successfully. The system stopped restarting and the web interface was available again. A quick perusal of system logs and engine-setup logs didn't reveal what requested the reboot.
That was rather impolite of something to do that without warning :) At least it was recoverable. Seems like scheduling the reboot while the yum updates were still running seems like a poor idea as well.
Can you please post relevant logs?
hosts: /var/log/ovirt-hosted-engine-setup/*, /var/log/ovirt-hosted-engine-ha/*, /var/log/vdsm/*
engine: /var/log/ovirt-engine/setup/*, /var/log/ovirt-engine/*
You can of course open a bug on bugzilla and attach the logs there if you want.
Thanks, and thanks for the report!
--
Didi

Whups, yes, that was it:

MainThread::INFO::2014-02-28 17:23:03,546::hosted_engine::1311::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) Shutting down vm using `/usr/sbin/hosted-engine --vm-shutdown`
MainThread::INFO::2014-02-28 17:23:04,500::hosted_engine::1315::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_stop_engine_vm) stdout: Machine shut down

which also explains why I didn't see anything in the engine logs: it was the self-hosted HA triggering the reboot when the engine shut down for the upgrade. And I do remember the note about putting it into global maintenance before upgrading. Now ;)

Don't know if the engine is aware it's on a HA setup; if it is, might be a good thing to check for and maybe enable it itself during the upgrade?

Are there any other special procedures to be aware of in a self-hosted setup? I haven't tried updating the VDSM hosts for these yet, for instance. Seems like I shouldn't enable global maintenance there, so the engine switches hosts properly?

Thanks.

-Darrell

On Mar 2, 2014, at 2:35 PM, Liviu Elama <liviu.elama@gmail.com> wrote:
Sounds like your hosts were not in maintenance mode while you were upgrading the engine, which explains the 2 min reboot. This should be revealed by the logs.

Regards,
Liviu
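The upgrade order the thread converges on can be sketched as below. The `hosted-engine --set-maintenance` modes are the real hosted-engine CLI; the `run` wrapper, the DRY_RUN switch, and the exact yum package glob are assumptions for illustration (by default the sketch only prints what it would do):

```shell
#!/bin/sh
# Dry-run sketch of a self-hosted-engine upgrade that keeps the HA agents
# from restarting the engine VM while engine-setup shuts it down mid-upgrade.
run() {
  if [ "${DRY_RUN:-1}" = 1 ]; then
    echo "would run: $*"   # default: only print the commands
  else
    "$@"
  fi
}

run hosted-engine --set-maintenance --mode=global  # on an HA host, before upgrading
run yum -y update 'ovirt-engine-setup*'            # inside the engine VM
run engine-setup                                   # upgrade the engine
run hosted-engine --set-maintenance --mode=none    # re-enable HA afterwards
```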

On 02/03/2014 10:32, Yedidyah Bar David wrote:
----- Original Message -----
From: "Darrell Budic" <darrell.budic@zenfire.com>
To: "Sandro Bonazzola" <sbonazzo@redhat.com>
Cc: announce@ovirt.org, "engine-devel" <engine-devel@ovirt.org>, "arch" <arch@ovirt.org>, Users@ovirt.org, "VDSM Project Development" <vdsm-devel@lists.fedorahosted.org>
Sent: Saturday, March 1, 2014 1:56:23 AM
Subject: Re: [vdsm] [Users] [ANN] oVirt 3.4.0 Release Candidate is now available
Started testing this on two self-hosted clusters, with mixed results. There were updates from 3.4.0 beta 3.
Did you set the clusters in global maintenance before starting the upgrade?
On both, got informed the system was going to reboot in 2 minutes while it was still installing yum updates.
On the faster system, the whole update process finished before the 2 minutes were up, the VM restarted, and all appears normal.
On the other, slower cluster, the 2 minutes hit while the yum updates were still being installed, and the system rebooted. It continued rebooting every 3 minutes or so, and the engine console web pages were not available because the engine wasn't starting. It did this at least 3 times before I went ahead and reran engine-setup, which completed successfully. The system stopped restarting and the web interface was available again. A quick perusal of system logs and engine-setup logs didn't reveal what requested the reboot.
That was rather impolite of something to do that without warning :) At least it was recoverable. Seems like scheduling the reboot while the yum updates were still running seems like a poor idea as well.
Can you please post relevant logs?
hosts: /var/log/ovirt-hosted-engine-setup/*, /var/log/ovirt-hosted-engine-ha/*, /var/log/vdsm/*
engine: /var/log/ovirt-engine/setup/*, /var/log/ovirt-engine/*
You can of course open a bug on bugzilla and attach the logs there if you want.
Thanks, and thanks for the report!
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
participants (6)
- Brad House
- Darrell Budic
- Itamar Heim
- Liviu Elama
- Sandro Bonazzola
- Yedidyah Bar David