
----- Original Message -----
From: "Karli Sjöberg" <Karli.Sjoberg@slu.se> To: didi@redhat.com Cc: iheim@redhat.com, users@ovirt.org Sent: Wednesday, February 12, 2014 9:56:52 AM Subject: Re: [Users] [ovirt-test-day-2] Testing all-in-one feature on f19
On Wed, 2014-02-12 at 02:51 -0500, Yedidyah Bar David wrote:
----- Original Message -----
From: "Karli Sjöberg" <Karli.Sjoberg@slu.se> To: iheim@redhat.com Cc: users@ovirt.org Sent: Wednesday, February 12, 2014 9:37:14 AM Subject: Re: [Users] [ovirt-test-day-2] Testing all-in-one feature on f19
On Tue, 2014-02-11 at 18:40 +0200, Itamar Heim wrote:
On 02/11/2014 06:19 PM, Alexander Wels wrote:
Same issue for me. I did a minimal Fedora 19 install and added the appropriate repositories. The setup completed almost successfully, but the host didn't come up, failing with a vdsm compatibility error message.
The vdsm on the host is: vdsm-4.13.3-3.fc19.x86_64
I tried the same 3.3 dc/cluster and the host came up immediately.
Alexander
On Tuesday, February 11, 2014 11:02:37 AM Moti Asayag wrote:
Hi,
In the 3.4 ovirt-test-day-2 I've tested the all-in-one feature. I installed the all-in-one setup on a vm.
The installation ended almost successfully, except for the vdsm service: the host didn't become operational due to lack of support for clusterLevel >= 3.4:
Feb 11 15:51:52 localhost vdsm root ERROR VIR_MIGRATE_ABORT_ON_ERROR not found in libvirt, support for clusterLevel >= 3.4 is disabled. For Fedora 19 users, please consider upgrading libvirt from the virt-preview repository
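(As a quick sanity check -- this is just a sketch, and the virt-preview repo URL below is from memory, so verify it before use -- you can test whether the installed libvirt bindings expose the flag vdsm looks for, and pull a newer libvirt if not:)

    # check whether the python libvirt bindings expose the flag vdsm needs
    python -c 'import libvirt; print(hasattr(libvirt, "VIR_MIGRATE_ABORT_ON_ERROR"))'

    # if that prints False, enable virt-preview and update libvirt
    # (repo URL from memory -- verify before use)
    wget -O /etc/yum.repos.d/fedora-virt-preview.repo \
        http://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo
    yum update 'libvirt*' && systemctl restart libvirtd vdsmd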
Once I created a new 3.3 DC and configured a local storage, the local host became operational and I was able to create a VM and run it.
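(For reference, a minimal sketch of creating such a 3.3 DC through the REST API; the engine URL, credentials and name below are placeholders, and I'm going from memory on the exact XML, so treat it as approximate:)

    # create a local-storage 3.3 data center via the REST API
    # (URL, credentials and name are placeholders)
    curl -k -u admin@internal:password \
        -H 'Content-Type: application/xml' \
        -d '<data_center>
              <name>local33</name>
              <local>true</local>
              <version major="3" minor="3"/>
            </data_center>' \
        https://engine.example.com/api/datacenters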
Thanks,
Moti
no one tested on .el6?
I was stuck on watch duty for the first half of the day, so I was short on time and had to continue the rest of the testing today.
It might be worth noting on the wiki page[1] that if you're a tester (like we are) who changes the contents of the '(el6|fedora)'-ovirt.repo file (e.g. by enabling "ovirt-3.4.0-prerelease"), then when you later update the ovirt-release-'(el6|fedora)' rpm, the new repo file gets saved as an ".rpmnew" instead. It was just luck that we spotted that in yum and replaced our "el6-ovirt.repo" with the "el6-ovirt.repo.rpmnew" before continuing with the rest of the instructions.
Perhaps something simple like: "Note: When done with your testing, please remember to change 'enabled' back to '0' to avoid having duplicate repo files"
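(For other testers, a rough sketch of the cleanup; yum-config-manager comes from yum-utils, and the file names are the el6 ones mentioned above:)

    # spot repo files that rpm refused to overwrite
    find /etc/yum.repos.d -name '*.rpmnew'

    # replace the locally-modified repo file with the packaged one
    mv /etc/yum.repos.d/el6-ovirt.repo.rpmnew /etc/yum.repos.d/el6-ovirt.repo

    # when done testing, disable the prerelease repo again
    yum-config-manager --disable ovirt-3.4.0-prerelease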
As for the upgrade, it was actually done first from 3.3.3-RC to 3.3.3-GA, and then up to 3.4.0-beta2, both of which went through without major issues. A minor issue going up to 3.4.0 was that "engine-setup" still changed the access rights of the predefined ISO domain in "/etc/exports" to "None" (an empty Python variable?),
Indeed. It's mentioned in "known issues". Will hopefully be fixed in a few days.
Ah, cool.
but other than that, it went by smoothly.
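(Re the /etc/exports issue: for anyone checking their own file, a healthy entry for the predefined ISO domain looks roughly like the first line below, while after the bug "None" ends up where the access spec should be; the path and ACL here are from memory, so treat them as illustrative:)

    # expected entry for the predefined ISO domain (path/ACL from memory)
    /var/lib/exports/iso    0.0.0.0/0.0.0.0(rw)

    # roughly what the engine-setup bug leaves behind
    /var/lib/exports/iso    None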
Tested making a live snapshot on a VM; it failed, big surprise. Logs are attached.
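(In case anyone wants to reproduce: a live snapshot can be requested through the REST API along these lines; the engine URL, credentials and VM id below are placeholders:)

    # request a live snapshot of a running VM via the REST API
    # (URL, credentials and VM_ID are placeholders)
    curl -k -u admin@internal:password \
        -H 'Content-Type: application/xml' \
        -d '<snapshot><description>test-day live snapshot</description></snapshot>' \
        https://engine.example.com/api/vms/VM_ID/snapshots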
Will continue testing the new power-saving policy as soon as we can manage to find another Host to incorporate into the cluster.
Oh, and we tried using the "ovirt-log-collector" to compile the logs, but it failed when trying to gather hypervisor data, just after we typed in the password for "admin@internal". Even when executing with the option "--no-hypervisors", it still tried to do that?!
I think it should work. Can you send logs please?
Does the log-collector log its own logging? :) Or what logs are you referring to?
Sorry for being unclear. Output of the command, preferably with --verbose (--log-file does write to a log, btw...). Thanks.
--
Didi
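(Something along these lines should capture everything useful; the log file path is just an example, while the flags are the ones already mentioned above:)

    # re-run the collector verbosely, with its own log file,
    # skipping hypervisor data collection (log path is just an example)
    ovirt-log-collector --no-hypervisors --verbose \
        --log-file=/tmp/ovirt-log-collector.log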