Hello!
From my experience updating my setup - 3 nodes + 1 engine ESXi VM, all
F19, oVirt 3.2 -> 3.3 -> 3.4 and now 3.4.1, with all 3 nodes also
acting as gluster replicated nodes - you may also run into gluster
issues when you apply gluster updates: client/server compatibility
between different gluster versions, split-brain, etc.
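Before and after each gluster update it is worth checking peer and heal
state on every node - something like the following (the volume name
'data' is just an example, substitute your own):

    gluster --version
    gluster peer status
    gluster volume status data
    gluster volume heal data info
    gluster volume heal data info split-brain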
On Mon, May 12, 2014 at 9:47 AM, Itamar Heim <iheim(a)redhat.com> wrote:
On 05/12/2014 12:43 AM, Paul Heinlein wrote:
> On Sun, 11 May 2014, Itamar Heim wrote:
>
>>> I just inherited an oVirt cluster at work that's running 3.2 on
>>> Fedora 18 and would dearly love some direction about updating
>>> things without a system-wide downtime.
>>>
>>
>> 1. no downtime should happen, as you can upgrade hosts by moving
>> them to maintenance, which will live migrate VMs running on them.
>>
>> 2. fedora 18 is EOL, so 'yum update' isn't going to help here.
>>
>> 3. re-installing the host should refresh vdsm and its dependent
>> packages if the configured repos have newer versions. when the host
>> is in maintenance, you can simply run 'yum update' on the host, then
>> re-activate it (and of course reboot if the kernel got updated, etc.)
>>
>
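to be concrete, the per-host flow from item 3 above is roughly the
following sketch (the maintenance/activate steps are done in the
webadmin UI, not on the command line):

    # webadmin UI: move the host to maintenance
    # (running VMs are live migrated off it)
    yum update
    # reboot if a new kernel etc. was pulled in, then
    # webadmin UI: activate the host again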
> I've followed those steps, and I can get VMs migrated *to* the updated
> nodes. I have two worries:
>
> 1. VMs migrated to the updated nodes cannot be migrated back to the
> old nodes. So once I start, it's all or nothing. Is that to be
> expected?
>
on fedora hosts this can happen, since there is no live migration
compatibility between fedora versions.
on .el6 hosts this shouldn't happen[1]
> 2. Once all the nodes are backed up, is there a fairly sure-fire way
> to update the master engine? Is that documented somewhere?
>
a. you should be able to upgrade the engine and the hosts independently
of each other.
b. the setup (upgrade) script takes a db backup just in case. if you
happen to run the engine in a VM, snapshotting/backing it up doesn't
hurt either.
c. if upgrading to a new minor version, remember to raise the cluster
and DC compatibility levels in the upgraded engine to benefit from new
features.
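for example, on the engine machine the 3.3+ flow is roughly (file names
here are just examples):

    engine-backup --mode=backup --file=engine-backup.tar.bz2 \
        --log=engine-backup.log
    yum update "ovirt-engine-setup*"
    engine-setup

(iirc engine-backup exists since 3.3; when coming from 3.2 the older
engine-upgrade script is used instead.)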
> Thanks!
>
>
[1] there is a known caveat if you have selinux disabled on some machines
and enabled on others.
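a quick way to check is to run, on each host:

    getenforce

and make sure they all report the same mode
(Enforcing/Permissive/Disabled).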