[Users] FC19 upgrade, oVirt 3.3 and node problems

Itamar Heim iheim at redhat.com
Mon Sep 16 21:47:19 UTC 2013


On 08/29/2013 10:44 AM, Alin Dragomir wrote:
> Hello,
>
> I need a little bit of help to get my setup in a stable state... ;( I
> used to have this configuration on three physical machines:
> - engine - oVirt 3.2.1/FC18
> - host1 - FC18
> - host2 - FC18
> - NAS with NFS data domains
>
> It all started with my mistake: upgrading the engine to FC19 without
> checking oVirt compatibility first. Of course, afterwards ovirt-engine
> wouldn't start anymore, but I had my VM's running on the two hosts and I
> thought I'd wait it out until 3.3 is released.
> Today my email server VM crashed and I cannot start it manually (I tried
> variations of the qemu-kvm command line, created tap devices, and tried a
> few other things without success).
> After a while, I bit the bullet, enabled the oVirt 3.3 repo, and ran
> engine-upgrade. This seems to have worked fine except for a missing
> symlink for Java, which I fixed, so now I can access the web admin
> portal. I can see the hosts and the VM's that are running or stopped,
> but I cannot start any VM; the error message is:
>
>     qemu-kvm: -drive file... Permission denied
>
> I checked the path and everything is accessible on the host.
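For a qemu "Permission denied" on an image the host can otherwise read, the usual suspects on an oVirt host are ownership (vdsm:kvm, conventionally uid/gid 36), a parent directory missing search permission, or an SELinux denial that plain `ls` won't show. A rough sketch of the non-SELinux checks — the uid/gid values are an assumption about your host's vdsm/kvm accounts, so verify with `id vdsm` first:

```python
import os
import stat

# Conventional vdsm uid and kvm gid on oVirt hosts -- an assumption;
# check with `id vdsm` and `getent group kvm` on the node.
VDSM_UID = 36
KVM_GID = 36

def image_access_problems(path):
    """List likely reasons qemu running as vdsm/kvm would get EACCES on path."""
    problems = []
    st = os.stat(path)
    if st.st_uid != VDSM_UID:
        problems.append("owner is uid %d, not vdsm (%d)" % (st.st_uid, VDSM_UID))
    if st.st_gid != KVM_GID:
        problems.append("group is gid %d, not kvm (%d)" % (st.st_gid, KVM_GID))
    if not st.st_mode & (stat.S_IRUSR | stat.S_IRGRP):
        problems.append("no read permission for owner or group")
    # Every directory on the way down also needs execute (search) permission;
    # o+x is a rough proxy for "reachable by the vdsm user".
    parent = os.path.dirname(os.path.abspath(path))
    while True:
        if not os.stat(parent).st_mode & stat.S_IXOTH:
            problems.append("parent directory %s lacks o+x" % parent)
        if parent == os.path.dirname(parent):
            break
        parent = os.path.dirname(parent)
    return problems
```

On NFS data domains the uid/gid mapping matters on the server side too (root_squash and friends), and `ausearch -m avc` on the host is the quicker way to confirm an SELinux denial.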
>
> Since I didn't have any other critical VM's on one of the hosts, I made
> possibly my second mistake: I downloaded the oVirt node ISO and
> reinstalled from scratch. At first it wouldn't register due to a UUID
> collision (crappy motherboard manufacturers), but I was able to make it
> report a valid UUID and worked around that.
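For reference, one common workaround for duplicate SMBIOS UUIDs is to give vdsm an explicit host id in `/etc/vdsm/vdsm.id`, which it reads in preference to the hardware UUID. A minimal sketch — the exact lookup order may differ between vdsm versions, and on oVirt Node the file must also be persisted across reboots:

```python
import os
import uuid

def write_host_id(path="/etc/vdsm/vdsm.id"):
    """Write a freshly generated UUID for vdsm to report as the host id."""
    host_id = str(uuid.uuid4())
    with open(path, "w") as f:
        f.write(host_id + "\n")
    os.chmod(path, 0o644)
    return host_id

# On oVirt Node, follow up with `persist /etc/vdsm/vdsm.id` so the file
# survives a reboot of the stateless image.
```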
> When I try to Approve the node, it goes through most of the steps but
> fails with:
>
>     2013-08-28 07:37:26 DEBUG otopi.plugins.otopi.dialog.machine
>     dialog.__logString:215 DIALOG:SEND       2013-08-28 07:37:26 DEBUG
>     otopi.context context._executeMethod:119 Stage closeup METHOD
>     otopi.plugins.ovirt_host_deploy.node.persist.Plugin._closeup
>     2013-08-28 07:37:26 DEBUG otopi.plugins.otopi.dialog.machine
>     dialog.__logString:215 DIALOG:SEND       2013-08-28 07:37:26 DEBUG
>     otopi.context context._executeMethod:133 method exception
>     2013-08-28 07:37:26 DEBUG otopi.plugins.otopi.dialog.machine
>     dialog.__logString:215 DIALOG:SEND       Traceback (most recent call
>     last):
>     2013-08-28 07:37:26 DEBUG otopi.plugins.otopi.dialog.machine
>     dialog.__logString:215 DIALOG:SEND         File
>     "/tmp/ovirt-S4pEN1vbH3/pythonlib/otopi/context.py", line 123, in
>     _executeMethod
>     2013-08-28 07:37:26 DEBUG otopi.plugins.otopi.dialog.machine
>     dialog.__logString:215 DIALOG:SEND           method['method']()
>     2013-08-28 07:37:26 DEBUG otopi.plugins.otopi.dialog.machine
>     dialog.__logString:215 DIALOG:SEND         File
>     "/tmp/ovirt-S4pEN1vbH3/otopi-plugins/ovirt-host-deploy/node/persist.py",
>     line 51, in _closeup
>     2013-08-28 07:37:26 DEBUG otopi.plugins.otopi.dialog.machine
>     dialog.__logString:215 DIALOG:SEND           from ovirtnode import
>     ovirtfunctions
>     2013-08-28 07:37:26 DEBUG otopi.plugins.otopi.dialog.machine
>     dialog.__logString:215 DIALOG:SEND         File
>     "/usr/lib/python2.7/site-packages/ovirtnode/ovirtfunctions.py", line
>     34, in <module>
>     2013-08-28 07:37:26 DEBUG otopi.plugins.otopi.dialog.machine
>     dialog.__logString:215 DIALOG:SEND       ImportError: could not
>     import gobject (could not find _PyGObject_API object)
>
>
> I downgraded the node from 3.1.0-0.999.5.vdsm.fc19 to
> 3.0.0-5.1.6.vdsm.fc19 but ran into the same error.
> I noticed the Python scripts being run during the deployment come from
> ovirt-host-deploy.tar, but I don't know where that comes from. I renamed
> it and it was recreated.
> I checked the Python packages and the node does have pygobject2 v2.28.6
> installed. Importing gobject from Python doesn't report any errors, and I
> even tried running a modified persist.py on the node: it works just fine,
> imports the functions, and persists a file I gave it as a parameter.
>
> At this point, I'm lost; the only thing I can think of would be to
> reinstall Fedora (FC19?) on the bad node and try to re-add it - but my
> guess is I'll see the exact same error. Or maybe enable the nightly
> repo?...
>
> Thank you.
>
> PS. I set up a temporary mail gateway so I don't lose incoming mail, but
> it's getting a lot of spam, and I don't have IMAP access to my email, as
> that server was doing amavisd spam filtering and IMAP besides Postfix. A
> way to start the VM manually on the "good" host would definitely be a
> workable temporary solution.
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

was this resolved?
