[Users] local variable 'volPath' referenced before assignment

Hi,

I've updated my oVirt engine and node from version 3.1.0-2 to the more recent 3.1.0-4. As far as I can tell from the update log, the engine update went fine by following these instructions: https://www.rvanderlinden.net/wordpress/ovirt/engine-installation/engine-upg...

Now I'm unable to start a VM through the Admin Portal. It fails with the following error message:

VM test_srv is down. Exit message: local variable 'volPath' referenced before assignment.

What's wrong? It seems this message is related to this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=843387 But apparently there is no solution.

What is the fix or workaround to get VMs working again?

Thanks - Frank

Hi Frank,

It looks like the same issue as in the bug. The bug also references a Change-Id for a fix, I8ad50c3a3485812f57800bbe6b7318a90fe5b962, and you can also access this patch at the following link: http://gerrit.ovirt.org/#/c/6794/2

Can you tell if the vdsm version installed on your host includes this patch? (you can check under /usr/share/vdsm/clientIF.py). If it's in there please send the full logs (engine+vdsm) and the bug might need to be reopened, otherwise you can just upgrade vdsm and hopefully it would solve the problem.

Regards,
Yeela
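A minimal way to do that check, for anyone following along, is to look for the line the patch adds, raise vm.VolumeError(drive), in the installed file. A quick sketch, assuming the /usr/share/vdsm/clientIF.py path mentioned above (a plain grep for the same string works just as well):

# Quick check: does the installed clientIF.py contain the line added by the fix?
# The path is taken from the discussion above; adjust it if vdsm lives elsewhere.
CLIENT_IF = "/usr/share/vdsm/clientIF.py"

with open(CLIENT_IF) as f:
    source = f.read()

if "raise vm.VolumeError(drive)" in source:
    print("fix appears to be present")
else:
    print("fix appears to be missing - upgrading vdsm should pull it in")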
From: "Frank Wall" <fwall@inotronic.de> To: users@ovirt.org Sent: Tuesday, January 8, 2013 7:02:50 PM Subject: [Users] local variable 'volPath' referenced before assignment
> Hi,
>
> I've updated my oVirt engine and node from version 3.1.0-2 to the more recent 3.1.0-4. As far as I can tell from the update log, the engine update went fine by following these instructions: https://www.rvanderlinden.net/wordpress/ovirt/engine-installation/engine-upg...
>
> Now I'm unable to start a VM through the Admin Portal. It fails with the following error message:
>
> VM test_srv is down. Exit message: local variable 'volPath' referenced before assignment.
>
> What's wrong? It seems this message is related to this bug report: https://bugzilla.redhat.com/show_bug.cgi?id=843387 But apparently there is no solution.
>
> What is the fix or workaround to get VMs working again?
> Thanks - Frank

Hi Yeela,

On Tue, Jan 08, 2013 at 12:39:08PM -0500, Yeela Kaplan wrote:
> Can you tell if the vdsm version installed on your host includes this patch? (you can check under /usr/share/vdsm/clientIF.py).
well, I'm not sure if this patch is included in my version, but according to the output of diff it seems that it is actually NOT included:

--- clientIF_new-617e328d546570a94e4357b3802a062e6a7610cb.py 2012-08-08 14:52:28.000000000 +0200
+++ /usr/share/vdsm/clientIF.py 2012-10-04 22:46:42.000000000 +0200
[...skipping other differences...]
@@ -289,15 +255,11 @@
                 if drive['device'] == 'cdrom':
                     volPath = supervdsm.getProxy().mkIsoFs(vmId, files)
                 elif drive['device'] == 'floppy':
-                    volPath = \
-                        supervdsm.getProxy().mkFloppyFs(vmId, files)
+                    volPath = supervdsm.getProxy().mkFloppyFs(vmId, files)

-            elif "path" in drive:
+            elif drive.has_key("path"):
                 volPath = drive['path']

-            else:
-                raise vm.VolumeError(drive)
-
         # For BC sake: None as argument
         elif not drive:
             volPath = drive

Apparently the part from the fix with "raise vm.VolumeError(drive)" is missing, although I'm running a newer version of vdsm. According to the bug report at https://bugzilla.redhat.com/show_bug.cgi?id=843387 the fix should be in vdsm-4.9.6-29.0 (RHEL6), while I'm running vdsm-4.10.0-10.fc17.x86_64:

# rpm -q --whatprovides /usr/share/vdsm/clientIF.py
vdsm-4.10.0-10.fc17.x86_64

I must admit that this is oVirt on FC17 and not RHEV on RHEL, so this may explain the different versions of vdsm.
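For context, the hunk above also explains the exit message itself: volPath is only assigned inside those if/elif branches, so a drive definition that matches none of them leaves the name unbound, and reading it raises Python's "local variable 'volPath' referenced before assignment" error. A tiny, simplified illustration of the mechanism; this is not the actual vdsm code, and the paths and example drive dicts are made up:

# Simplified sketch: volPath is only assigned in the branches, so input that
# matches none of them makes the final read fail with UnboundLocalError.
def prepare_volume_path(drive):
    if drive.get('device') == 'cdrom':
        volPath = '/tmp/example-payload.iso'   # placeholder for mkIsoFs()
    elif drive.get('device') == 'floppy':
        volPath = '/tmp/example-payload.vfd'   # placeholder for mkFloppyFs()
    elif 'path' in drive:
        volPath = drive['path']
    # unpatched code: no final "else", so nothing guarantees an assignment
    return volPath

print(prepare_volume_path({'path': '/dev/vg/lv'}))   # prints /dev/vg/lv

try:
    prepare_volume_path({'device': 'disk'})           # matches no branch
except UnboundLocalError as exc:
    print(exc)   # the volPath error from the VM's exit message
                 # (exact wording depends on the Python version)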
> If it's in there please send the full logs (engine+vdsm) and the bug might need to be reopened, otherwise you can just upgrade vdsm and hopefully it would solve the problem.
I've attached the full logs. They contain all log entries from activating the oVirt node until trying to start the VM (both engine+vdsm).

Thanks - Frank

On Wed, Jan 09, 2013 at 05:54:22PM +0100, Frank Wall wrote:
> I've attached the full logs. They contain all log entries from activating the oVirt node until trying to start the VM (both engine+vdsm).
The vdsm.log was missing/empty, so please find the required logs attached.

Thanks - Frank

----- Original Message -----
From: "Frank Wall" <fwall@inotronic.de> To: users@ovirt.org Sent: Wednesday, January 9, 2013 6:54:22 PM Subject: Re: [Users] local variable 'volPath' referenced before assignment
> Hi Yeela,
>
> On Tue, Jan 08, 2013 at 12:39:08PM -0500, Yeela Kaplan wrote:
> > Can you tell if the vdsm version installed on your host includes this patch? (you can check under /usr/share/vdsm/clientIF.py).
>
> well, I'm not sure if this patch is included in my version, but according to the output of diff it seems that it is actually NOT included:
>
> --- clientIF_new-617e328d546570a94e4357b3802a062e6a7610cb.py 2012-08-08 14:52:28.000000000 +0200
> +++ /usr/share/vdsm/clientIF.py 2012-10-04 22:46:42.000000000 +0200
> [...skipping other differences...]
> @@ -289,15 +255,11 @@
>                  if drive['device'] == 'cdrom':
>                      volPath = supervdsm.getProxy().mkIsoFs(vmId, files)
>                  elif drive['device'] == 'floppy':
> -                    volPath = \
> -                        supervdsm.getProxy().mkFloppyFs(vmId, files)
> +                    volPath = supervdsm.getProxy().mkFloppyFs(vmId, files)
>
> -            elif "path" in drive:
> +            elif drive.has_key("path"):
>                  volPath = drive['path']
>
> -            else:
> -                raise vm.VolumeError(drive)
> -
Frank, it looks like you don't have the patch inside your version of vdsm. Please add it and see if it solves the problem.
>          # For BC sake: None as argument
>          elif not drive:
>              volPath = drive
>
> Apparently the part from the fix with "raise vm.VolumeError(drive)" is missing, although I'm running a newer version of vdsm. According to the bug report at https://bugzilla.redhat.com/show_bug.cgi?id=843387 the fix should be in vdsm-4.9.6-29.0 (RHEL6), while I'm running vdsm-4.10.0-10.fc17.x86_64:
>
> # rpm -q --whatprovides /usr/share/vdsm/clientIF.py
> vdsm-4.10.0-10.fc17.x86_64
>
> I must admit that this is oVirt on FC17 and not RHEV on RHEL, so this may explain the different versions of vdsm.
>
> > If it's in there please send the full logs (engine+vdsm) and the bug might need to be reopened, otherwise you can just upgrade vdsm and hopefully it would solve the problem.
>
> I've attached the full logs. They contain all log entries from activating the oVirt node until trying to start the VM (both engine+vdsm).
>
> Thanks - Frank
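For reference, the fix Yeela points to above (http://gerrit.ovirt.org/#/c/6794/2, quoted as a diff earlier in this thread) closes that gap by adding a final else that raises vm.VolumeError(drive), so an unhandled drive definition produces a meaningful error instead of the bare "referenced before assignment". A rough sketch of the patched shape, simplified from the quoted diff with placeholders rather than the real vdsm internals:

# Same sketch as before, plus the "else: raise" branch the patch introduces.
class VolumeError(Exception):
    """Stand-in for vdsm's vm.VolumeError."""

def prepare_volume_path(drive):
    if drive.get('device') == 'cdrom':
        volPath = '/tmp/example-payload.iso'   # placeholder for mkIsoFs()
    elif drive.get('device') == 'floppy':
        volPath = '/tmp/example-payload.vfd'   # placeholder for mkFloppyFs()
    elif 'path' in drive:
        volPath = drive['path']
    else:
        raise VolumeError(drive)               # the line the fix adds
    return volPath

try:
    prepare_volume_path({'device': 'disk'})
except VolumeError as exc:
    print("rejected drive:", exc)   # a clear error instead of UnboundLocalError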
participants (2)
- Frank Wall
- Yeela Kaplan