Hi Yeela,
On Tue, Jan 08, 2013 at 12:39:08PM -0500, Yeela Kaplan wrote:
> Can you tell if the vdsm version installed on your host includes this
> patch?
> (you can check under /usr/share/vdsm/clientIF.py).
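
For what it's worth, a quicker check than a full diff might be to grep the
installed file for the line the patch adds (just a sketch, assuming the fix
kept exactly this wording):

  # grep -n 'raise vm.VolumeError(drive)' /usr/share/vdsm/clientIF.py

If that prints nothing, the added else branch is not in the installed file.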
Well, I'm not sure whether this patch is included in my version, but according
to the output of diff it seems that it is actually NOT included:
--- clientIF_new-617e328d546570a94e4357b3802a062e6a7610cb.py	2012-08-08 14:52:28.000000000 +0200
+++ /usr/share/vdsm/clientIF.py	2012-10-04 22:46:42.000000000 +0200
[...skipping other differences...]
@@ -289,15 +255,11 @@
                 if drive['device'] == 'cdrom':
                     volPath = supervdsm.getProxy().mkIsoFs(vmId, files)
                 elif drive['device'] == 'floppy':
-                    volPath = \
-                            supervdsm.getProxy().mkFloppyFs(vmId, files)
+                    volPath = supervdsm.getProxy().mkFloppyFs(vmId, files)
-            elif "path" in drive:
+            elif drive.has_key("path"):
                 volPath = drive['path']
-            else:
-                raise vm.VolumeError(drive)
-
         # For BC sake: None as argument
         elif not drive:
             volPath = drive
Apparently the "raise vm.VolumeError(drive)" part of the fix is missing, even
though I'm running a newer version of vdsm. (As far as I can tell, without
that else branch a drive spec that matches none of the cases simply never sets
volPath, so the failure shows up later in a less obvious way.) According to
the bug report at
https://bugzilla.redhat.com/show_bug.cgi?id=843387
the fix should be in vdsm-4.9.6-29.0 (RHEL6), while I'm running
vdsm-4.10.0-10.fc17.x86_64:
# rpm -q --whatprovides /usr/share/vdsm/clientIF.py
vdsm-4.10.0-10.fc17.x86_64
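
A second way to check whether a particular build carries the fix would be to
search the package changelog for the bug number (assuming the packagers
referenced bug 843387 there, which is common practice but not guaranteed):

  # rpm -q --changelog vdsm | grep -i 843387

An empty result would at least be consistent with the fix not having been
backported to this build.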
I must admit that this is oVirt on FC17 and not RHEV on RHEL, so this may
explain the different versions of vdsm.
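
Since the filename in the diff above carries a commit hash, yet another option
would be to ask the upstream repo which release tags contain that commit (a
sketch; I'm assuming the hash is the upstream commit ID and that
git://gerrit.ovirt.org/vdsm is the anonymous clone URL):

  # git clone git://gerrit.ovirt.org/vdsm && cd vdsm
  # git tag --contains 617e328d546570a94e4357b3802a062e6a7610cb

If no v4.10.x tag shows up, the Fedora 4.10.0 build most likely predates the
fix.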
> If it's in there please send the full logs (engine+vdsm) and the bug might
> need to be reopened, otherwise you can just upgrade vdsm and hopefully it
> would solve the problem.
I've attached the full logs (both engine and vdsm). They contain all entries
from activating the oVirt node until the attempt to start the VM.
Thanks
- Frank