Hello everyone!
We have been working on our testing oVirt cluster again today, for the
first time in a few weeks, and all of a sudden a new problem has cropped
up. VMs that I created weeks ago and had working properly no longer
start. When we try to start one of them, we get this error in the
engine console:
VM CentOS1 is down with error. Exit message: Bad volume specification
{'index': 0, 'iface': 'virtio', 'type': 'disk', 'format': 'raw',
'bootOrder': '1', 'volumeID': 'a737621e-6e66-4cd9-9014-67f7aaa184fb',
'apparentsize': '53687091200', 'imageID': '702440a9-cd53-4300-8369-28123e8a095e',
'specParams': {}, 'readonly': 'false', 'domainID': 'fa2f828c-f98a-4a17-99fb-1ec1f46d018c',
'reqsize': '0', 'deviceId': '702440a9-cd53-4300-8369-28123e8a095e',
'truesize': '53687091200', 'poolID': 'a0781e2b-6242-4043-86c2-cd6694688ed2',
'device': 'disk', 'shared': 'false', 'propagateErrors': 'off', 'optional': 'false'}.
Looking at the VDSM log files, I think I've found what's actually
triggering this, but I honestly do not know how to decipher it. Here's
the message:
Thread-418::ERROR::2015-01-09 15:59:57,874::task::863::Storage.TaskManager.Task::(_setError)
Task=`11a740b7-4391-47ab-8575-919bd1e0c3fb`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 870, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3242, in prepareImage
    leafInfo = dom.produceVolume(imgUUID, leafUUID).getVmVolumeInfo()
  File "/usr/share/vdsm/storage/glusterVolume.py", line 35, in getVmVolumeInfo
    volTrans = VOLUME_TRANS_MAP[volInfo[volname]['transportType'][0]]
KeyError: u'_gf-os'
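If I'm reading that right, the lookup key '_gf-os' looks like it was derived
from the mount directory name (with '/' flattened to '_') rather than from the
actual gluster volume name 'gf-os', so the dict returned by the gluster
volume-info query has no such key. That is only my guess at the mechanism; here
is a minimal sketch of what I mean (the dict contents and the transport map are
my assumptions, not copied from the vdsm source; only the failing line comes
from the traceback):

# Minimal sketch of how I think the KeyError happens (my assumption,
# not the actual vdsm code).
VOLUME_TRANS_MAP = {'TCP': 'tcp', 'RDMA': 'rdma'}  # assumed stand-in map

# What I'd expect a gluster volume-info query to return for my setup,
# keyed by the real volume name:
volInfo = {'gf-os': {'transportType': ['TCP']}}

# Name apparently derived from the mount point
# /rhev/data-center/mnt/glusterSD/gf-os01-ib:_gf-os
volname = '_gf-os'

# This is the line from the traceback; with the mismatched key it raises
# KeyError: '_gf-os', just like in the VDSM log.
volTrans = VOLUME_TRANS_MAP[volInfo[volname]['transportType'][0]]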
This is oVirt 3.5, with a 2-node gluster cluster as the storage domain (no
oVirt stuff running there) and 5 virtualization nodes, all machines running
CentOS 6.6. We also have the patched RPMs that *should* enable libgfapi
access to gluster, but I can't confirm those are working properly. The
gluster filesystem is mounted on the virtualization node:
gf-os01-ib:/gf-os on /rhev/data-center/mnt/glusterSD/gf-os01-ib:_gf-os type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
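If it would help, I can also pull the transport type straight from gluster and
compare it with what vdsm is trying to look up. This is roughly the little
helper I'd run on one of the gluster nodes (my own script, not part of
vdsm/oVirt; it just wraps 'gluster volume info'):

#!/usr/bin/env python
# Quick helper (mine, not vdsm's) to print the transport type gluster
# reports for the volume, for comparison with the KeyError above.
import subprocess

VOLUME = 'gf-os'  # actual gluster volume name, per the mount source gf-os01-ib:/gf-os

# 'gluster volume info <vol>' prints a 'Transport-type:' line among its output
proc = subprocess.Popen(['gluster', 'volume', 'info', VOLUME],
                        stdout=subprocess.PIPE)
out, _ = proc.communicate()
for line in out.decode('utf-8').splitlines():
    if line.strip().startswith('Transport-type'):
        print(line.strip())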
Anyone got any ideas? More logs available upon request.