We are running oVirt 4.0.1.1-1.el7.centos.
After a frozen migration attempt, we have two VMs that, once shut down, can no longer be started.
The message returned is:
Bad volume specification {'index': '0',
 u'domainID': u'731d95a9-61a7-4c7a-813b-fb1c3dde47ea',
 'reqsize': '0', u'format': u'cow', u'optional': u'false',
 u'address': {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x0000',
              u'type': u'pci', u'slot': u'0x05'},
 u'volumeID': u'cffc70ff-ed72-46ef-a369-4be95de72260',
 'apparentsize': '3221225472',
 u'imageID': u'3fe5a849-bcc2-42d3-93c5-aca4c504515b',
 u'specParams': {},
 u'readonly': u'false', u'iface': u'virtio',
 u'deviceId': u'3fe5a849-bcc2-42d3-93c5-aca4c504515b',
 'truesize': '3221225472',
 u'poolID': u'00000001-0001-0001-0001-0000000001ec',
 u'device': u'disk', u'shared': u'false',
 u'propagateErrors': u'off', u'type': u'disk'}
This is probably caused by a stale pointer in the database that still refers to the image ID from the interrupted migration.
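For reference, this is roughly how we are inspecting the volume chain of the affected disk in the engine database. The images table and its image_guid / image_group_id / parentid / active columns are our reading of the engine schema, so please correct us if we are looking in the wrong place:

  SELECT image_guid, image_group_id, parentid, active
    FROM images
   WHERE image_group_id = '3fe5a849-bcc2-42d3-93c5-aca4c504515b';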
If we query the all_disks view, we can see that for this VM the parentid field is not 00000000-0000-0000-0000-000000000000 like on all the other running VMs, but has a value:
       vm_names       |               parentid
----------------------+--------------------------------------
 working01.company.xx | 00000000-0000-0000-0000-000000000000
 working02.company.xx | 00000000-0000-0000-0000-000000000000
 working03.company.xx | 00000000-0000-0000-0000-000000000000
 working04.company.xx | 00000000-0000-0000-0000-000000000000
 broken001.company.xx | 30533842-2c83-4d0e-95d2-48162dbe23bd   <<<<<<<<<
 working05.company.xx | 00000000-0000-0000-0000-000000000000
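In case it is useful, the listing above came from roughly this query against the engine database (we simply selected the two columns shown from the all_disks view); adding a WHERE clause on parentid quickly lists only the broken disks:

  SELECT vm_names, parentid
    FROM all_disks
   WHERE parentid != '00000000-0000-0000-0000-000000000000';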
How can we recover from this?
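Would it be an acceptable workaround to reset the pointer by hand, along the lines of the statement below (the image_guid is taken from the volumeID in the error above; we have not run this and do not know whether it is safe), or is there a supported way to clean this up?

  UPDATE images
     SET parentid = '00000000-0000-0000-0000-000000000000'
   WHERE image_guid = 'cffc70ff-ed72-46ef-a369-4be95de72260';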
Thanks in advance
Regards,
--
Roberto