[ovirt-users] Bad volume specification after hung migration
Michal Skrivanek
michal.skrivanek at redhat.com
Thu Oct 26 10:41:19 UTC 2017
> On 26 Oct 2017, at 12:32, Roberto Nunin <robnunin at gmail.com> wrote:
>
> Hi Michal
>
> By frozen I mean the action of putting a host into maintenance while some VMs were running on it.
> This action still hadn't completed after more than one hour.
ok, and was the problem that this last VM did not finish the migration? Was it migrating at all? If yes, what was the progress in the UI, and were there any failures? There are various timeouts which should have been triggered, so if they were not, it would indeed point to some internal issue. It would be great if you could attach the source and destination vdsm.log.
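To find the relevant entries quickly, something along these lines should work on both the source and destination hosts (just a sketch; /var/log/vdsm/vdsm.log is the default log location, and the pattern is only a starting point):

  grep -i migration /var/log/vdsm/vdsm.log | tail -n 100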
> Thinking that shutting down the VM could help, I did it. Looking at the results, it didn't.
What was the result? Did it fail to shut down? Did you use Power Off to force immediate shutdown? If it was migrating, did you try to cancel the migration first?
>
> Yes, I've restarted the ovirt-engine service; I've still not restarted the hosted-engine VM.
well, it’s not a universal “fix” for everything; sometimes it does more harm than good. Logs would be helpful.
> Hosts are still not restarted. Do you think that could help?
hard to say. Either way, please salvage the logs first.
>
> Obviously we will migrate: these activities are enabling us to have redundancy at the storage level, and then we will migrate to 4.1.x.
great:)
Thanks,
michal
>
> Thanks
>
> 2017-10-26 12:26 GMT+02:00 Michal Skrivanek <michal.skrivanek at redhat.com>:
>
>> On 26 Oct 2017, at 10:20, Roberto Nunin <robnunin at gmail.com> wrote:
>>
>> We are running 4.0.1.1-1.el7.centos
>
> Hi,
> any reason not to upgrade to 4.1?
>
>>
>> After a frozen migration attempt, we have two VMs that, after shutdown, can no longer be started up again.
>
> what do you mean by frozen? Are you talking about “VM live migration” or “live storage migration”?
> How exactly did you resolve that situation? Did you only shut down those VMs? No other troubleshooting steps, e.g. restarting the engine, hosts, things like that?
>
> Thanks,
> michal
>>
>> The message returned is:
>>
>> Bad volume specification {'index': '0', u'domainID': u'731d95a9-61a7-4c7a-813b-fb1c3dde47ea', 'reqsize': '0', u'format': u'cow', u'optional': u'false', u'address': {u'function': u'0x0', u'bus': u'0x00', u'domain': u'0x0000', u'type': u'pci', u'slot': u'0x05'}, u'volumeID': u'cffc70ff-ed72-46ef-a369-4be95de72260', 'apparentsize': '3221225472', u'imageID': u'3fe5a849-bcc2-42d3-93c5-aca4c504515b', u'specParams': {}, u'readonly': u'false', u'iface': u'virtio', u'deviceId': u'3fe5a849-bcc2-42d3-93c5-aca4c504515b', 'truesize': '3221225472', u'poolID': u'00000001-0001-0001-0001-0000000001ec', u'device': u'disk', u'shared': u'false', u'propagateErrors': u'off', u'type': u'disk'}
>>
>> Probably this is caused by a stale pointer in the database that still refers to the migration image ID.
>>
>> If we search within the all_disks view, we find that the parentid field isn't 00000000-0000-0000-0000-000000000000 like for all the other running VMs, but has a value:
>>
>> vm_names | parentid
>> ----------------------+--------------------------------------
>> working01.company.xx | 00000000-0000-0000-0000-000000000000
>> working02.company.xx | 00000000-0000-0000-0000-000000000000
>> working03.company.xx | 00000000-0000-0000-0000-000000000000
>> working04.company.xx | 00000000-0000-0000-0000-000000000000
>> broken001.company.xx | 30533842-2c83-4d0e-95d2-48162dbe23bd <<<<<<<<<
>> working05.company.xx | 00000000-0000-0000-0000-000000000000
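>>
>> For reference, that output comes from a query along these lines, run on the engine host (a sketch, assuming the default database name 'engine'):
>>
>>   sudo -u postgres psql -d engine -c "SELECT vm_names, parentid FROM all_disks ORDER BY vm_names;"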
>>
>>
>> How can we recover from this?
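>>
>> If the stale parentid really is the cause, we assume the fix would be to reset it to the null GUID on the affected image, along these lines (only a sketch: it assumes the parentid column sits on the images table behind the view, and we would take a full engine backup first and not run anything without confirmation):
>>
>>   engine-backup --mode=backup --file=engine-db.backup --log=engine-backup.log
>>   sudo -u postgres psql -d engine -c "UPDATE images SET parentid = '00000000-0000-0000-0000-000000000000' WHERE parentid = '30533842-2c83-4d0e-95d2-48162dbe23bd';"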
>>
>> Thanks in advance
>> Regards,
>>
>> --
>> Roberto
>>
>>
>>
>
>
>
>
> --
> Roberto
>
>
>