On 02/04, Francesco Romani wrote:
----- Original Message -----
> From: "David Caro" <dcaroest(a)redhat.com>
> To: devel(a)ovirt.org
> Sent: Wednesday, February 4, 2015 11:21:39 AM
> Subject: [ovirt-devel] Help with issues with migration
>
> Hi!
>
> Upstream phoenix lab has stabilized, but after the outages we are finding
> some issues probably caused by it.
>
> One of them is that the vm migration is not working. We have two hosts srv05
> and srv06, and we want to migrate vms from 06 to 05, but we find these errors
> on vdsm on 05:
>
> Feb 04 02:56:50 ovirt-srv05 vdsm[5170]: vdsm vm.Vm WARNING
> vmId=`43415276-a7bf-4c86-b0e9-70a5f6d39a40`::Unknown type found, device:
> '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address':
> {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
> Feb 04 02:56:50 ovirt-srv05 vdsm[5170]: vdsm vm.Vm WARNING
> vmId=`43415276-a7bf-4c86-b0e9-70a5f6d39a40`::Unknown type found, device:
> '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address':
> {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
> Feb 04 02:56:50 ovirt-srv05 vdsm[5170]: vdsm vm.Vm WARNING
> vmId=`43415276-a7bf-4c86-b0e9-70a5f6d39a40`::Unknown type found, device:
> '{'device': 'spicevmc', 'alias': 'channel2', 'type': 'channel', 'address':
> {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '3'}}' found
> Feb 04 02:56:50 ovirt-srv05 vdsm[5170]: vdsm vm.Vm ERROR
> vmId=`43415276-a7bf-4c86-b0e9-70a5f6d39a40`::Alias not found for device
> type graphics during migration at destination host
All of these are mostly noise. We have BZs filed basically to silence them,
but they are harmless.
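(For context, a rough illustrative sketch of the kind of device-type dispatch
that produces those "Unknown type found" warnings -- this is not VDSM's actual
code, the set of "known" types below is invented, and the device dicts are just
copied from the journal above:)

import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("vm.Vm")

# Hypothetical set of device types the destination knows how to rebuild.
KNOWN_DEVICE_TYPES = {"disk", "interface", "graphics", "video", "balloon"}

def classify_devices(devices):
    for dev in devices:
        if dev.get("type") not in KNOWN_DEVICE_TYPES:
            # Unmapped types are only logged here, nothing else happens,
            # which is why warnings like these can be harmless noise.
            log.warning("Unknown type found, device: %r found", dev)

# Device dicts copied from the journal above: 'channel' is not in the set.
classify_devices([
    {"device": "unix", "alias": "channel0", "type": "channel",
     "address": {"bus": "0", "controller": "0",
                 "type": "virtio-serial", "port": "1"}},
    {"device": "spicevmc", "alias": "channel2", "type": "channel",
     "address": {"bus": "0", "controller": "0",
                 "type": "virtio-serial", "port": "3"}},
])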
> After that sanlock complains:
> Feb 04 02:56:50 ovirt-srv05 sanlock[1055]: 2015-02-04 02:56:50-0700 1453
> [1055]: cmd 9 target pid 6479 not found

There is a BZ for this as well, but it is likely not the root cause here.

> but it seems that the vm is starting up:
> Feb 04 02:56:50 ovirt-srv05 systemd[1]: Starting Virtual Machine
> qemu-el6-vm03-phx-ovirt-org.
> -- Subject: Unit machine-qemu\x2del6\x2dvm03\x2dphx\x2dovirt\x2dorg.scope
> has begun with start-up
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit machine-qemu\x2del6\x2dvm03\x2dphx\x2dovirt\x2dorg.scope has begun
> starting up.
> Feb 04 02:56:50 ovirt-srv05 systemd-machined[5642]: New machine
> qemu-el6-vm03-phx-ovirt-org.
> -- Subject: A virtual machine or container has been started
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- The virtual machine qemu-el6-vm03-phx-ovirt-org with its leader PID 6479
> has been started and is now ready to use.
> Feb 04 02:56:50 ovirt-srv05 systemd[1]: Started Virtual Machine
> qemu-el6-vm03-phx-ovirt-org.
> -- Subject: Unit machine-qemu\x2del6\x2dvm03\x2dphx\x2dovirt\x2dorg.scope
> has finished start-up
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- Unit machine-qemu\x2del6\x2dvm03\x2dphx\x2dovirt\x2dorg.scope has
> finished starting up.
> --
> -- The start-up result is done.
>
> But it shuts down:
> Feb 04 02:56:51 ovirt-srv05 systemd-machined[5642]: Machine
> qemu-el6-vm03-phx-ovirt-org terminated.
> -- Subject: A virtual machine or container has been terminated
> -- Defined-By: systemd
> -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
> --
> -- The virtual machine qemu-el6-vm03-phx-ovirt-org with its leader PID 6479
> has been shut down.
> Feb 04 02:56:51 ovirt-srv05 vdsm[5170]: vdsm vm.Vm ERROR
> vmId=`43415276-a7bf-4c86-b0e9-70a5f6d39a40`::Failed to start a migration
> destination vm
> Traceback (most recent call last):
>   File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
>     self._completeIncomingMigration()
>   File "/usr/share/vdsm/virt/vm.py", line 4107, in _completeIncomingMigration
>     self._incomingMigrationFinished.isSet(), usedTimeout)
>   File "/usr/share/vdsm/virt/vm.py", line 4160, in _attachLibvirtDomainAfterMigration
>     raise MigrationError(e.get_error_message())
> MigrationError: Domain not found: no domain with matching uuid
> '43415276-a7bf-4c86-b0e9-70a5f6d39a40'
> Feb 04 02:56:51 ovirt-srv05 vdsm[5170]: vdsm root WARNING File:
> /var/lib/libvirt/qemu/channels/43415276-a7bf-4c86-b0e9-70a5f6d39a40.com.redhat.rhevm.vdsm
> already removed
> Feb 04 02:56:51 ovirt-srv05 vdsm[5170]: vdsm root WARNING File:
> /var/lib/libvirt/qemu/channels/43415276-a7bf-4c86-b0e9-70a5f6d39a40.org.qemu.guest_agent.0
> already removed
> Feb 04 02:56:51 ovirt-srv05 vdsm[5170]: vdsm vm.Vm WARNING
> vmId=`43415276-a7bf-4c86-b0e9-70a5f6d39a40`::trying to set state to Down
> when already Down

This just tells us that QEMU failed to run on the destination host. Is there
any more information in the QEMU logs and/or anything related to libvirt?
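For reference, a minimal sketch of how one could confirm on the destination
host that libvirt really has no domain with the UUID from the traceback
(assumes the libvirt python bindings are installed; qemu:///system is the
usual system connection URI):

import libvirt

UUID = "43415276-a7bf-4c86-b0e9-70a5f6d39a40"

conn = libvirt.open("qemu:///system")
try:
    try:
        dom = conn.lookupByUUIDString(UUID)
        # If we get here the domain is actually known to libvirt.
        print("domain exists: %s (id %d)" % (dom.name(), dom.ID()))
    except libvirt.libvirtError:
        # Same condition as the "Domain not found: no domain with
        # matching uuid" error raised in the traceback above.
        print("no domain with matching uuid %s" % UUID)
        for dom in conn.listAllDomains():
            print("libvirt knows: %s %s" % (dom.name(), dom.UUIDString()))
finally:
    conn.close()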
After upgrading all the hosts the migration is working again. I suppose that
there was something locked in the storage after the dirty outage that just
needed a cleanup...
I did not see any extra traces in any libvirt logs at the time, and I can't
reproduce it now :/
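In case it happens again, here is a minimal sketch of how one might check
whether sanlock is still holding lockspaces or resources on a host after a
dirty outage -- it only shells out to the standard "sanlock client status"
query and prints whatever the daemon reports; deciding which leases are
actually stale is left to whoever runs it:

import subprocess

def sanlock_status():
    # Query the sanlock daemon; needs the sanlock package and usually root.
    result = subprocess.run(["sanlock", "client", "status"],
                            capture_output=True, text=True, check=False)
    if result.returncode != 0:
        print("sanlock query failed: %s" % result.stderr.strip())
        return
    print(result.stdout.strip() or "no lockspaces or resources reported")

if __name__ == "__main__":
    sanlock_status()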
Bests,

--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
--
David Caro
Red Hat S.L.
Continuous Integration Engineer - EMEA ENG Virtualization R&D
Tel.: +420 532 294 605
Email: dcaro(a)redhat.com
Web: www.redhat.com
RHT Global #: 82-62605