[Users] oVirt 3.1 - VM Migration Issue

Roy Golan rgolan at redhat.com
Sun Jan 6 16:54:33 EST 2013


On 01/03/2013 05:07 PM, Tom Brown wrote:
>
>> Interesting. Please search for the migrationCreate command on the destination host, and look for ERROR entries after it. What do you see?
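>>
>> e.g. something like this on the destination host (just a sketch; assumes the default vdsm log at /var/log/vdsm/vdsm.log):
>>
>>     grep -n 'migrationCreate' /var/log/vdsm/vdsm.log
>>     grep -n -A 10 'ERROR' /var/log/vdsm/vdsm.log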
>>
>> ----- Original Message -----
>>> From: "Tom Brown" <tom at ng23.net>
>>> To: users at ovirt.org
>>> Sent: Thursday, January 3, 2013 4:12:05 PM
>>> Subject: [Users] oVirt 3.1 - VM Migration Issue
>>>
>>>
>>> Hi
>>>
>>> I seem to have an issue with a single VM and migration; other VMs
>>> can migrate OK. When migrating from the GUI it appears to just hang,
>>> but in engine.log I see the following:
>>>
>>> 2013-01-03 14:03:10,359 INFO  [org.ovirt.engine.core.bll.VdsSelector]
>>> (ajp--0.0.0.0-8009-59) Checking for a specific VDS only -
>>> id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> name:ovirt-node.domain-name, host_name(ip):10.192.42.165
>>> 2013-01-03 14:03:10,411 INFO
>>> [org.ovirt.engine.core.bll.MigrateVmToServerCommand]
>>> (pool-3-thread-48) [4d32917d] Running command:
>>> MigrateVmToServerCommand internal: false. Entities affected :  ID:
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 Type: VM
>>> 2013-01-03 14:03:10,413 INFO  [org.ovirt.engine.core.bll.VdsSelector]
>>> (pool-3-thread-48) [4d32917d] Checking for a specific VDS only -
>>> id:a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> name:ovirt-node.domain-name, host_name(ip):10.192.42.165
>>> 2013-01-03 14:03:11,028 INFO
>>> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>> (pool-3-thread-48) [4d32917d] START, MigrateVDSCommand(vdsId =
>>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
>>> dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
>>> 5011789b
>>> 2013-01-03 14:03:11,030 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (pool-3-thread-48) [4d32917d] VdsBroker::migrate::Entered
>>> (vm_guid=9dc63ce4-0f76-4963-adfe-6f8eb1a44806,
>>> srcHost=10.192.42.196, dstHost=10.192.42.165:54321,  method=online
>>> 2013-01-03 14:03:11,031 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (pool-3-thread-48) [4d32917d] START, MigrateBrokerVDSCommand(vdsId =
>>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806, srcHost=10.192.42.196,
>>> dstVdsId=a2d84a1e-3e18-11e2-8851-3cd92b4c8e89,
>>> dstHost=10.192.42.165:54321, migrationMethod=ONLINE), log id:
>>> 7cd53864
>>> 2013-01-03 14:03:11,041 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
>>> (pool-3-thread-48) [4d32917d] FINISH, MigrateBrokerVDSCommand, log
>>> id: 7cd53864
>>> 2013-01-03 14:03:11,086 INFO
>>> [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand]
>>> (pool-3-thread-48) [4d32917d] FINISH, MigrateVDSCommand, return:
>>> MigratingFrom, log id: 5011789b
>>> 2013-01-03 14:03:11,606 INFO
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-29) vds::refreshVmList vm id
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 is migrating to vds
>>> ovirt-node.domain-name ignoring it in the refresh till migration is
>>> done
>>> 2013-01-03 14:03:12,836 INFO
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-36) VM test002.domain-name
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 moved from MigratingFrom --> Up
>>> 2013-01-03 14:03:12,837 INFO
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-36) adding VM
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 to re-run list
>>> 2013-01-03 14:03:12,852 ERROR
>>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
>>> (QuartzScheduler_Worker-36) Rerun vm
>>> 9dc63ce4-0f76-4963-adfe-6f8eb1a44806. Called from vds
>>> ovirt-node002.domain-name
>>> 2013-01-03 14:03:12,855 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (pool-3-thread-48) START, MigrateStatusVDSCommand(vdsId =
>>> 1a52b722-43a1-11e2-af96-3cd92b4c8e89,
>>> vmId=9dc63ce4-0f76-4963-adfe-6f8eb1a44806), log id: 4721a1f3
>>> 2013-01-03 14:03:12,864 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Failed in MigrateStatusVDS method
>>> 2013-01-03 14:03:12,865 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Error code migrateErr and error message
>>> VDSGenericException: VDSErrorException: Failed to MigrateStatusVDS,
>>> error = Fatal error during migration
>>> 2013-01-03 14:03:12,865 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Command
>>> org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand
>>> return value
>>> Class Name:   org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
>>> mStatus       Class Name:  org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
>>> mCode         12
>>> mMessage      Fatal error during migration
>>>
>>>
>>> 2013-01-03 14:03:12,866 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
>>> (pool-3-thread-48) Vds: ovirt-node002.itvonline.ads
>>> 2013-01-03 14:03:12,867 ERROR
>>> [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-48)
>>> Command MigrateStatusVDS execution failed. Exception:
>>> VDSErrorException: VDSGenericException: VDSErrorException: Failed to
>>> MigrateStatusVDS, error = Fatal error during migration
>>> 2013-01-03 14:03:12,867 INFO
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
>>> (pool-3-thread-48) FINISH, MigrateStatusVDSCommand, log id: 4721a1f3
>>>
>>> Does anyone have any idea what this might be? I am using 3.1 from
>>> the dreyou repo, as these are CentOS 6 nodes.
>>>
> Any clue which log to check on the new host? I see the following in /var/log/messages
VDSM is the virtualization agent; look at /var/log/vdsm/vdsm.log on the destination host. The qemu log you pasted shows the destination process starting at 16:03:59.706 and shutting down at 16:03:59.955, so qemu died almost immediately; the real failure reason should be in vdsm.log around that timestamp.
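For example (a rough sketch; the path is the default and the patterns assume vdsm's usual timestamped format, so adjust for your setup):

    grep -n '2013-01-03 16:03:5' /var/log/vdsm/vdsm.log | less
    grep -n -A 15 'Traceback' /var/log/vdsm/vdsm.log | less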
>
> Jan  3 16:03:20 ovirt-node vdsm Storage.LVM WARNING lvm vgs failed: 5 [] ['  Volume group "ab686999-f320-4a61-ae07-e99c2f858996" not found']
> Jan  3 16:03:20 ovirt-node vdsm Storage.StorageDomain WARNING Resource namespace ab686999-f320-4a61-ae07-e99c2f858996_imageNS already registered
> Jan  3 16:03:20 ovirt-node vdsm Storage.StorageDomain WARNING Resource namespace ab686999-f320-4a61-ae07-e99c2f858996_volumeNS already registered
> Jan  3 16:03:58 ovirt-node vdsm vm.Vm WARNING vmId=`9dc63ce4-0f76-4963-adfe-6f8eb1a44806`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel0', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '1'}}' found
> Jan  3 16:03:58 ovirt-node vdsm vm.Vm WARNING vmId=`9dc63ce4-0f76-4963-adfe-6f8eb1a44806`::Unknown type found, device: '{'device': 'unix', 'alias': 'channel1', 'type': 'channel', 'address': {'bus': '0', 'controller': '0', 'type': 'virtio-serial', 'port': '2'}}' found
> Jan  3 16:03:59 ovirt-node kernel: device vnet2 entered promiscuous mode
> Jan  3 16:03:59 ovirt-node kernel: ovirtmgmt: port 4(vnet2) entering forwarding state
> Jan  3 16:03:59 ovirt-node kernel: ovirtmgmt: port 4(vnet2) entering disabled state
> Jan  3 16:03:59 ovirt-node kernel: device vnet2 left promiscuous mode
> Jan  3 16:03:59 ovirt-node kernel: ovirtmgmt: port 4(vnet2) entering disabled state
>
> and the following in the qemu log for that VM on the new node
>
> 2013-01-03 16:03:59.706+0000: starting up
> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -S -M rhel6.3.0 -cpu Nehalem -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name test002.itvonline.ads -uuid 9dc63ce4-0f76-4963-adfe-6f8eb1a44806 -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=6-3.el6.centos.9,serial=55414E03-C241-11DF-BBDA-64093408D485_d4:85:64:09:34:08,uuid=9dc63ce4-0f76-4963-adfe-6f8eb1a44806 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/test002.itvonline.ads.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-01-03T16:03:58,driftfix=slew -no-shutdown -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 -drive file=/rhev/data-center/bb0beebf-edab-41e2-83b8-16bdbbc5dda7/2a1939bd-9fa3-4896-b8a9-46234172aae7/images/e8711e5d-2f06-4c0f-b5c6-fa0806d7448f/0d93c51f-f838-4143-815c-9b3457d1a934,if=none,id=drive-virtio-disk0,format=raw,serial=e8711e5d-2f06-4c0f-b5c6-fa0806d7448f,cache=none,werror=stop,rerror=stop,aio=threads -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,fd=32,id=hostnet0,vhost=on,vhostfd=33 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:c0:2a:00,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/test002.itvonline.ads.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/test002.itvonline.ads.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev pty,id=charconsole0 -device virtconsole,chardev=charconsole0,id=console0 -device usb-tablet,id=input0 -vnc 10.192.42.165:4,password -k en-us -vga qxl -global qxl-vga.vram_size=67108864 -incoming tcp:0.0.0.0:49160 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
> 2013-01-03 16:03:59.955+0000: shutting down
>
> but that's about it?
>
> thanks
>



