
----- Original Message -----
From: "Dan Kenigsberg" <danken@redhat.com> To: "Gianluca Cecchi" <gianluca.cecchi@gmail.com> Cc: "users" <users@ovirt.org>, "Michal Skrivanek" <mskrivan@redhat.com> Sent: Tuesday, October 8, 2013 11:40:25 AM Subject: Re: [Users] Migration issues with ovirt 3.3
> On Tue, Oct 08, 2013 at 01:44:47AM +0200, Gianluca Cecchi wrote:
> > On Mon, Oct 7, 2013 at 2:59 AM, Dan Kenigsberg wrote:
> > > Would you please test if http://gerrit.ovirt.org/19906 solves the
> > > issue? (I haven't. Too late at night.)
> > >
> > > Regards, Dan.
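(For anyone else who wants to try the change before it is merged: a gerrit patch can be fetched straight into a vdsm git checkout. A sketch, assuming patchset 1 of change 19906 and gerrit's standard refs/changes layout; the anonymous clone URL may differ on your gerrit setup:)

git fetch http://gerrit.ovirt.org/vdsm refs/changes/06/19906/1 && git checkout FETCH_HEAD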
> > I can confirm that it resolves
> > https://bugzilla.redhat.com/show_bug.cgi?id=1007980
> > so now I'm able to start a VM without having to select "run once" and
> > attach a CD ISO (note that this is only valid for newly created VMs,
> > though).
> Yes. Old VMs are trashed with the bogus address reported by the buggy
> Vdsm. Can someone from engine supply a script to clear all device
> addresses from the VM database table?
You can use this line (assuming "engine" as both the db and the user); make sure only 'bad' VMs are returned:

psql -U engine -d engine -c "select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"

If so, you can run this to clear the address field for them, so they can run again:

psql -U engine -d engine -c "update vm_device set address='' where device='cdrom' and address ilike '%pci%';"
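(As a sanity check, the select can be re-run after the update and should then return no rows; and since this touches the engine database directly, a backup first is prudent. A sketch, assuming the same "engine" db and user as above:)

pg_dump -U engine engine > engine-backup.sql    # backup before editing vm_device
psql -U engine -d engine -c "update vm_device set address='' where device='cdrom' and address ilike '%pci%';"
psql -U engine -d engine -c "select distinct vm_name from vm_static, vm_device where vm_guid=vm_id and device='cdrom' and address ilike '%pci%';"    # expect zero rows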
> > But migration still fails
> It seems like an unrelated failure. I do not know what's blocking
> migration traffic. Could you see if libvirtd.log and the qemu logs at
> the source and destination have clues?
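(For reference, on a stock oVirt host the logs Dan mentions usually sit in the default locations below; a sketch, assuming an unmodified libvirt/vdsm setup, with <vm-name> as a placeholder:)

# Run on both the source and the destination host:
grep -i error /var/log/libvirt/libvirtd.log       # libvirtd daemon log
less /var/log/libvirt/qemu/<vm-name>.log          # per-VM qemu log
grep ERROR /var/log/vdsm/vdsm.log                 # vdsm's own view of the failure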
> > Thread-4644::DEBUG::2013-10-08 01:20:53,933::vm::360::vm.Vm::(_startUnderlyingMigration) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::starting migration to qemu+tls://10.4.4.58/system with miguri tcp://10.4.4.58
> > Thread-4645::DEBUG::2013-10-08 01:20:53,934::vm::718::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::migration downtime thread started
> > Thread-4646::DEBUG::2013-10-08 01:20:53,935::vm::756::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::starting migration monitor thread
> > Thread-4648::DEBUG::2013-10-08 01:20:54,321::BindingXMLRPC::979::vds::(wrapper) client [10.4.4.60]::call volumesList with () {}
> > Thread-4648::DEBUG::2013-10-08 01:20:54,349::BindingXMLRPC::986::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'gviso': {'transportType': ['TCP'], 'uuid': 'c8cbcac7-1d40-4cee-837d-bb97467fb2bd', 'bricks': ['f18ovn01.mydomain:/gluster/ISO_GLUSTER/brick1', 'f18ovn03.mydomain:/gluster/ISO_GLUSTER/brick1'], 'volumeName': 'gviso', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'options': {'storage.owner-gid': '36', 'storage.owner-uid': '36', 'server.allow-insecure': 'on'}}, 'gvdata': {'transportType': ['TCP'], 'uuid': 'ed71a4c2-6205-4aad-9aab-85da086d5ba3', 'bricks': ['f18ovn01.mydomain:/gluster/DATA_GLUSTER/brick1', 'f18ovn03.mydomain:/gluster/DATA_GLUSTER/brick1'], 'volumeName': 'gvdata', 'volumeType': 'REPLICATE', 'replicaCount': '2', 'brickCount': '2', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'options': {'server.allow-insecure': 'on', 'storage.owner-uid': '36', 'storage.owner-gid': '36'}}}}
> > Thread-4644::ERROR::2013-10-08 01:20:54,873::libvirtconnection::94::libvirtconnection::(wrapper) connection to libvirt broken. ecode: 38 edom: 7
> > Thread-4644::ERROR::2013-10-08 01:20:54,873::libvirtconnection::96::libvirtconnection::(wrapper) taking calling process down.
> > MainThread::DEBUG::2013-10-08 01:20:54,874::vdsm::45::vds::(sigtermHandler) Received signal 15
> > Thread-4644::DEBUG::2013-10-08 01:20:54,874::vm::733::vm.Vm::(cancel) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::canceling migration downtime thread
> > Thread-4644::DEBUG::2013-10-08 01:20:54,875::vm::803::vm.Vm::(stop) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::stopping migration monitor thread
> > Thread-4645::DEBUG::2013-10-08 01:20:54,875::vm::730::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::migration downtime thread exiting
> > Thread-4644::ERROR::2013-10-08 01:20:54,875::vm::244::vm.Vm::(_recover) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Cannot recv data: Input/output error
> > Thread-4644::ERROR::2013-10-08 01:20:55,008::vm::324::vm.Vm::(run) vmId=`d54660a2-45ed-41ae-ab99-a6f93ebbdbb1`::Failed to migrate
> > Traceback (most recent call last):
> >   File "/usr/share/vdsm/vm.py", line 311, in run
> >     self._startUnderlyingMigration()
> >   File "/usr/share/vdsm/vm.py", line 388, in _startUnderlyingMigration
> >     None, maxBandwidth)
> >   File "/usr/share/vdsm/vm.py", line 826, in f
> >     ret = attr(*args, **kwargs)
> >   File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper
> >     ret = f(*args, **kwargs)
> >   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1253, in migrateToURI2
> >     if ret == -1: raise libvirtError ('virDomainMigrateToURI2() failed', dom=self)
> > libvirtError: Cannot recv data: Input/output error
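(The libvirtError above is virDomainMigrateToURI2() failing at the transport level. To separate a TLS/connectivity problem from a migration problem, the qemu+tls channel can be tested on its own from the source host; a sketch, assuming the destination 10.4.4.58 from the log and libvirt's default TLS port 16514:)

virsh -c qemu+tls://10.4.4.58/system list     # should list domains if the TLS transport works
openssl s_client -connect 10.4.4.58:16514     # should complete a TLS handshake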
> > Chain INPUT (policy ACCEPT)
> > target     prot opt source               destination
> > ACCEPT     all  --  192.168.3.3          0.0.0.0/0
> > ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
> > ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
> > ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:54321
> > ...
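(If the firewall is the blocker: besides vdsm's port 54321 visible above, a tls migration needs libvirt's TLS port open on the destination, and the tcp miguri needs the qemu migration port range. A sketch of the missing rules, assuming libvirt's stock defaults of 16514 and 49152-49215:)

iptables -I INPUT -p tcp --dport 16514 -j ACCEPT          # libvirtd TLS (default port)
iptables -I INPUT -p tcp --dport 49152:49215 -j ACCEPT    # qemu migration port range (default)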
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users