I don't see a reason why an open monitor would cause a migration to fail -
at most, if there is a problem, I would close the spice session on the src
and restart it at the dst.
Can you please attach the vdsm/libvirt/qemu logs from both hosts and the
engine logs, so that we can see the reason for the migration failure?
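For reference, on a default oVirt install these are the usual places to look
(paths can vary by version, and <vm-name> below is just a placeholder for the
VM's name):

    /var/log/vdsm/vdsm.log               (vdsm, on each host)
    /var/log/libvirt/libvirtd.log        (libvirtd, on each host)
    /var/log/libvirt/qemu/<vm-name>.log  (per-VM qemu log, on each host)
    /var/log/ovirt-engine/engine.log     (engine, on the engine machine)
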
Thanks,
Dafna
On 03/03/2014 05:16 PM, Ted Miller wrote:
> I just got my Data Center running again, and am proceeding with some setup
> & testing.
>
> I created a VM (not doing anything useful).
> I clicked on the "Console" and had a SPICE console up (viewed in Win7).
> I had it printing the time on the screen once per second (while date; do
> sleep 1; done).
> I tried to migrate the VM to another host and got in the GUI:
>
> Migration started (VM: web1, Source: s1, Destination: s3, User:
> admin@internal).
>
> Migration failed due to Error: Fatal error during migration (VM: web1,
> Source: s1, Destination: s3).
>
> As I started the migration, I happened to think "I wonder how they handle
> the SPICE console, since I think that is a link from the host to my
> machine, letting me see the VM's screen."
>
> After the failure, I tried shutting down the SPICE console and found that
> the migration succeeded. I again opened SPICE and had a migration fail.
> Closed SPICE, migration succeeded.
>
> I can understand how migrating SPICE is a problem, but could we at least
> give the victim of this condition a meaningful error message? I have seen
> a lot of questions about failed migrations (mostly due to attached CDs),
> but I have never seen this discussed. If I had not had that particular
> thought cross my brain at that particular time, I doubt that SPICE would
> have been where I went looking for a solution.
>
> If this is the first time this issue has been raised, I am willing to file
> a bug.
>
> Ted Miller
> Elkhart, IN, USA
>
In finding the right one-minute slice of the logs, I saw something that makes
me think this is due to a missing method in the glusterfs support. Others
who understand more of what the logs are saying can verify or correct my hunch.
I was trying to migrate from s2 to s1.
Logs on