[ovirt-users] Migration issues

Simone Tiraboschi stirabos at redhat.com
Thu Mar 2 08:32:18 UTC 2017


Please take care that moving or restarting VMs directly under oVirt's hood
could cause conflicts with potentially dangerous results:
https://bugzilla.redhat.com/show_bug.cgi?id=1419649#c11

On Wed, Mar 1, 2017 at 9:40 AM, Arman Khalatyan <arm2arm at gmail.com> wrote:

> To use virsh you can set a password:
>
>  saslpasswd2 -a libvirt foo (foo is some username)
>
> then use virsh with that username as usual.
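>
> For example, end to end (a sketch; "foo" is just a placeholder username,
> and virsh will prompt for the username and the password you set):
>
>   # add a SASL user to libvirt's database on the host
>   saslpasswd2 -a libvirt foo
>   # then connect and authenticate as that user when prompted
>   virsh -c qemu:///system list --all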
>
>
> On Thu, Feb 23, 2017 at 8:02 PM, Sven Achtelik <Sven.Achtelik at eps.aero>
> wrote:
>
>> Hi All,
>>
>>
>>
>> So sorry for not seeing this. I used vdsClient just because I didn't know
>> how to get virsh working (it asked for user/password). Now that I managed
>> to get this working, things look different:
>>
>>
>>
>> [root at ovirt-node02 ~]# vdsClient -s localhost list table
>>
>> e051b38c-fd63-40f0-8d64-26c12ff7b880  41294  HostedEngine         Up                   172.16.1.9
>>
>> [root at ovirt-node02 ~]# virsh -r list --all
>>
>>  Id    Name                           State
>>
>> ----------------------------------------------------
>>
>> 1     HostedEngine                   running
>>
>> -     data_p                               shut off
>>
>>
>>
>> How could this VM persist over reboots and reinstalls? And how would I
>> remove it?
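>>
>> A leftover libvirt definition like data_p can usually be cleared with virsh
>> undefine; a sketch, assuming the domain is only defined (shut off), is no
>> longer managed by the engine, and keeping in mind the warning at the top of
>> this thread about touching VMs under oVirt's hood:
>>
>>   # confirm it is defined but not running
>>   virsh -r list --all
>>   # remove the stale definition (needs an authenticated connection, not -r)
>>   virsh undefine data_p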
>>
>>
>>
>>
>>
>> *From:* Michal Skrivanek [mailto:michal.skrivanek at redhat.com]
>> *Sent:* Thursday, 23 February 2017 17:35
>> *To:* Sven Achtelik <Sven.Achtelik at eps.aero>; Martin Sivak <msivak at redhat.com>
>> *Cc:* Arman Khalatyan <arm2arm at gmail.com>; users <users at ovirt.org>
>>
>> *Subject:* Re: [ovirt-users] Migration issues
>>
>>
>>
>>
>>
>> On 23 Feb 2017, at 15:04, Sven Achtelik <Sven.Achtelik at eps.aero> wrote:
>>
>>
>>
>> Did that twice and it didn’t change anything.
>>
>>
>>
>> *From:* Arman Khalatyan [mailto:arm2arm at gmail.com <arm2arm at gmail.com>]
>> *Sent:* Thursday, 23 February 2017 15:02
>> *To:* Sven Achtelik <Sven.Achtelik at eps.aero>
>> *Cc:* Yanir Quinn <yquinn at redhat.com>; users <users at ovirt.org>
>> *Subject:* Re: [ovirt-users] Migration issues
>>
>>
>>
>> Engine GUI.
>>
>>
>>
>> On Thu, Feb 23, 2017 at 1:46 PM, Sven Achtelik <Sven.Achtelik at eps.aero>
>> wrote:
>>
>> Do you mean just reinstalling from the Engine GUI, or reinstalling it
>> completely, including the OS?
>>
>>
>>
>> *From:* Arman Khalatyan [mailto:arm2arm at gmail.com]
>> *Sent:* Thursday, 23 February 2017 13:45
>>
>>
>> *To:* Sven Achtelik <Sven.Achtelik at eps.aero>
>> *Cc:* Yanir Quinn <yquinn at redhat.com>; users <users at ovirt.org>
>> *Subject:* Re: [ovirt-users] Migration issues
>>
>>
>>
>> Just a random thought: try to reinstall the "bad" host :)
>>
>>
>>
>> On Thu, Feb 23, 2017 at 1:32 PM, Sven Achtelik <Sven.Achtelik at eps.aero>
>> wrote:
>>
>> Yes, all hosts are identical, and going through the values there is no
>> difference between them.
>>
>>
>>
>> [root at ovirt-node02 ~]# systemctl status vdsmd -l
>>
>> ● vdsmd.service - Virtual Desktop Server Manager
>>
>>    Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled;
>> vendor preset: enabled)
>>
>>    Active: active (running) since Tue 2017-02-21 08:00:54 CST; 1 day 22h
>> ago
>>
>>   Process: 3571 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
>> --pre-start (code=exited, status=0/SUCCESS)
>>
>> Main PID: 3742 (vdsm)
>>
>>    CGroup: /system.slice/vdsmd.service
>>
>>            ├─3742 /usr/bin/python2 /usr/share/vdsm/vdsm
>>
>>            ├─4021 /usr/libexec/ioprocess --read-pipe-fd 57
>> --write-pipe-fd 56 --max-threads 10 --max-queued-requests 10
>>
>>            ├─4444 /usr/libexec/ioprocess --read-pipe-fd 41
>> --write-pipe-fd 40 --max-threads 10 --max-queued-requests 10
>>
>>            ├─5120 /usr/libexec/ioprocess --read-pipe-fd 71
>> --write-pipe-fd 70 --max-threads 10 --max-queued-requests 10
>>
>>            ├─5232 /usr/libexec/ioprocess --read-pipe-fd 79
>> --write-pipe-fd 78 --max-threads 10 --max-queued-requests 10
>>
>>            ├─5533 /usr/libexec/ioprocess --read-pipe-fd 87
>> --write-pipe-fd 86 --max-threads 10 --max-queued-requests 10
>>
>>            ├─5576 /usr/libexec/ioprocess --read-pipe-fd 109
>> --write-pipe-fd 108 --max-threads 10 --max-queued-requests 10
>>
>>            └─5589 /usr/libexec/ioprocess --read-pipe-fd 116
>> --write-pipe-fd 114 --max-threads 10 --max-queued-requests 10
>>
>>
>>
>> Feb 23 06:22:50 ovirt-node02.mgmt.lan.company.lan vdsm[3742]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 36484, 0, 0) at 0x2548518>: unexpected eof
>>
>> Feb 23 06:22:50 ovirt-node02.mgmt.lan.company.lan vdsm[3742]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 36486, 0, 0) at 0x2b7bd88>: unexpected eof
>>
>> Feb 23 06:22:53 ovirt-node02.mgmt.lan.company.lan vdsm[3742]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 36490, 0, 0) at 0x25cb2d8>: unexpected eof
>>
>> Feb 23 06:22:54 ovirt-node02.mgmt.lan.company.lan vdsm[3742]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 36494, 0, 0) at 0x2a3b878>: unexpected eof
>>
>> Feb 23 06:22:54 ovirt-node02.mgmt.lan.company.lan vdsm[3742]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 36492, 0, 0) at 0x2a3b200>: unexpected eof
>>
>> Feb 23 06:22:55 ovirt-node02.mgmt.lan.company.lan vdsm[3742]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 36496, 0, 0) at 0x2dcf488>: unexpected eof
>>
>> Feb 23 06:22:58 ovirt-node02.mgmt.lan.company.lan vdsm[3742]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 36498, 0, 0) at 0x2b2eb00>: unexpected eof
>>
>> Feb 23 06:22:59 ovirt-node02.mgmt.lan.company.lan vdsm[3742]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 36500, 0, 0) at 0x25bb908>: unexpected eof
>>
>> Feb 23 06:23:04 ovirt-node02.mgmt.lan.company.lan vdsm[3742]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 36504, 0, 0) at 0x2b8bc20>: unexpected eof
>>
>> Feb 23 06:23:04 ovirt-node02.mgmt.lan.company.lan vdsm[3742]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 36506, 0, 0) at 0x277a680>: unexpected eof
>>
>>
>>
>>
>>
>> [root at ovirt-node01 ~]#  systemctl status vdsmd -l
>>
>> ● vdsmd.service - Virtual Desktop Server Manager
>>
>>    Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled;
>> vendor preset: enabled)
>>
>>    Active: active (running) since Tue 2017-02-21 06:31:51 CST; 1 day 23h
>> ago
>>
>>   Process: 4180 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
>> --pre-start (code=exited, status=0/SUCCESS)
>>
>> Main PID: 4251 (vdsm)
>>
>>    CGroup: /system.slice/vdsmd.service
>>
>>            ├─4251 /usr/bin/python2 /usr/share/vdsm/vdsm
>>
>>            ├─4513 /usr/libexec/ioprocess --read-pipe-fd 65
>> --write-pipe-fd 64 --max-threads 10 --max-queued-requests 10
>>
>>            ├─4926 /usr/libexec/ioprocess --read-pipe-fd 40
>> --write-pipe-fd 39 --max-threads 10 --max-queued-requests 10
>>
>>            ├─5639 /usr/libexec/ioprocess --read-pipe-fd 59
>> --write-pipe-fd 57 --max-threads 10 --max-queued-requests 10
>>
>>            ├─5751 /usr/libexec/ioprocess --read-pipe-fd 79
>> --write-pipe-fd 76 --max-threads 10 --max-queued-requests 10
>>
>>            ├─6061 /usr/libexec/ioprocess --read-pipe-fd 87
>> --write-pipe-fd 86 --max-threads 10 --max-queued-requests 10
>>
>>            ├─6109 /usr/libexec/ioprocess --read-pipe-fd 116
>> --write-pipe-fd 115 --max-threads 10 --max-queued-requests 10
>>
>>            └─6118 /usr/libexec/ioprocess --read-pipe-fd 126
>> --write-pipe-fd 125 --max-threads 10 --max-queued-requests 10
>>
>>
>>
>> Feb 23 06:28:53 ovirt-node01.mgmt.lan.company.lan vdsm[4251]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 45712, 0, 0) at 0x324eab8>: unexpected eof
>>
>> Feb 23 06:28:53 ovirt-node01.mgmt.lan.company.lan vdsm[4251]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 45714, 0, 0) at 0x2c5dea8>: unexpected eof
>>
>> Feb 23 06:28:57 ovirt-node01.mgmt.lan.company.lan vdsm[4251]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 45716, 0, 0) at 0x7f2df4ade5f0>: unexpected eof
>>
>> Feb 23 06:29:03 ovirt-node01.mgmt.lan.company.lan vdsm[4251]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 45718, 0, 0) at 0x7f2df4ade4d0>: unexpected eof
>>
>> Feb 23 06:29:03 ovirt-node01.mgmt.lan.company.lan vdsm[4251]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 45720, 0, 0) at 0x324ed40>: unexpected eof
>>
>> Feb 23 06:29:04 ovirt-node01.mgmt.lan.company.lan vdsm[4251]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 45722, 0, 0) at 0x324e290>: unexpected eof
>>
>> Feb 23 06:29:06 ovirt-node01.mgmt.lan.company.lan vdsm[4251]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 45724, 0, 0) at 0x2c5def0>: unexpected eof
>>
>> Feb 23 06:29:07 ovirt-node01.mgmt.lan.company.lan vdsm[4251]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 45726, 0, 0) at 0x2c5d200>: unexpected eof
>>
>> Feb 23 06:29:11 ovirt-node01.mgmt.lan.company.lan vdsm[4251]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 45730, 0, 0) at 0x178b680>: unexpected eof
>>
>> Feb 23 06:29:11 ovirt-node01.mgmt.lan.company.lan vdsm[4251]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 45728, 0, 0) at 0x2e38170>: unexpected eof
>>
>>
>>
>> [root at ovirt-node03 ~]# systemctl status vdsmd -l
>>
>> ● vdsmd.service - Virtual Desktop Server Manager
>>
>>    Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled;
>> vendor preset: enabled)
>>
>>    Active: active (running) since Tue 2017-02-21 08:22:06 CST; 1 day 22h
>> ago
>>
>>   Process: 3686 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh
>> --pre-start (code=exited, status=0/SUCCESS)
>>
>> Main PID: 3788 (vdsm)
>>
>>    CGroup: /system.slice/vdsmd.service
>>
>>            ├─ 3788 /usr/bin/python2 /usr/share/vdsm/vdsm
>>
>>            ├─ 4024 /usr/libexec/ioprocess --read-pipe-fd 57
>> --write-pipe-fd 56 --max-threads 10 --max-queued-requests 10
>>
>>            ├─ 4395 /usr/libexec/ioprocess --read-pipe-fd 40
>> --write-pipe-fd 39 --max-threads 10 --max-queued-requests 10
>>
>>            ├─14254 /usr/libexec/ioprocess --read-pipe-fd 104
>> --write-pipe-fd 103 --max-threads 10 --max-queued-requests 10
>>
>>            ├─22710 /usr/libexec/ioprocess --read-pipe-fd 72
>> --write-pipe-fd 71 --max-threads 10 --max-queued-requests 10
>>
>>            ├─22823 /usr/libexec/ioprocess --read-pipe-fd 80
>> --write-pipe-fd 79 --max-threads 10 --max-queued-requests 10
>>
>>            ├─23090 /usr/libexec/ioprocess --read-pipe-fd 87
>> --write-pipe-fd 85 --max-threads 10 --max-queued-requests 10
>>
>>            ├─23130 /usr/libexec/ioprocess --read-pipe-fd 106
>> --write-pipe-fd 98 --max-threads 10 --max-queued-requests 10
>>
>>            └─23144 /usr/libexec/ioprocess --read-pipe-fd 115
>> --write-pipe-fd 114 --max-threads 10 --max-queued-requests 10
>>
>>
>>
>> Feb 23 06:29:56 ovirt-node03.mgmt.lan.company.lan vdsm[3788]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 39902, 0, 0) at 0x33682d8>: unexpected eof
>>
>> Feb 23 06:29:58 ovirt-node03.mgmt.lan.company.lan vdsm[3788]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 39904, 0, 0) at 0x3368368>: unexpected eof
>>
>> Feb 23 06:29:59 ovirt-node03.mgmt.lan.company.lan vdsm[3788]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 39906, 0, 0) at 0x1ee5488>: unexpected eof
>>
>> Feb 23 06:30:00 ovirt-node03.mgmt.lan.company.lan vdsm[3788]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 39908, 0, 0) at 0x314b320>: unexpected eof
>>
>> Feb 23 06:30:00 ovirt-node03.mgmt.lan.company.lan vdsm[3788]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 39910, 0, 0) at 0x314b170>: unexpected eof
>>
>> Feb 23 06:30:09 ovirt-node03.mgmt.lan.company.lan vdsm[3788]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 39912, 0, 0) at 0x37772d8>: unexpected eof
>>
>> Feb 23 06:30:10 ovirt-node03.mgmt.lan.company.lan vdsm[3788]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 39914, 0, 0) at 0x37772d8>: unexpected eof
>>
>> Feb 23 06:30:11 ovirt-node03.mgmt.lan.company.lan vdsm[3788]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 39918, 0, 0) at 0x3140e18>: unexpected eof
>>
>> Feb 23 06:30:11 ovirt-node03.mgmt.lan.company.lan vdsm[3788]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 39916, 0, 0) at 0x3050e60>: unexpected eof
>>
>> Feb 23 06:30:13 ovirt-node03.mgmt.lan.company.lan vdsm[3788]: vdsm
>> vds.dispatcher ERROR SSL error receiving from <yajsonrpc.betterAsyncore.Dispatcher
>> connected ('::1', 39920, 0, 0) at 0x1ec1bd8>: unexpected eof
>>
>>
>>
>>
>>
>> I set up the cluster with 4.0.5, and the SSL error came up later, I think.
>> I couldn't find anything to solve this so far. I remember reading a bug
>> report about it and that it's not fixed yet.
>>
>> *From:* Arman Khalatyan [mailto:arm2arm at gmail.com]
>> *Sent:* Thursday, 23 February 2017 13:22
>>
>>
>> *To:* Sven Achtelik <Sven.Achtelik at eps.aero>
>> *Cc:* Yanir Quinn <yquinn at redhat.com>; users <users at ovirt.org>
>> *Subject:* Re: [ovirt-users] Migration issues
>>
>>
>>
>> If you check the hosts' capabilities, are they the same?
>>
>> What about systemctl status vdsmd -l ?
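>>
>> For example (a sketch; getVdsCaps dumps the host capabilities via vdsClient,
>> and the file names here are only illustrative):
>>
>>   vdsClient -s localhost getVdsCaps > /tmp/caps-$(hostname -s).txt
>>   # collect the dumps from all hosts and compare them, e.g.
>>   diff /tmp/caps-ovirt-node01.txt /tmp/caps-ovirt-node02.txt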
>>
>>
>>
>> On Thu, Feb 23, 2017 at 12:05 PM, Sven Achtelik <Sven.Achtelik at eps.aero>
>> wrote:
>>
>> Yes, I did, but only temporarily, for a test. I wanted to see if that might
>> solve the issue.
>>
>>
>>
>> I have rebooted all hosts and made sure that SELinux is enforcing. I also
>> had a chance to shut down and restart the VM. The issue is still the same:
>> I can't migrate it to host 02, even after a clean reboot with nothing running
>> on host 02.
>>
>>
>>
>>
>>
>> *From:* Arman Khalatyan [mailto:arm2arm at gmail.com]
>> *Sent:* Thursday, 23 February 2017 11:59
>> *To:* Sven Achtelik <Sven.Achtelik at eps.aero>
>> *Cc:* Yanir Quinn <yquinn at redhat.com>; users <users at ovirt.org>
>>
>>
>> *Subject:* Re: [ovirt-users] Migration issues
>>
>>
>>
>> Did you disable SELinux? That can be a reason.
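>>
>> To double-check the current mode (a sketch using the standard SELinux tools,
>> nothing oVirt-specific):
>>
>>   getenforce      # should print Enforcing
>>   sestatus        # shows the current and the configured mode
>>   setenforce 1    # re-enable enforcing until the next boot
>>   # for a persistent setting, keep SELINUX=enforcing in /etc/selinux/config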
>>
>>
>>
>> On Thu, Feb 23, 2017 at 10:31 AM, Sven Achtelik <Sven.Achtelik at eps.aero>
>> wrote:
>>
>> Hi Yanir,
>>
>>
>>
>> The hosts are all shown as green and working in the Hosts tab, and I can
>> migrate that VM to host 03. Just 02 is not working.
>>
>>
>>
>> The Hosted Engine information also looks good on all hosts.
>>
>>
>>
>> *From:* Yanir Quinn [mailto:yquinn at redhat.com]
>> *Sent:* Wednesday, 22 February 2017 11:21
>> *To:* Sven Achtelik <Sven.Achtelik at eps.aero>
>> *Cc:* Fred Rolland <frolland at redhat.com>; users <users at ovirt.org>
>>
>>
>> *Subject:* Re: [ovirt-users] Migration issues
>>
>>
>>
>> I can see in engine.log that ovirt-node03.mgmt.lan.company.lan (if that
>> is your host 3) is being filtered out when trying to migrate the VM:
>>
>> 2017-02-21 04:32:33,618-06 INFO  [org.ovirt.engine.core.bll.MigrateVmCommand]
>> (org.ovirt.thread.pool-7-thread-44) [8ff8601b-238b-4565-b3bf-de6211cb4685]
>> Running command: MigrateVmCommand internal: false. Entities affected:
>> ID: e051b38c-fd63-40f0-8d64-26c12ff7b880 Type: VMAction group MIGRATE_VM
>> with role type USER
>> 2017-02-21 04:32:33,627-06 INFO  [org.ovirt.engine.core.bll.scheduling.SchedulingManager]
>> (org.ovirt.thread.pool-7-thread-44) [8ff8601b-238b-4565-b3bf-de6211cb4685]
>> Candidate host 'ovirt-node03.mgmt.lan.company.lan'
>> ('9b0feba5-d9a0-491e-b2c2-0742d30af304') was filtered out by
>> 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id:
>> 8ff8601b-238b-4565-b3bf-de6211cb4685)
>>
>> I suggest you first check if the host is functioning correctly.
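>>
>> If that VM ID is your hosted-engine VM (it matches the HostedEngine entry in
>> the vdsClient output earlier in this thread), it is worth checking the HA
>> agent state and score on each host, since the scheduler's 'HA' filter
>> generally drops hosts whose hosted-engine HA score would not let them run
>> the engine VM. A sketch, assuming these are hosted-engine hosts:
>>
>>   hosted-engine --vm-status
>>   systemctl status ovirt-ha-agent ovirt-ha-broker -l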
>>
>>
>>
>> Regards,
>>
>> Yanir Quinn
>>
>>
>>
>>
>>
>> On Tue, Feb 21, 2017 at 3:04 PM, Sven Achtelik <Sven.Achtelik at eps.aero>
>> wrote:
>>
>> Hi,
>>
>>
>>
>> there is a VM running, but not that one.
>>
>>
>>
>> [root at ovirt-node02 log]#  vdsClient -s localhost list table
>>
>> 2e0e0da8-eaa5-44ee-8f11-f1297d149be3  14551  NAME                 Up                   10.6.0.181
>>
>>
>>
>> I even tried that after restarting host 2 and at this point I’m sure
>> there were no VMs running.
>>
>>
>>
>> it may be a bit different if you look at that via libvirt
>>
>> does it really not show anything else even when you do
>>
>> "virsh -r list --all” ?
>>
>>
>>
>> Martin, what does the "HA" filter do?
>>
>>
>>
>>
>>
>> *From:* Fred Rolland [mailto:frolland at redhat.com]
>> *Sent:* Tuesday, 21 February 2017 13:59
>> *To:* Sven Achtelik <Sven.Achtelik at eps.aero>
>> *Cc:* users <users at ovirt.org>
>> *Subject:* Re: [ovirt-users] Migration issues
>>
>>
>>
>> I see the following in the source VDSM log:
>>
>> 2017-02-21 05:53:28,067 INFO  (migsrc/8733d4a6) [virt.vm]
>> (vmId='8733d4a6-0844-4955-804f-6b919e93e076') starting migration to
>> qemu+tls://ovirt-node02.mgmt.lan.company.lan/system with miguri
>> tcp://172.16.4.19 (migration:453)
>> 2017-02-21 05:53:28,262 ERROR (migsrc/8733d4a6) [virt.vm]
>> (vmId='8733d4a6-0844-4955-804f-6b919e93e076') operation failed: domain
>> 'DATA_p' is already defined with uuid 8733d4a6-0844-4955-804f-6b919e93e076
>> (migration:265)
>>
>> libvirtError: operation failed: domain 'DATA_p' is already defined with
>> uuid 8733d4a6-0844-4955-804f-6b919e93e076
>>
>> Can you check on host 2 whether you have any VM already running there?
>>
>> You can use: virsh list
>>
>>
>>
>> On Tue, Feb 21, 2017 at 2:15 PM, Sven Achtelik <Sven.Achtelik at eps.aero>
>> wrote:
>>
>> Hi All,
>>
>>
>>
>> I'm having issues with migrating a VM. I have a 3-host cluster, and the VM
>> is able to migrate between hosts 1 and 3, but not to host 2. I don't know
>> why, and I tried figuring this out with the log files with no luck. All
>> other VMs migrate to host 2 without any issues.
>>
>>
>>
>> If you have some advice for me that would help a lot.
>>
>>
>>
>>
>>
>> Thank you,
>>
>>
>>
>> Sven
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>
>
>