oVirt API (4.0 and 4.1) not reporting vms running on a given storage domain
by Luca 'remix_tj' Lorenzetto 02 Mar '18
Hello,
I need to extract the list of the VMs running on a given storage domain.
Copying some code from ansible's ovirt_storage_vms_facts simplified my
work, but I stopped at a strange behavior: no VM is listed.
I thought it was an issue with my code, but looking more in detail at
the API I tried opening:
ovirt-engine/api/storagedomains/52b661fe-609e-48f9-beab-f90165b868c4/vms
And what I get is
<vms />
And this for all the storage domains available.
Is there something wrong with the versions I'm running? Do I require
some options in the query?
I'm running RHV, so I can't upgrade to 4.2 yet.
Luca
--
"It is absurd to employ men of excellent intelligence to perform
calculations that could be entrusted to anyone if machines were used"
Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)
"The Internet is the world's largest library.
The problem is that the books are all scattered on the floor"
John Allen Paulos, Mathematician (1945-present)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
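For anyone wanting to reproduce the query outside Ansible, here is a minimal sketch of the same REST call with curl; the engine FQDN and the credentials are placeholders, only the storage domain UUID comes from the post above:

  curl -s -k -u 'admin@internal:password' \
       -H 'Accept: application/xml' \
       'https://engine.example.com/ovirt-engine/api/storagedomains/52b661fe-609e-48f9-beab-f90165b868c4/vms'

On the 4.0/4.1 setup described above, an empty '<vms />' answer from this call reproduces exactly the behavior being reported.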
Hi,
I have an issue with Hosted Engine when I try to deploy it via the GUI on another
host. There are no errors after the deploy, but in the GUI I only see the HE status
"Not active", and hosted-engine --status shows only 1 node (same output on both
nodes). In hosted-engine.conf I see that host_id is the same as it is on the
primary host running the HE!? The issue looks quite similar to
http://lists.ovirt.org/pipermail/users/2018-February/086932.html
Here is the config file on the newly deployed node:
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
gateway=192.168.8.1
iqn=
conf_image_UUID=f2813205-4b0c-45f3-a9cb-3748f61d2194
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
sdUUID=7e7a275c-6939-4f79-85f6-d695209951ea
connectionUUID=81a2f9a3-2efe-448f-b305-e22543068044
conf_volume_UUID=d6b7e25c-9912-47ff-b104-9d424b9f34b8
user=
host_id=1
bridge=ovirtmgmt
metadata_image_UUID=fe95f22e-b468-4adf-a754-21d419ae3e67
spUUID=00000000-0000-0000-0000-000000000000
mnt_options=
fqdn=dev-ovirtengine0.somedomain.it
portal=
vm_disk_id=febde231-92cc-4599-8f55-816f63132739
metadata_volume_UUID=7ebaf268-15ec-4c76-ba89-b5e2dc143830
vm_disk_vol_id=e3920b18-4467-44f8-b2d0-629b3b1d1a58
domainType=fc
port=
console=vnc
ca_subject="C=EN, L=Test, O=Test, CN=Test"
password=
vmid=3f7d9c1d-6c3e-4b96-b85d-d240f3bf9b76
lockspace_image_UUID=49e318ad-63a3-4efd-977c-33b8c4c93728
lockspace_volume_UUID=91bcb5cf-006c-42b4-b419-6ac9f841f50a
vdsm_use_ssl=true
storage=None
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
This is the original one:
fqdn=dev-ovirtengine0.somedomain.it
vm_disk_id=febde231-92cc-4599-8f55-816f63132739
vm_disk_vol_id=e3920b18-4467-44f8-b2d0-629b3b1d1a58
vmid=3f7d9c1d-6c3e-4b96-b85d-d240f3bf9b76
storage=None
mnt_options=
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=1
console=vnc
domainType=fc
spUUID=00000000-0000-0000-0000-000000000000
sdUUID=7e7a275c-6939-4f79-85f6-d695209951ea
connectionUUID=81a2f9a3-2efe-448f-b305-e22543068044
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=192.168.8.1
bridge=ovirtmgmt
metadata_volume_UUID=7ebaf268-15ec-4c76-ba89-b5e2dc143830
metadata_image_UUID=fe95f22e-b468-4adf-a754-21d419ae3e67
lockspace_volume_UUID=91bcb5cf-006c-42b4-b419-6ac9f841f50a
lockspace_image_UUID=49e318ad-63a3-4efd-977c-33b8c4c93728
conf_volume_UUID=d6b7e25c-9912-47ff-b104-9d424b9f34b8
conf_image_UUID=f2813205-4b0c-45f3-a9cb-3748f61d2194
# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=
Packages:
ovirt-imageio-daemon-1.0.0-1.el7.noarch
ovirt-host-deploy-1.6.7-1.el7.centos.noarch
ovirt-release41-4.1.9-1.el7.centos.noarch
ovirt-setup-lib-1.1.4-1.el7.centos.noarch
ovirt-hosted-engine-ha-2.1.8-1.el7.centos.noarch
ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch
ovirt-vmconsole-1.0.4-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
ovirt-imageio-common-1.0.0-1.el7.noarch
Output from agent.log
MainThread::INFO::2018-03-02
15:01:47,279::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 140493346760912
MainThread::INFO::2018-03-02
15:01:51,011::brokerlink::179::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain)
Success, id 140493346759824
MainThread::INFO::2018-03-02
15:01:51,011::hosted_engine::601::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
Broker initialized, all submonitors started
MainThread::INFO::2018-03-02
15:01:51,045::hosted_engine::704::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file:
/var/run/vdsm/storage/7e7a275c-6939-4f79-85f6-d695209951ea/49e318ad-63a3-4efd-977c-33b8c4c93728/91bcb5cf-006c-42b4-b419-6ac9f841f50a)
MainThread::INFO::2018-03-02
15:04:12,058::hosted_engine::745::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Failed to acquire the lock. Waiting '5's before the next attempt
Regards
Krzysztof
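As a purely illustrative sketch (default oVirt paths and standard command names, not commands taken from this thread), the duplicated host_id described above can be compared directly on the two hosts:

  # run on each hosted-engine host
  grep host_id /etc/ovirt-hosted-engine/hosted-engine.conf   # each host should carry a unique id
  sanlock client status                                      # shows which host id currently holds the hosted-engine lockspace
  hosted-engine --vm-status                                  # should list both hosts once the ids differ

If both hosts really report host_id=1, the sanlock lease acquisition failure in the agent.log excerpt above is the expected symptom.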
02 Mar '18
The oVirt Project is pleased to announce the availability of the oVirt
4.2.2 Third Release Candidate, as of March 2nd, 2018
This update is a release candidate of the second in a series of
stabilization updates to the 4.2
series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.4 or later
* CentOS Linux (or similar) 7.4 or later
* oVirt Node 4.2
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available [2]
Additional Resources:
* Read more about the oVirt 4.2.2 release highlights:
http://www.ovirt.org/release/4.2.2/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.2.2/
[2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
hosted-engine --deploy : Failed executing ansible-playbook
by geomeid@mairie-saint-ouen.fr 02 Mar '18
Hello,
I am on CENTOS 7.4.1708
I follow the documentation:
[root@srvvm42 ~]# yum install ovirt-hosted-engine-setup
[root@srvvm42 ~]# yum info ovirt-hosted-engine-setup
Loaded plugins: fastestmirror, package_upload, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Loading mirror speeds from cached hostfile
 * base: centos.quelquesmots.fr
 * epel: mirrors.ircam.fr
 * extras: centos.mirrors.ovh.net
 * ovirt-4.2: ftp.nluug.nl
 * ovirt-4.2-epel: mirrors.ircam.fr
 * updates: centos.mirrors.ovh.net
Installed Packages
Name        : ovirt-hosted-engine-setup
Arch        : noarch
Version     : 2.2.9
Release     : 1.el7.centos
Size        : 2.3 M
Repo        : installed
From repo   : ovirt-4.2
Summary     : oVirt Hosted Engine setup tool
URL         : http://www.ovirt.org
License     : LGPLv2+
Description : Hosted Engine setup tool for oVirt project.

[root@srvvm42 ~]# hosted-engine --deploy
I encounter an issue when I try to install my hosted-engine. Here are the last lines of the installation:
....
[ INFO ] TASK [Clean /etc/hosts for the engine VM]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Copy /etc/hosts back to the engine VM]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Clean /etc/hosts on the host]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Add an entry in /etc/hosts for the target VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Start broker]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Initialize lockspace volume]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Start agent]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Wait for the engine to come up on the target VM]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 'health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"' failed. The error was: error while evaluating conditional (health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"): No first item, sequence was empty."}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ INFO ] TASK [Gathering Facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Remove local vm dir]
[ INFO ] changed: [localhost]
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180302104441.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
          Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180302101734-fuzcop.log
And here is a part of the log file:
2018-03-02 10:44:13,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug]
2018-03-02 10:44:13,861+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 changed: False
2018-03-02 10:44:13,962+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 result: {'stderr_lines': [], u'changed': True, u'end': u'2018-03-02 10:44:13.401854', u'stdout': u'', u'cmd': [u'hosted-engine', u'--reinitialize-lockspace', u'--force'], 'failed': False, 'attempts': 2, u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.202734', 'stdout_lines': [], u'start': u'2018-03-02 10:44:13.199120'}
2018-03-02 10:44:14,063+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Start agent]
2018-03-02 10:44:14,565+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost]
2018-03-02 10:44:14,667+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the engine to come up on the target VM]
2018-03-02 10:44:36,555+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'msg': u'The conditional check 'health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"' failed. The error was: error while evaluating conditional (health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"): No first item, sequence was empty.'}
2018-03-02 10:44:36,657+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! => {"msg": "The conditional check 'health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"' failed. The error was: error while evaluating conditional (health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"): No first item, sequence was empty."}
2018-03-02 10:44:36,759+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 2
2018-03-02 10:44:36,759+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 57 changed: 20 unreachable: 0 skipped: 3 failed: 1
2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [ovirtengine.stouen.local] : ok: 10 changed: 5 unreachable: 0 skipped: 0 failed: 0
2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout:
2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189 to retry, use: --limit @/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.retry
2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr:
2018-03-02 10:44:36,761+0100 DEBUG otopi.context context._executeMethod:143 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/target_vm.py", line 193, in _closeup
    r = ah.run()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 194, in run
    raise RuntimeError(_('Failed executing ansible-playbook'))
RuntimeError: Failed executing ansible-playbook
2018-03-02 10:44:36,763+0100 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed executing ansible-playbook
2018-03-02 10:44:36,764+0100 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN
2018-03-02 10:44:36,765+0100 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True'
2018-03-02 10:44:36,765+0100 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError('Failed executing ansible-playbook',), <traceback object at 0x3122fc8>)]'
2018-03-02 10:44:36,766+0100 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END
2018-03-02 10:44:36,767+0100 INFO otopi.context context.runSequence:741 Stage: Clean up
2018-03-02 10:44:36,767+0100 DEBUG otopi.context context.runSequence:745 STAGE cleanup
2018-03-02 10:44:36,768+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_ansiblesetup.core.misc.Plugin._cleanup
2018-03-02 10:44:36,769+0100 INFO otopi.plugins.gr_he_ansiblesetup.core.misc misc._cleanup:236 Cleaning temporary resources
2018-03-02 10:44:36,769+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:153 ansible-playbook: cmd: ['/bin/ansible-playbook', '--module-path=/usr/share/ovirt-hosted-engine-setup/ansible', '--inventory=localhost,', '--extra-vars=@/tmp/tmpCctJN4', '/usr/share/ovirt-hosted-engine-setup/ansible/final_clean.yml']
2018-03-02 10:44:36,770+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:154 ansible-playbook: out_path: /tmp/tmpBm1bE0
2018-03-02 10:44:36,770+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:155 ansible-playbook: vars_path: /tmp/tmpCctJN4
2018-03-02 10:44:36,770+0100
DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils.run:156 ansible-playbook: env: {'LC_NUMERIC':
'fr_FR.UTF-8', 'HE_ANSIBLE_LOG_PATH':
'/var/log/ovirt-hosted-engin
e-setup/ovirt-hosted-engine-setup-ansible-final_clean-20180302104436-0yt7bk.log',
'LESSOPEN': '||/usr/bin/lesspipe.sh %s', 'SSH_CLIENT': '10.2.10.112
38120 22', 'SELINUX_USE_CURRENT_RANGE': '', 'LOGNAME': 'r
oot', 'USER':
'root', 'HOME': '/root', 'LC_PAPER': 'fr_FR.UTF-8', 'PATH':
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/dell/srvadmin/bin:/opt/dell/srvadmin/sbin:/root/bin',
'LANG': 'en_US.UTF-8',
'TERM': 'xterm-256color', 'SHELL': '/bin/bash',
'LC_MEASUREMENT': 'fr_FR.UTF-8', 'HISTSIZE': '1000',
'OTOPI_CALLBACK_OF': '/tmp/tmpBm1bE0', 'LC_MONETARY': 'fr_FR.UTF-8',
'XDG_RUNTIME_DIR': '/run/user/0', 'AN
SIBLE_STDOUT_CALLBACK':
'1_otopi_json', 'LC_ADDRESS': 'fr_FR.UTF-8', 'PYTHONPATH':
'/usr/share/ovirt-hosted-engine-setup/scripts/..:',
'SELINUX_ROLE_REQUESTED': '', 'MAIL': '/var/spool/mail/root',
'ANSIBLE_C
ALLBACK_WHITELIST': '1_otopi_json,2_ovirt_logger',
'XDG_SESSION_ID': '1', 'LC_IDENTIFICATION': 'fr_FR.UTF-8', 'LS_COLORS':
'rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5
;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar
=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38
;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;
9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.t
ga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13
:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;
13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=3
8;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=
38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:',
'SSH_TTY': '/dev/pts/0', 'H
OSTNAME': 'srvvm42.stouen.local',
'LC_TELEPHONE': 'fr_FR.UTF-8', 'SELINUX_LEVEL_REQUESTED': '',
'HISTCONTROL': 'ignoredups', 'SHLVL': '1', 'PWD': '/root', 'LC_NAME':
'fr_FR.UTF-8', 'OTOPI_LOGFILE':
'/var/log
/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180302101734-fuzcop.log',
'LC_TIME': 'fr_FR.UTF-8', 'SSH_CONNECTION': '10.2.10.112 38120
10.2.200.130 22', 'OTOPI_EXECDIR': '/root'}
2018-03-02 10:44:37,885+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY [Clean temporary resources]
2018-03-02 10:44:37,987+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Gathering Facts]
2018-03-02 10:44:40,098+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost]
2018-03-02 10:44:40,300+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Remove local vm dir]
2018-03-02 10:44:41,105+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost]
2018-03-02 10:44:41,206+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 2 changed: 1 unreachable: 0 skipped: 0 failed: 0
2018-03-02 10:44:41,307+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 0
2018-03-02 10:44:41,307+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout:
2018-03-02 10:44:41,307+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr:
2018-03-02 10:44:41,308+0100 DEBUG otopi.plugins.gr_he_ansiblesetup.core.misc misc._cleanup:238 {}
2018-03-02 10:44:41,311+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._cleanup
2018-03-02 10:44:41,312+0100 DEBUG otopi.context context._executeMethod:135 condition False
2018-03-02 10:44:41,315+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._cleanup
2018-03-02 10:44:41,315+0100 DEBUG otopi.context context._executeMethod:135 condition False
2018-03-02 10:44:41,318+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._cleanup
2018-03-02 10:44:41,319+0100 DEBUG otopi.context context._executeMethod:135 condition False
2018-03-02 10:44:41,320+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.otopi.dialog.answer_file.Plugin._generate_answer_file
2018-03-02 10:44:41,320+0100 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN
2018-03-02 10:44:41,320+0100 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/answerFileContent=str:'# OTOPI answer file, generated by human dialog
[environment:default]
QUESTION/1/CI_VM_ETC_HOST=str:yes
--
I don't understand the problem. I searched on the web and found nothing.
I'm using another oVirt in a test environment for a while, but here
I'm really lost...
Thanks for any help !
Georges
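For context on the error itself: the failing conditional filters the JSON status of the new engine VM and takes the first "engine-status"."health" value; "No first item, sequence was empty" means the parsed document contained no engine-status entries at all. A minimal sketch of checking the same thing by hand (the --vm-status flags are standard hosted-engine options, not commands quoted from this thread):

  hosted-engine --vm-status          # human-readable health of the hosted engine on this host
  hosted-engine --vm-status --json   # the kind of JSON document such a check parses; if it
                                     # contains no "engine-status" key, a '|first' filter has
                                     # nothing to pick, which matches the error above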
upgrade domain V3.6 to V4.2 with disks and snapshots failed on NFS, Export, QCOW V2/V3 renaming probl.
by Oliver Riesener 02 Mar '18
Hi,
after upgrading cluster compatibility from V3.6 to V4.2,
I found that all V3.6 disks with *snapshots* (QCOW v2) are not working on NFS
and Export domains.
No UI command solves this problem: they all get stuck, worsen the disk state to
"Illegal" and
produce "Async Tasks" that never end.
It seems that the disk file on the storage domain has been renamed with an
added -NNNNNNNNNNNN suffix,
but the QCOW backing file locations in the data files are not updated. They
still point to the old disk names.
[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ls -la
insgesamt 4983972
drwxr-xr-x. 2 vdsm kvm 4096 28. Feb 12:57 .
drwxr-xr-x. 64 vdsm kvm 4096 1. Mär 12:02 ..
-rw-rw----. 1 vdsm kvm 53687091200 5. Sep 2016
239c0ffc-8249-4d08-967a-619abbbb897a
-rw-rw----. 1 vdsm kvm 1048576 5. Sep 2016
239c0ffc-8249-4d08-967a-619abbbb897a.lease
-rw-r--r--. 1 vdsm kvm 319 5. Sep 2016
239c0ffc-8249-4d08-967a-619abbbb897a.meta
-rw-rw----. 1 vdsm kvm 966393856 6. Sep 2016
2f773536-9b60-4f53-b179-dbf64d182a41
-rw-rw----. 1 vdsm kvm 1048576 5. Sep 2016
2f773536-9b60-4f53-b179-dbf64d182a41.lease
-rw-r--r--. 1 vdsm kvm 264 6. Sep 2016
2f773536-9b60-4f53-b179-dbf64d182a41.meta
-rw-rw----. 1 vdsm kvm 2155806720 14. Feb 11:53
67f96ffc-3a4f-4f3d-9c1b-46293e0be762
-rw-rw----. 1 vdsm kvm 1048576 6. Sep 2016
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.lease
-rw-r--r--. 1 vdsm kvm 260 6. Sep 2016
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.meta
[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# file *
239c0ffc-8249-4d08-967a-619abbbb897a: x86 boot sector; partition
1: ID=0x83, starthead 32, startsector 2048, 104853504 sectors, code
offset 0xb8
239c0ffc-8249-4d08-967a-619abbbb897a.lease: data
239c0ffc-8249-4d08-967a-619abbbb897a.meta: ASCII text
2f773536-9b60-4f53-b179-dbf64d182a41: QEMU QCOW Image (v2), has
backing file (path
../706ff176-4f96-42fe-a5fa-56434347f16c/239c0ffc-8249-4d08-967a),
53687091200 bytes
2f773536-9b60-4f53-b179-dbf64d182a41.lease: data
2f773536-9b60-4f53-b179-dbf64d182a41.meta: ASCII text
67f96ffc-3a4f-4f3d-9c1b-46293e0be762: QEMU QCOW Image (v2), has
backing file (path
../706ff176-4f96-42fe-a5fa-56434347f16c/2f773536-9b60-4f53-b179),
53687091200 bytes
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.lease: data
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.meta: ASCII text
My solution is to hard-link the disk files to the old names too. Then the
disks can be handled by the UI again.
[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ln 239c0ffc-8249-4d08-967a-619abbbb897a 239c0ffc-8249-4d08-967a
[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ln 2f773536-9b60-4f53-b179-dbf64d182a41 2f773536-9b60-4f53-b179
To fix the illegal disk state, I manipulated the postgres database
directly, thanks to the ovirt-users mailing list.
A rescan of disks in the UI could also work; I will test it in the
evening, as I have a lot of old exported disks with snapshots ...
Is there a smarter way to do it?
Cheers!
Olri
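A minimal sketch of a qemu-img alternative to the hard links above (volume names are the ones from the listing; this is offered only as an illustration, and 'rebase -u' rewrites just the image header without copying data, so back the volume up first):

  # from inside the image directory shown above
  qemu-img info 2f773536-9b60-4f53-b179-dbf64d182a41
  qemu-img rebase -u -f qcow2 \
      -b 239c0ffc-8249-4d08-967a-619abbbb897a \
      2f773536-9b60-4f53-b179-dbf64d182a41

Either way (hard link or rebase) the overlay and its backing file become resolvable again, which is what the UI needs before the disk state can be repaired.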
Hi,
I am having an issue migrating all VMs based on a specific template. The
template was created in a previous oVirt environment (4.1), and all VMs
deployed from this template experience the same issue.
I would like to find a resolution for both the template and the VMs that are
already deployed from it. The VM in question is VDI-Bryan and the
migration starts around 12:25. I have attached the engine.log and the
vdsm.log file from the destination server.
Thanks
Bryan
Hi,
I'm trying to install a Windows VM on oVirt 4.2, but when doing a Run Once it freezes at the Windows logo. Are there any special requirements/parameters to run Windows machines in this version?
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
Hi all,
I discovered yesterday a problem when migrating VM with more than one v=
disk.
On our test servers (oVirt4.1, shared storage with Gluster), I created =
2 VMs needed for a test, from a template with a 20G vdisk. On this VMs =
I added a 100G vdisk (for this tests I didn't want to waste time to ext=
end the existing vdisks... But I lost time finally...). The VMs with th=
e 2 vdisks works well.
Now I saw some updates waiting on the host. I tried to put it in mainte=
nance... But it stopped on the two VM. They were marked "migrating", bu=
t no more accessible. Other (small) VMs with only 1 vdisk was migrated =
without problem at the same time.
I saw that a kvm process for the (big) VMs was launched on the source A=
ND destination host, but after tens of minutes, the migration and the V=
Ms was always freezed. I tried to cancel the migration for the VMs : fa=
iled. The only way to stop it was to poweroff the VMs : the kvm process=
died on the 2 hosts and the GUI alerted on a failed migration.
In doubt, I tried to delete the second vdisk on one of this VMs : it mi=
grates then without error ! And no access problem.
I tried to extend the first vdisk of the second VM, the delete the seco=
nd vdisk : it migrates now without problem !=C2=A0=C2=A0=C2=A0
So after another test with a VM with 2 vdisks, I can say that this bloc=
ked the migration process :(
In engine.log, for a VMs with 1 vdisk migrating well, we see :2018-02-1=
2 16:46:29,705+01 INFO =C2=A0[org.ovirt.engine.core.bll.MigrateVmToServ=
erCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Loc=
k Acquired to object 'EngineLock:{exclusiveLocks=3D'[3f57e669-5e4c-4d10=
-85cc-d573004a099d=3DVM]', sharedLocks=3D''}'
2018-02-12 16:46:29,955+01 INFO =C2=A0[org.ovirt.engine.core.bll.Migrat=
eVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-=
46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand inter=
nal: false. Entities affected : =C2=A0ID: 3f57e669-5e4c-4d10-85cc-d5730=
04a099d Type: VMAction group MIGRATE=5FVM with role type USER
2018-02-12 16:46:30,261+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-4=
6a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParam=
eters:{runAsync=3D'true', hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb=
1', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost=3D'192.168.0=
.6', dstVdsId=3D'd569c2dd-8f30-4878-8aea-858db285cf69', dstHost=3D'192.=
168.0.5:54321', migrationMethod=3D'ONLINE', tunnelMigration=3D'false', =
migrationDowntime=3D'0', autoConverge=3D'true', migrateCompressed=3D'fa=
lse', consoleAddress=3D'null', maxBandwidth=3D'500', enableGuestEvents=3D=
'true', maxIncomingMigrations=3D'2', maxOutgoingMigrations=3D'2', conve=
rgenceSchedule=3D'[init=3D[{name=3DsetDowntime, params=3D[100]}], stall=
ing=3D[{limit=3D1, action=3D{name=3DsetDowntime, params=3D[150]}}, {lim=
it=3D2, action=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3, act=
ion=3D{name=3DsetDowntime, params=3D[300]}}, {limit=3D4, action=3D{name=
=3DsetDowntime, params=3D[400]}}, {limit=3D6, action=3D{name=3DsetDownt=
ime, params=3D[500]}}, {limit=3D-1, action=3D{name=3Dabort, params=3D[]=
}}]]'}), log id: 14f61ee0
2018-02-12 16:46:30,262+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) =
[2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(H=
ostName =3D victor.local.systea.fr, MigrateVDSCommandParameters:{runAsy=
nc=3D'true', hostId=3D'ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId=3D'3=
f57e669-5e4c-4d10-85cc-d573004a099d', srcHost=3D'192.168.0.6', dstVdsId=
=3D'd569c2dd-8f30-4878-8aea-858db285cf69', dstHost=3D'192.168.0.5:54321=
', migrationMethod=3D'ONLINE', tunnelMigration=3D'false', migrationDown=
time=3D'0', autoConverge=3D'true', migrateCompressed=3D'false', console=
Address=3D'null', maxBandwidth=3D'500', enableGuestEvents=3D'true', max=
IncomingMigrations=3D'2', maxOutgoingMigrations=3D'2', convergenceSched=
ule=3D'[init=3D[{name=3DsetDowntime, params=3D[100]}], stalling=3D[{lim=
it=3D1, action=3D{name=3DsetDowntime, params=3D[150]}}, {limit=3D2, act=
ion=3D{name=3DsetDowntime, params=3D[200]}}, {limit=3D3, action=3D{name=
=3DsetDowntime, params=3D[300]}}, {limit=3D4, action=3D{name=3DsetDownt=
ime, params=3D[400]}}, {limit=3D6, action=3D{name=3DsetDowntime, params=
=3D[500]}}, {limit=3D-1, action=3D{name=3Dabort, params=3D[]}}]]'}), lo=
g id: 775cd381
2018-02-12 16:46:30,277+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) =
[2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand,=
log id: 775cd381
2018-02-12 16:46:30,285+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-4=
6a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom=
, log id: 14f61ee0
2018-02-12 16:46:30,301+01 INFO =C2=A0[org.ovirt.engine.core.dal.dbbrok=
er.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-3=
2) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT=5FID: VM=5FMIGRATION=5F=
START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID=
: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: nu=
ll, Custom Event ID: -1, Message: Migration started (VM: Oracle=5FSECON=
DARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.=
fr, User: admin@internal-authz).
2018-02-12 16:46:31,106+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] STAR=
T, FullListVDSCommand(HostName =3D victor.local.systea.fr, FullListVDSC=
ommandParameters:{runAsync=3D'true', hostId=3D'ce3938b1-b23f-4d22-840a-=
f17d7cd87bb1', vmIds=3D'[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log =
id: 54b4b435
2018-02-12 16:46:31,147+01 INFO =C2=A0[org.ovirt.engine.core.vdsbroker.=
vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINI=
SH, FullListVDSCommand, return: [{acpiEnable=3Dtrue, emulatedMachine=3D=
pc-i440fx-rhel7.3.0, tabletEnable=3Dtrue, pid=3D1493, guestDiskMapping=3D=
{0QEMU=5FQEMU=5FHARDDISK=5Fd890fa68-fba4-4f49-9=3D{name=3D/dev/sda}, QE=
MU=5FDVD-ROM=5FQM00003=3D{name=3D/dev/sr0}}, transparentHugePages=3Dtru=
e, timeOffset=3D0, cpuType=3DNehalem, smp=3D2, pauseCode=3DNOERR, guest=
NumaNodes=3D[Ljava.lang.Object;@1d9042cd, smartcardEnable=3Dfalse, cust=
om=3D{device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c93ab-4d=
f1-435c-af02-565039fcc254=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'879=
c93ab-4df1-435c-af02-565039fcc254', vmId=3D'3f57e669-5e4c-4d10-85cc-d57=
3004a099d'}', device=3D'unix', type=3D'CHANNEL', bootOrder=3D'0', specP=
arams=3D'[]', address=3D'{bus=3D0, controller=3D0, type=3Dvirtio-serial=
, port=3D1}', managed=3D'false', plugged=3D'true', readOnly=3D'false', =
deviceAlias=3D'channel0', customProperties=3D'[]', snapshotId=3D'null',=
logicalName=3D'null', hostDevice=3D'null'}, device=5Ffbddd528-7d93-49c=
6-a286-180e021cb274device=5F879c93ab-4df1-435c-af02-565039fcc254device=5F=
8945f61a-abbe-4156-8485-a4aa6f1908dbdevice=5F017b5e59-01c4-4aac-bf0c-b5=
d9557284d6=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'017b5e59-01c4-4aac=
-bf0c-b5d9557284d6', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', d=
evice=3D'tablet', type=3D'UNKNOWN', bootOrder=3D'0', specParams=3D'[]',=
address=3D'{bus=3D0, type=3Dusb, port=3D1}', managed=3D'false', plugge=
d=3D'true', readOnly=3D'false', deviceAlias=3D'input0', customPropertie=
s=3D'[]', snapshotId=3D'null', logicalName=3D'null', hostDevice=3D'null=
'}, device=5Ffbddd528-7d93-49c6-a286-180e021cb274=3DVmDevice:{id=3D'VmD=
eviceId:{deviceId=3D'fbddd528-7d93-49c6-a286-180e021cb274', vmId=3D'3f5=
7e669-5e4c-4d10-85cc-d573004a099d'}', device=3D'ide', type=3D'CONTROLLE=
R', bootOrder=3D'0', specParams=3D'[]', address=3D'{slot=3D0x01, bus=3D=
0x00, domain=3D0x0000, type=3Dpci, function=3D0x1}', managed=3D'false',=
plugged=3D'true', readOnly=3D'false', deviceAlias=3D'ide', customPrope=
rties=3D'[]', snapshotId=3D'null', logicalName=3D'null', hostDevice=3D'=
null'}, device=5Ffbddd528-7d93-49c6-a286-180e021cb274device=5F879c93ab-=
4df1-435c-af02-565039fcc254device=5F8945f61a-abbe-4156-8485-a4aa6f1908d=
b=3DVmDevice:{id=3D'VmDeviceId:{deviceId=3D'8945f61a-abbe-4156-8485-a4a=
a6f1908db', vmId=3D'3f57e669-5e4c-4d10-85cc-d573004a099d'}', device=3D'=
unix', type=3D'CHANNEL', bootOrder=3D'0', specParams=3D'[]', address=3D=
'{bus=3D0, controller=3D0, type=3Dvirtio-serial, port=3D2}', managed=3D=
'false', plugged=3D'true', readOnly=3D'false', deviceAlias=3D'channel1'=
, customProperties=3D'[]', snapshotId=3D'null', logicalName=3D'null', h=
ostDevice=3D'null'}}, vmType=3Dkvm, memSize=3D8192, smpCoresPerSocket=3D=
1, vmName=3DOracle=5FSECONDARY, nice=3D0, status=3DMigration Source, ma=
xMemSize=3D32768, bootMenuEnable=3Dfalse, vmId=3D3f57e669-5e4c-4d10-85c=
c-d573004a099d, numOfIoThreads=3D2, smpThreadsPerCore=3D1, memGuarantee=
dSize=3D8192, kvmEnable=3Dtrue, pitReinjection=3Dfalse, displayNetwork=3D=
ovirtmgmt, devices=3D[Ljava.lang.Object;@28ae66d7, display=3Dvnc, maxVC=
pus=3D16, clientIp=3D, statusTime=3D4299484520, maxMemSlots=3D16}], log=
id: 54b4b435
2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1')
2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done
....
2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57
2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57
2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down'
2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status 'MigratingTo'
2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up'
2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281
2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281
2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A))
2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298
2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298
2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}
2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c
2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901}
2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease}

For the VM with 2 vdisks we see:
2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}'
2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER
2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0
2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c
2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c
2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0
2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin@internal-authz).
...
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
...
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done

and so on, the last lines repeated indefinitely for hours until we powered off the VM...
Is this something known? Any idea about that?
Thanks
Ovirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
--
Cordialement,
Frank Soyer
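
(For anyone hitting the same behaviour: before putting a host into maintenance it can help to know which running VMs carry more than one vdisk. The following is only a minimal sketch against the REST API using the Python SDK, ovirtsdk4; the engine URL and credentials are placeholders, not values from this thread.)

# Sketch: list running VMs with more than one disk attachment (ovirtsdk4 assumed).
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',                          # placeholder
    password='secret',                                  # placeholder
    insecure=True,  # prefer ca_file='ca.pem' outside a lab
)

vms_service = connection.system_service().vms_service()
for vm in vms_service.list(search='status=up'):
    # Count the disks attached to each running VM.
    attachments = vms_service.vm_service(vm.id).disk_attachments_service().list()
    if len(attachments) > 1:
        print('%s: %d vdisks' % (vm.name, len(attachments)))

connection.close()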
3
10
Hi, I'm trying to set up SSO on Windows 10. The VM is domain-joined, has the
agent installed and the credential provider registered. Of course I set up an
AD domain and the VM has SSO enabled.
Whenever I log in to the user portal and open a VM I'm presented with the
login screen and nothing happens; it's as if the engine doesn't send the
command to autologin.
In the agent logs there's nothing interesting, but the communication
between the engine and the agent is OK: for example, the command to
lock the screen on console close is received and works:
Dummy-2::INFO::2018-03-01
09:01:39,124::ovirtagentlogic::322::root::Received an external command:
lock-screen...
This is an extract from the engine logs when I log in to the user portal and
start a connection:
2018-03-01 11:30:01,558+01 INFO
[org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-30)
[] User c.mammoli(a)apra.it successfully logged in with scopes:
ovirt-app-admin ovirt-app-api ovirt-app-portal
ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all
ovirt-ext=token-info:authz-search
ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
ovirt-ext=token:password-access
2018-03-01 11:30:01,606+01 INFO
[org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default
task-31) [7bc265f] Running command: CreateUserSessionCommand internal:
false.
2018-03-01 11:30:01,623+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-31) [7bc265f] EVENT_ID: USER_VDC_LOGIN(30), User
c.mammoli@apra.it@apra.it connecting from '192.168.1.100' using session
'5NMjCbUiehNLAGMeeWsr4L5TatL+uUGsNHOxQtCvSa9i0DaQ7uoGSi6zaZdXu08vrEk5gyQUJAsB2+COzLwtEw=='
logged in.
2018-03-01 11:30:02,163+01 ERROR
[org.ovirt.engine.core.bll.GetSystemStatisticsQuery] (default task-39)
[14276418-5de7-44a6-bb64-c60965de0acf] Query execution failed due to
insufficient permissions.
2018-03-01 11:30:02,664+01 INFO
[org.ovirt.engine.core.bll.SetVmTicketCommand] (default task-54)
[617f130b] Running command: SetVmTicketCommand internal: false. Entities
affected : ID: c0250fe0-5d8b-44de-82bc-04610952f453 Type: VMAction
group CONNECT_TO_VM with role type USER
2018-03-01 11:30:02,683+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(default task-54) [617f130b] START, SetVmTicketVDSCommand(HostName =
r630-01.apra.it,
SetVmTicketVDSCommandParameters:{hostId='d99a8356-72e8-4130-a1cc-e148762eca57',
vmId='c0250fe0-5d8b-44de-82bc-04610952f453', protocol='SPICE',
ticket='u2b1nv+rH+pw', validTime='120', userName='c.mammoli(a)apra.it',
userId='39f9d718-6e65-456a-8a6f-71976bcbbf2f',
disconnectAction='LOCK_SCREEN'}), log id: 18fa2ef
2018-03-01 11:30:02,703+01 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(default task-54) [617f130b] FINISH, SetVmTicketVDSCommand, log id: 18fa2ef
2018-03-01 11:30:02,713+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-54) [617f130b] EVENT_ID: VM_SET_TICKET(164), User
c.mammoli@apra.it@apra.it initiated console session for VM testvdi02
2018-03-01 11:30:11,558+01 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engineScheduled-Thread-49) [] EVENT_ID:
VM_CONSOLE_CONNECTED(167), User c.mammoli(a)apra.it is connected to VM
testvdi02.
Any help would be appreciated.
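
(The "SSO enabled" setting mentioned above corresponds to the VM's sso.methods list in the REST API, where the guest_agent method must be present for the engine to push credentials. Below is only a minimal sketch to read that setting back with the Python SDK, ovirtsdk4; the engine URL and credentials are placeholders, and testvdi02 is the VM name taken from the log above.)

# Sketch: check whether the guest_agent SSO method is set on the VM (ovirtsdk4 assumed).
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder
    username='admin@internal',                          # placeholder
    password='secret',                                  # placeholder
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=testvdi02')[0]
vm = vms_service.vm_service(vm.id).get()

# SsoMethod.GUEST_AGENT corresponds to the "Single Sign On method: Use Guest Agent" VM option.
methods = vm.sso.methods if vm.sso is not None and vm.sso.methods else []
print('guest_agent SSO method set:',
      any(m.id == types.SsoMethod.GUEST_AGENT for m in methods))

connection.close()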
1
0