Change management network
by Chris Boot
Hi all,
I have an oVirt cluster on which I need to change which VLAN is the
management network.
The new management network is an existing VM network. I've configured IP
addresses for all the hosts on this network, and I've even moved the
HostedEngine VM onto this network. So far so good.
What I can't seem to do is actually flip the "management
network" toggle in the cluster over to this network: the oVirt Engine
complains:
"Error while executing action: Cannot edit Network. Changing management
network in a non-empty cluster is not allowed."
How can I get around this? I clearly cannot empty the cluster, as the
cluster contains all my existing VMs, hosts and HostedEngine.
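For reference, which network currently carries the management role can be checked over the REST API; a minimal curl sketch (the engine FQDN, credentials and cluster id are placeholders):

curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
    'https://engine.example.com/ovirt-engine/api/clusters/<cluster-id>/networks'

The network whose <usages> element lists "management" is the one the cluster currently treats as its management network.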
Best regards,
Chris
--
Chris Boot
bootc(a)boo.tc
6 years, 9 months
oVirt network tries to reassign my bridge address to itself
by Anastasiya Ruzhanskaya
Hello!
I have two VMs - they are the machines on which I test the installation. I don't
want any clusters or advanced features. My goal is to connect the engine and
the host, shut everything down, then power it back on and still have the right
configuration. However, my VMs are connected via a bridge, and oVirt also uses
another bridge to connect them. Because of this, on startup I have the problem
that the two bridges try to claim the same address.
What can be done in this case?
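A minimal diagnostic sketch (assuming iproute2 is available on the hosts; the connection profile name is a placeholder) to see which bridges claim the address at boot and to stop the old bridge from requesting it:

ip -br addr show type bridge   # shows every bridge and the addresses it carries
# whichever bridge should no longer own the address needs its static IP or DHCP
# client removed from its profile, for example with NetworkManager:
nmcli connection modify <old-bridge-profile> ipv4.method disabled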
6 years, 9 months
oVirt API (4.0 and 4.1) not reporting vms running on a given storage domain
by Luca 'remix_tj' Lorenzetto
Hello,
I need to extract the list of the VMs running on a given storage domain.
Copying some code from Ansible's ovirt_storage_vms_facts simplified my
work, but I hit a strange behavior: no VM is listed.
I thought it was an issue with my code, but looking more closely at the
API I tried opening:
ovirt-engine/api/storagedomains/52b661fe-609e-48f9-beab-f90165b868c4/vms
And what I get is
<vms />
And this for all the storage domains available.
Is there something wrong with the versions I'm running? Do I need some
options in the query?
I'm running RHV, so I can't upgrade to 4.2 yet.
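For reference, the same query can be reproduced with curl outside any SDK (engine FQDN and credentials are placeholders), which helps separate a client-side problem from an engine-side one:

curl -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
    'https://engine.example.com/ovirt-engine/api/storagedomains/52b661fe-609e-48f9-beab-f90165b868c4/vms'

If this also returns an empty <vms />, the behavior is on the engine side rather than in the calling code.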
Luca
--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)
"Internet è la più grande biblioteca del mondo.
Ma il problema è che i libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)
Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <lorenzetto.luca(a)gmail.com>
6 years, 9 months
Issue with deploy HE on another host 4.1
by Krzysztof Wajda
Hi,
I have an issue with Hosted Engine when I try to deploy it via the GUI on another
host. There are no errors after the deploy, but in the GUI I see only a "Not active"
status for the HE, and hosted-engine --status shows only 1 node (the same
output on both nodes). In hosted-engine.conf I see that host_id is the same as on
the primary host running the HE!? The issue looks quite similar to
http://lists.ovirt.org/pipermail/users/2018-February/086932.html
Here is the config file on the newly deployed node:
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
gateway=192.168.8.1
iqn=
conf_image_UUID=f2813205-4b0c-45f3-a9cb-3748f61d2194
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
sdUUID=7e7a275c-6939-4f79-85f6-d695209951ea
connectionUUID=81a2f9a3-2efe-448f-b305-e22543068044
conf_volume_UUID=d6b7e25c-9912-47ff-b104-9d424b9f34b8
user=
host_id=1
bridge=ovirtmgmt
metadata_image_UUID=fe95f22e-b468-4adf-a754-21d419ae3e67
spUUID=00000000-0000-0000-0000-000000000000
mnt_options=
fqdn=dev-ovirtengine0.somedomain.it
portal=
vm_disk_id=febde231-92cc-4599-8f55-816f63132739
metadata_volume_UUID=7ebaf268-15ec-4c76-ba89-b5e2dc143830
vm_disk_vol_id=e3920b18-4467-44f8-b2d0-629b3b1d1a58
domainType=fc
port=
console=vnc
ca_subject="C=EN, L=Test, O=Test, CN=Test"
password=
vmid=3f7d9c1d-6c3e-4b96-b85d-d240f3bf9b76
lockspace_image_UUID=49e318ad-63a3-4efd-977c-33b8c4c93728
lockspace_volume_UUID=91bcb5cf-006c-42b4-b419-6ac9f841f50a
vdsm_use_ssl=true
storage=None
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
This is the original one:
fqdn=dev-ovirtengine0.somedomain.it
vm_disk_id=febde231-92cc-4599-8f55-816f63132739
vm_disk_vol_id=e3920b18-4467-44f8-b2d0-629b3b1d1a58
vmid=3f7d9c1d-6c3e-4b96-b85d-d240f3bf9b76
storage=None
mnt_options=
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=1
console=vnc
domainType=fc
spUUID=00000000-0000-0000-0000-000000000000
sdUUID=7e7a275c-6939-4f79-85f6-d695209951ea
connectionUUID=81a2f9a3-2efe-448f-b305-e22543068044
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=192.168.8.1
bridge=ovirtmgmt
metadata_volume_UUID=7ebaf268-15ec-4c76-ba89-b5e2dc143830
metadata_image_UUID=fe95f22e-b468-4adf-a754-21d419ae3e67
lockspace_volume_UUID=91bcb5cf-006c-42b4-b419-6ac9f841f50a
lockspace_image_UUID=49e318ad-63a3-4efd-977c-33b8c4c93728
conf_volume_UUID=d6b7e25c-9912-47ff-b104-9d424b9f34b8
conf_image_UUID=f2813205-4b0c-45f3-a9cb-3748f61d2194
# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=
Packages:
ovirt-imageio-daemon-1.0.0-1.el7.noarch
ovirt-host-deploy-1.6.7-1.el7.centos.noarch
ovirt-release41-4.1.9-1.el7.centos.noarch
ovirt-setup-lib-1.1.4-1.el7.centos.noarch
ovirt-hosted-engine-ha-2.1.8-1.el7.centos.noarch
ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch
ovirt-vmconsole-1.0.4-1.el7.centos.noarch
ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
ovirt-imageio-common-1.0.0-1.el7.noarch
Output from agent.log
MainThread::INFO::2018-03-02
15:01:47,279::brokerlink::141::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor)
Success, id 140493346760912
MainThread::INFO::2018-03-02
15:01:51,011::brokerlink::179::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(set_storage_domain)
Success, id 140493346759824
MainThread::INFO::2018-03-02
15:01:51,011::hosted_engine::601::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker)
Broker initialized, all submonitors started
MainThread::INFO::2018-03-02
15:01:51,045::hosted_engine::704::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file:
/var/run/vdsm/storage/7e7a275c-6939-4f79-85f6-d695209951ea/49e318ad-63a3-4efd-977c-33b8c4c93728/91bcb5cf-006c-42b4-b419-6ac9f841f50a)
MainThread::INFO::2018-03-02
15:04:12,058::hosted_engine::745::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock)
Failed to acquire the lock. Waiting '5's before the next attempt
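A quick check sketch for this situation (standard hosted-engine paths assumed): every HA node needs a unique host_id, because the sanlock lease the agent tries to acquire above is bound to that id, so two nodes with host_id=1 will fight over the same lease.

grep '^host_id=' /etc/ovirt-hosted-engine/hosted-engine.conf   # run on each node and compare
hosted-engine --vm-status                                      # should list every HA host
sanlock client status                                          # shows who holds the hosted-engine lockspace

If both nodes really report host_id=1, the usual advice on this list is to give the second node a unique id and restart ovirt-ha-agent and ovirt-ha-broker there; treat that as a suggestion to verify, not a confirmed fix.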
Regards
Krzysztof
6 years, 9 months
hosted-engine --deploy : Failed executing ansible-playbook
by geomeid@mairie-saint-ouen.fr
Hello,
I am on CentOS 7.4.1708.
I followed the documentation:
[root@srvvm42 ~]# yum install ovirt-hosted-engine-setup
[root@srvvm42 ~]# yum info ovirt-hosted-engine-setup
Loaded plugins: fastestmirror, package_upload, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Loading mirror speeds from cached hostfile
 * base: centos.quelquesmots.fr
 * epel: mirrors.ircam.fr
 * extras: centos.mirrors.ovh.net
 * ovirt-4.2: ftp.nluug.nl
 * ovirt-4.2-epel: mirrors.ircam.fr
 * updates: centos.mirrors.ovh.net
Installed Packages
Name        : ovirt-hosted-engine-setup
Arch        : noarch
Version     : 2.2.9
Release     : 1.el7.centos
Size        : 2.3 M
Repo        : installed
From repo   : ovirt-4.2
Summary     : oVirt Hosted Engine setup tool
URL         : http://www.ovirt.org
License     : LGPLv2+
Description : Hosted Engine setup tool for oVirt project.
[root@srvvm42 ~]# hosted-engine --deploy
I encounter an issue when I try to install my hosted engine. Here are the
last lines of the installation:
....
[ INFO ] TASK [Clean /etc/hosts for the engine VM]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Copy /etc/hosts back to the engine VM]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [Clean /etc/hosts on the host]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Add an entry in /etc/hosts for the target VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Start broker]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Initialize lockspace volume]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Start agent]
[ INFO ] changed: [localhost]
[ INFO ] TASK [Wait for the engine to come up on the target VM]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 'health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"' failed. The error was: error while evaluating conditional (health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"): No first item, sequence was empty."}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ INFO ] TASK [Gathering Facts]
[ INFO ] ok: [localhost]
[ INFO ] TASK [Remove local vm dir]
[ INFO ] changed: [localhost]
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180302104441.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
          Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180302101734-fuzcop.log
And here is a part of the log file:
2018-03-02 10:44:13,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 TASK [debug]
2018-03-02 10:44:13,861+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 changed: False
2018-03-02 10:44:13,962+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 result: {'stderr_lines': [], u'changed': True, u'end': u'2018-03-02 10:44:13.401854', u'stdout': u'', u'cmd': [u'hosted-engine', u'--reinitialize-lockspace', u'--force'], 'failed': False, 'attempts': 2, u'stderr': u'', u'rc': 0, u'delta': u'0:00:00.202734', 'stdout_lines': [], u'start': u'2018-03-02 10:44:13.199120'}
2018-03-02 10:44:14,063+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Start agent]
2018-03-02 10:44:14,565+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost]
2018-03-02 10:44:14,667+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the engine to come up on the target VM]
2018-03-02 10:44:36,555+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'msg': u'The conditional check 'health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"' failed. The error was: error while evaluating conditional (health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"): No first item, sequence was empty.'}
2018-03-02 10:44:36,657+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! => {"msg": "The conditional check 'health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"' failed. The error was: error while evaluating conditional (health_result.rc == 0 and health_result.stdout|from_json|json_query('*."engine-status"."health"')|first=="good"): No first item, sequence was empty."}
2018-03-02 10:44:36,759+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 2
2018-03-02 10:44:36,759+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 57 changed: 20 unreachable: 0 skipped: 3 failed: 1
2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [ovirtengine.stouen.local] : ok: 10 changed: 5 unreachable: 0 skipped: 0 failed: 0
2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout:
2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189 to retry, use: --limit @/usr/share/ovirt-hosted-engine-setup/ansible/create_target_vm.retry
2018-03-02 10:44:36,760+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr:
2018-03-02 10:44:36,761+0100 DEBUG otopi.context context._executeMethod:143 method exception
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod
    method['method']()
  File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/target_vm.py", line 193, in _closeup
    r = ah.run()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 194, in run
    raise RuntimeError(_('Failed executing ansible-playbook'))
RuntimeError: Failed executing ansible-playbook
2018-03-02 10:44:36,763+0100 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed executing ansible-playbook
2018-03-02 10:44:36,764+0100 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN
2018-03-02 10:44:36,765+0100 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True'
2018-03-02 10:44:36,765+0100 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(<type 'exceptions.RuntimeError'>, RuntimeError('Failed executing ansible-playbook',), <traceback object at 0x3122fc8>)]'
2018-03-02 10:44:36,766+0100 DEBUG otopi.context context.dumpEnvironment:873 ENVIRONMENT DUMP - END
2018-03-02 10:44:36,767+0100 INFO otopi.context context.runSequence:741 Stage: Clean up
2018-03-02 10:44:36,767+0100 DEBUG otopi.context context.runSequence:745 STAGE cleanup
2018-03-02 10:44:36,768+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_ansiblesetup.core.misc.Plugin._cleanup
2018-03-02 10:44:36,769+0100 INFO otopi.plugins.gr_he_ansiblesetup.core.misc misc._cleanup:236 Cleaning temporary resources
2018-03-02 10:44:36,769+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:153 ansible-playbook: cmd: ['/bin/ansible-playbook', '--module-path=/usr/share/ovirt-hosted-engine-setup/ansible', '--inventory=localhost,', '--extra-vars=@/tmp/tmpCctJN4', '/usr/share/ovirt-hosted-engine-setup/ansible/final_clean.yml']
2018-03-02 10:44:36,770+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:154 ansible-playbook: out_path: /tmp/tmpBm1bE0
2018-03-02 10:44:36,770+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:155 ansible-playbook: vars_path: /tmp/tmpCctJN4
2018-03-02 10:44:36,770+0100
DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils
ansible_utils.run:156 ansible-playbook: env: {'LC_NUMERIC':
'fr_FR.UTF-8', 'HE_ANSIBLE_LOG_PATH':
'/var/log/ovirt-hosted-engin
e-setup/ovirt-hosted-engine-setup-ansible-final_clean-20180302104436-0yt7bk.log',
'LESSOPEN': '||/usr/bin/lesspipe.sh %s', 'SSH_CLIENT': '10.2.10.112
38120 22', 'SELINUX_USE_CURRENT_RANGE': '', 'LOGNAME': 'r
oot', 'USER':
'root', 'HOME': '/root', 'LC_PAPER': 'fr_FR.UTF-8', 'PATH':
'/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/dell/srvadmin/bin:/opt/dell/srvadmin/sbin:/root/bin',
'LANG': 'en_US.UTF-8',
'TERM': 'xterm-256color', 'SHELL': '/bin/bash',
'LC_MEASUREMENT': 'fr_FR.UTF-8', 'HISTSIZE': '1000',
'OTOPI_CALLBACK_OF': '/tmp/tmpBm1bE0', 'LC_MONETARY': 'fr_FR.UTF-8',
'XDG_RUNTIME_DIR': '/run/user/0', 'AN
SIBLE_STDOUT_CALLBACK':
'1_otopi_json', 'LC_ADDRESS': 'fr_FR.UTF-8', 'PYTHONPATH':
'/usr/share/ovirt-hosted-engine-setup/scripts/..:',
'SELINUX_ROLE_REQUESTED': '', 'MAIL': '/var/spool/mail/root',
'ANSIBLE_C
ALLBACK_WHITELIST': '1_otopi_json,2_ovirt_logger',
'XDG_SESSION_ID': '1', 'LC_IDENTIFICATION': 'fr_FR.UTF-8', 'LS_COLORS':
'rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5
;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar
=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38
;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;
9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.t
ga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13
:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;
13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=3
8;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=
38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:',
'SSH_TTY': '/dev/pts/0', 'H
OSTNAME': 'srvvm42.stouen.local',
'LC_TELEPHONE': 'fr_FR.UTF-8', 'SELINUX_LEVEL_REQUESTED': '',
'HISTCONTROL': 'ignoredups', 'SHLVL': '1', 'PWD': '/root', 'LC_NAME':
'fr_FR.UTF-8', 'OTOPI_LOGFILE':
'/var/log
/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180302101734-fuzcop.log',
'LC_TIME': 'fr_FR.UTF-8', 'SSH_CONNECTION': '10.2.10.112 38120
10.2.200.130 22', 'OTOPI_EXECDIR': '/root'}
2018-03-02 10:44:37,885+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY [Clean temporary resources]
2018-03-02 10:44:37,987+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Gathering Facts]
2018-03-02 10:44:40,098+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost]
2018-03-02 10:44:40,300+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Remove local vm dir]
2018-03-02 10:44:41,105+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost]
2018-03-02 10:44:41,206+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 2 changed: 1 unreachable: 0 skipped: 0 failed: 0
2018-03-02 10:44:41,307+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 0
2018-03-02 10:44:41,307+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout:
2018-03-02 10:44:41,307+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr:
2018-03-02 10:44:41,308+0100 DEBUG otopi.plugins.gr_he_ansiblesetup.core.misc misc._cleanup:238 {}
2018-03-02 10:44:41,311+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.engine.ca.Plugin._cleanup
2018-03-02 10:44:41,312+0100 DEBUG otopi.context context._executeMethod:135 condition False
2018-03-02 10:44:41,315+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.vm.boot_disk.Plugin._cleanup
2018-03-02 10:44:41,315+0100 DEBUG otopi.context context._executeMethod:135 condition False
2018-03-02 10:44:41,318+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.gr_he_common.vm.cloud_init.Plugin._cleanup
2018-03-02 10:44:41,319+0100 DEBUG otopi.context context._executeMethod:135 condition False
2018-03-02 10:44:41,320+0100 DEBUG otopi.context context._executeMethod:128 Stage cleanup METHOD otopi.plugins.otopi.dialog.answer_file.Plugin._generate_answer_file
2018-03-02 10:44:41,320+0100 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN
2018-03-02 10:44:41,320+0100 DEBUG otopi.context context.dumpEnvironment:869 ENV DIALOG/answerFileContent=str:'# OTOPI answer file, generated by human dialog
[environment:default]
QUESTION/1/CI_VM_ETC_HOST=str:yes
--
I don't understand the problem. I searched on the web and found nothing.
I have been using another oVirt in a test environment for a while, but here
I'm really lost...
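For what it's worth, the failing task evaluates the JSON status reported by the HA services; running the same status query by hand on the host (a sketch, assuming the deploy got far enough to start ovirt-ha-agent and ovirt-ha-broker) shows whether the engine-status map really comes back empty, which is what "No first item, sequence was empty" suggests:

hosted-engine --vm-status --json | python -m json.tool

If that prints nothing useful, the agent and broker logs under /var/log/ovirt-hosted-engine-ha/ are usually the next place to look.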
Thanks for any help !
Georges
6 years, 9 months
upgrade domain V3.6 to V4.2 with disks and snapshots failed on NFS, Export, QCOW V2/V3 renaming problem
by Oliver Riesener
Hi,
after upgrading cluster compatibility from V3.6 to V4.2,
I found that all V3.6 disks with *snapshots* (QCOW v2) are no longer working on
NFS and Export domains.
No UI command solves this problem; they all get stuck, worsen the disk state to
"Illegal", and produce "Async Tasks" that never end.
It seems that the disk files on the storage domain have been renamed with an
added -NNNNNNNNNNNN suffix,
but the QCOW backing-file locations inside the data files are not updated. They
still point to the old disk names.
[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ls -la
insgesamt 4983972
drwxr-xr-x. 2 vdsm kvm 4096 28. Feb 12:57 .
drwxr-xr-x. 64 vdsm kvm 4096 1. Mär 12:02 ..
-rw-rw----. 1 vdsm kvm 53687091200 5. Sep 2016
239c0ffc-8249-4d08-967a-619abbbb897a
-rw-rw----. 1 vdsm kvm 1048576 5. Sep 2016
239c0ffc-8249-4d08-967a-619abbbb897a.lease
-rw-r--r--. 1 vdsm kvm 319 5. Sep 2016
239c0ffc-8249-4d08-967a-619abbbb897a.meta
-rw-rw----. 1 vdsm kvm 966393856 6. Sep 2016
2f773536-9b60-4f53-b179-dbf64d182a41
-rw-rw----. 1 vdsm kvm 1048576 5. Sep 2016
2f773536-9b60-4f53-b179-dbf64d182a41.lease
-rw-r--r--. 1 vdsm kvm 264 6. Sep 2016
2f773536-9b60-4f53-b179-dbf64d182a41.meta
-rw-rw----. 1 vdsm kvm 2155806720 14. Feb 11:53
67f96ffc-3a4f-4f3d-9c1b-46293e0be762
-rw-rw----. 1 vdsm kvm 1048576 6. Sep 2016
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.lease
-rw-r--r--. 1 vdsm kvm 260 6. Sep 2016
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.meta
[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# file *
239c0ffc-8249-4d08-967a-619abbbb897a: x86 boot sector; partition
1: ID=0x83, starthead 32, startsector 2048, 104853504 sectors, code
offset 0xb8
239c0ffc-8249-4d08-967a-619abbbb897a.lease: data
239c0ffc-8249-4d08-967a-619abbbb897a.meta: ASCII text
2f773536-9b60-4f53-b179-dbf64d182a41: QEMU QCOW Image (v2), has
backing file (path
../706ff176-4f96-42fe-a5fa-56434347f16c/239c0ffc-8249-4d08-967a),
53687091200 bytes
2f773536-9b60-4f53-b179-dbf64d182a41.lease: data
2f773536-9b60-4f53-b179-dbf64d182a41.meta: ASCII text
67f96ffc-3a4f-4f3d-9c1b-46293e0be762: QEMU QCOW Image (v2), has
backing file (path
../706ff176-4f96-42fe-a5fa-56434347f16c/2f773536-9b60-4f53-b179),
53687091200 bytes
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.lease: data
67f96ffc-3a4f-4f3d-9c1b-46293e0be762.meta: ASCII text
My solution is to hard-link the disk files to the old names too. Then the
disks can be handled by the UI again.
[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ln
239c0ffc-8249-4d08-967a-619abbbb897a 239c0ffc-8249-4d08-967a
[root@ovn-monster 706ff176-4f96-42fe-a5fa-56434347f16c]# ln
2f773536-9b60-4f53-b179-dbf64d182a41 2f773536-9b60-4f53-b179
To fix the illegal disk state, I manipulated the postgres database
directly, thanks to the ovirt-users mailing list.
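For the record, the shape of that database fix as usually quoted on this list (run on the engine machine, with a DB backup first; the disk UUID is a placeholder, and the imagestatus encoding of 1 = OK, 4 = Illegal is an assumption to double-check against your schema):

sudo -u postgres psql engine -c "UPDATE images SET imagestatus = 1 WHERE image_group_id = '<disk-uuid>' AND imagestatus = 4;"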
A rescan of disks in the UI could also work; I will test it in the
evening, as I have a lot of old exported disks with snapshots ...
Is there a smarter way to do it?
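One possibly smarter route (a sketch only, not verified on an oVirt storage domain; ownership, permissions and SELinux labels must stay exactly as VDSM expects) is to rewrite the stale backing-file reference in place instead of recreating the old file name, using qemu-img's unsafe rebase, which only changes the header string and copies no data:

# run inside the image directory shown in the listing above
qemu-img rebase -u \
    -b ../706ff176-4f96-42fe-a5fa-56434347f16c/239c0ffc-8249-4d08-967a-619abbbb897a \
    2f773536-9b60-4f53-b179-dbf64d182a41
qemu-img info 2f773536-9b60-4f53-b179-dbf64d182a41   # confirm the backing file now carries the full name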
Cheers!
Olri
6 years, 9 months
VM Migrations
by Bryan Sockel
Hi,
I am having an issue migrating all VMs based on a specific template. The
template was created in a previous oVirt environment (4.1), and all VMs
deployed from this template experience the same issue.
I would like to find a resolution for both the template and the VMs already
deployed from it. The VM in question is VDI-Bryan and the migration starts
around 12:25. I have attached the engine.log and the vdsm.log file from the
destination server.
Thanks
Bryan
6 years, 9 months