On Wed, May 6, 2020 at 11:06 PM kelley bryan <kelley.bryan10(a)gmail.com> wrote:
> I am experiencing the error message in the
> ovirt-hosted-engine-setup-ansible-create_target_vm log:
>
> {2020-05-06 14:15:30,024-0500 ERROR ansible failed {'status': 'FAILED',
> 'ansible_type': 'task', 'ansible_task': u"Fail if Engine IP
> is different from engine's he_fqdn resolved IP", 'ansible_result':
> u'type: <type \'dict\'>\nstr: {\'msg\': u"Engine VM IP
> address is while the engine\'s he_fqdn
> ovirt1-engine.kelleykars.org resolves to
> 192.168.122.2. If you are using DHCP, check your DHCP reservation
> configuration", \'changed\': False, \'_ansible_no_log\': False}',
> 'task_duration': 1, 'ansible_host': u'localhost',
> 'ansible_playbook':
> u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}}
>
> Bug 1590266 says it should report "the engine VM IP address
> xxx.xxx.xxx.xxx while the engine's he_fqdn is xxxxxxxxx".
>
> I need to see what it thinks is wrong, as both dig on the engine FQDN
> and dig -x on its IP return the correct information.
Please check/share all relevant logs.
To answer the question in the subject: no, it did not reset. The code
was in ovirt-hosted-engine-setup when that bug was fixed, and it later
moved to a different project, ovirt-ansible-hosted-engine-setup, where
you can find:
- name: Get target engine VM IP address
  shell: getent {{ ip_key }} {{ he_fqdn }} | cut -d' ' -f1 | uniq
  environment: "{{ he_cmd_lang }}"
  register: engine_vm_ip
  changed_when: true
- name: Get VDSM's target engine VM stats
  command: vdsm-client VM getStats vmID={{ he_vm_details.vm.id }}
  environment: "{{ he_cmd_lang }}"
  register: engine_vdsm_stats
  changed_when: true
- name: Convert stats to JSON format
  set_fact: json_stats={{ engine_vdsm_stats.stdout|from_json }}
- name: Get target engine VM IP address from VDSM stats
  set_fact: engine_vm_ip_vdsm={{ json_stats[0].guestIPs }}
- debug: var=engine_vm_ip_vdsm
- name: Fail if Engine IP is different from engine's he_fqdn resolved IP
  fail:
    msg: >-
      Engine VM IP address is {{ engine_vm_ip_vdsm }} while the
      engine's he_fqdn {{ he_fqdn }} resolves to
      {{ engine_vm_ip.stdout_lines[0] }}. If you are using DHCP,
      check your DHCP reservation configuration
  when: engine_vm_ip_vdsm != engine_vm_ip.stdout_lines[0]
You should be able to find most of the relevant variable values used
above in your logs. Try to correlate them with reality and find out
why deploy decided there was a problem and failed. If you think there
is a bug, please open one and attach all relevant logs. Thanks!
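You can also reproduce the same check manually on the host. A minimal
sketch, assuming an IPv4 setup (ip_key is then ahostsv4; use ahostsv6
for IPv6) and using the vmID from he_vm_details in your logs:

# Resolve he_fqdn exactly like the playbook does
getent ahostsv4 ovirt1-engine.kelleykars.org | cut -d' ' -f1 | uniq

# Ask VDSM for the VM stats and check guestIPs
# (replace <vm-id> with the he_vm_details.vm.id value from your logs)
vdsm-client VM getStats vmID=<vm-id> | grep guestIPs

If the two addresses differ, that's exactly what makes the task above
fail.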
> Now this bug looks like it may be in play, but I don't see the failed
> readiness check in this log:
> https://access.redhat.com/solutions/4462431
This page is accessible only to Red Hat customers, so other people
can't access it. If you are a customer, please discuss its contents
with Red Hat's support by opening a ticket. Thanks!
That said, there is nothing magical here. If your networking
configuration is 100% consistent, everything should work. This
includes name resolution (both DNS forward (A) and reverse (PTR), or
/etc/hosts), matching IP/MAC addresses, etc. If it's not, things will
likely fail eventually. The check behind the bug you mention, and
behind the error in the log snippet you provided, is simply meant to
fail early if deploy detects some inconsistency, rather than
continuing blindly and failing much later.
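For example, a quick consistency check could look like this (a sketch;
substitute your own names and addresses - note that getent also
consults /etc/hosts via NSS, while dig queries DNS only, so the two
can legitimately disagree):

FQDN=ovirt1-engine.kelleykars.org   # your he_fqdn
IP=192.168.122.2                    # the address it resolved to, per your log

# Forward (A) lookup - should return the engine VM's real IP
dig +short A "$FQDN"

# Reverse (PTR) lookup - should return the FQDN above
dig +short -x "$IP"

# What deploy actually uses - goes through NSS, so /etc/hosts counts too
getent ahostsv4 "$FQDN"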
> Or is it because the VM fails or dies, or something else?
Most likely this isn't the case, but you can simply check: do you see
a 'qemu' process running? Can you ssh to the VM? At this stage it
might still have the local, private IP address assigned by libvirt,
which you can find by searching the logs for 'local_vm_ip'.
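Something like this, as a sketch (the log directory below is the usual
one on the deploy host; adjust if yours differs):

# Is a qemu process for the engine VM running?
pgrep -af qemu

# Find the temporary address libvirt assigned to the bootstrap VM
grep -r local_vm_ip /var/log/ovirt-hosted-engine-setup/

# Try reaching the VM on that address (replace with the IP found above)
ssh root@<local_vm_ip>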
Best regards,
--
Didi