Hi Dirk,
I'm having some trouble following the configuration.
I have successfully deployed a Hosted Engine over a VLAN-tagged interface
in the past.
The configuration on the host should be as follows:
---
root@rhel86-host1.p1.lab ~ # hostname
rhel86-host1.p1.lab
root@rhel86-host1.p1.lab ~ # cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.1.11 rhel86-host1.p1.lab
172.16.1.20 rhvm.p1.lab
root@rhel86-host1.p1.lab ~ # ip -4 --brief address show
lo UNKNOWN 127.0.0.1/8
eth0 UP 192.168.123.139/24
eth1.2001@eth1 UP 172.16.1.11/24
---
As you can see, there is just the hostname definition and a regular VLAN
interface carrying the hypervisor's IP. The installer will then take care of
creating the 'ovirtmgmt' network and connecting both the hypervisor and the
HostedEngine VM to it, preserving the VLAN configuration as required.
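For reference, after a successful deployment the host side ends up looking
roughly like this (device and bridge names taken from my example above; the
output is reconstructed from memory, so treat it as a sketch):
---
root@rhel86-host1.p1.lab ~ # ip -4 --brief address show
lo               UNKNOWN        127.0.0.1/8
eth0             UP             192.168.123.139/24
ovirtmgmt        UP             172.16.1.11/24
root@rhel86-host1.p1.lab ~ # ip link show eth1.2001 | grep -o 'master ovirtmgmt'
master ovirtmgmt
---
In other words, as far as I remember the installer moves the IP from eth1.2001
onto the 'ovirtmgmt' bridge and makes the tagged interface a port of that bridge.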
Two comments on this:
a) I found that using VLAN tagging for ovirtmgmt usually makes things more
difficult in the future, should you ever need to renumber/change the VLAN.
b) I can't speak to the current status, but Red Hat dropped support for deploying
RHV-M (Hosted Engine) with Cockpit a few releases ago. Not sure if it is
still in shape.
After checking both the download and documentation URLs, it seems it was
also dropped from oVirt; there is no mention of a Cockpit installation whatsoever:
- https://ovirt.org/download/
- https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_...
I've been able to find a blog that covers the whole cockpit-ovirt
deployment process, in case it helps:
- https://anthonyspiteri.net/ovirt-kvm-homelab-1/
- https://anthonyspiteri.net/ovirt-kvm-homelab-2/
My suggestion would be to check the 'hosted-engine --deploy' CLI installer,
which is known to work well.
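In case you go down that route, the flow is roughly the following (the installer
then prompts for the management interface, the engine FQDN, storage, and so on):
---
root@rhel86-host1.p1.lab ~ # dnf install -y ovirt-hosted-engine-setup
root@rhel86-host1.p1.lab ~ # hosted-engine --deploy
---
When it asks which NIC to use for the management bridge, picking the tagged
interface (eth1.2001 in my example) is what builds ovirtmgmt with the VLAN
preserved.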
Hope this helps,
Fran
On Wed, Mar 22, 2023 at 09:13:37AM +0100, Dirk H. Schulz wrote:
> Hi Fran,
>
> thanks for helping me.
>
> I have defined a separate fqdn and ip address for the engine - sorry
> for not mentioning that. It is also in /etc/hosts.
>
>
> The problem seems to be the following:
>
> The error message is thrown by ansible if
>
> he_host_ip not in target_address_v4.stdout_lines and
> he_host_ip not in target_address_v6.stdout_lines
>
> and grepping for he_host_ip in
> ovirt-hosted-engine-setup-ansible-initial_clean-XYZ.log shows that it is
> the IP address of the physical interface, while target_address_v4 is
> explicitly set to the "ip addr show" output of the VLAN interface.
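>
> A rough way to reproduce that comparison by hand (interface name as in my
> setup; this is just a sanity check, not what the role literally runs) would be:
>
> # what gets resolved for the host FQDN (also honours /etc/hosts)
> getent ahostsv4 "$(hostname -f)" | awk '{print $1}' | sort -u
> # vs. the addresses actually present on the selected VLAN interface
> ip -4 -o addr show dev enp5s0.4000 | awk '{print $4}' | cut -d/ -f1
>
> which should make visible on which side the mismatch happens.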
>
> Since I defined the VLAN interface as the bridge interface in the VM
> settings for the engine, this seems strange to me.
>
> What is even stranger: if I put in the physical host interface as the
> bridge interface, the error does not occur, but then the engine VM is
> bound to the default libvirt bridge (which I did not want), and the
> setup process fails with "There was a failure deploying the
> engine on the local engine VM.", which I now have to analyze.
>
> Is my idea of binding the bridge to the VLAN interface to get a
> management network there completely wrong?
>
> I did not manage to find any docs on what the Cockpit module expects
> there, and the oVirt setup docs are also very sparse - can you point
> me to some in-depth examples of the requirements the self-hosted
> engine setup expects?
>
> Cheers,
>
> Dirk
>
>
> On 21.03.23 at 15:30, Fran Garcia wrote:
> >The hosted engine is a regular virtual machine.
> >
> >You need to have a separate FQDN and IP, different from those assigned
> >to the hypervisors.
> >
> >In your example, you need to assign an FQDN/IP from VLAN 4000 (or
> >wherever the mgmt VLAN is), and use it whenever queried about the
> >Hosted Engine VM details.
> >
> >HTH
> >
> >Fran
> >
> >On Tue, 21 Mar 2023 at 13:07, <dirk.schulz@geniodata.de> wrote:
> >>Hi all,
> >>I have set up 3 servers in 3 data centers, each having one physical
> >>interface and a VLAN interface parented by it.
> >>The connection between the 3 servers over the VLAN interfaces (using
> >>private IP addresses) works (using ICMP ping as the test).
> >>
> >>Now I want to turn them into an oVirt cluster, creating the self-hosted
> >>engine on the first server. I have:
> >>- made sure the engine FQDN is in DNS, forward and reverse, and in /etc/hosts,
> >>- made sure that both interfaces have unique DNS entries which can be
> >>  resolved forward and reverse,
> >>- made sure that both interfaces' FQDNs are in /etc/hosts,
> >>- made sure only the primary hostname (not the FQDN) is in /etc/hostname,
> >>- made sure IPv6 is available on the physical interface,
> >>- made sure the IPv6 method is "disabled" on the VLAN interface,
> >>- set he_force_ip4: true in
> >>  /usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/defaults/main.yml
> >>  to make sure no IPv6 attempt interferes.
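> >>
> >>(The DNS/hosts checks above were done roughly like this - hostnames and the
> >>address are placeholders, just to illustrate what I verified:)
> >>
> >># forward lookups, which also honour /etc/hosts
> >>for n in engine.example.lab host1.example.lab host1-mgmt.example.lab; do
> >>    getent hosts "$n"
> >>done
> >># reverse lookup for each interface address
> >>dig +short -x 172.16.40.11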
> >>
> >>Now when I use Cockpit's hosted engine wizard (not hyperconverged), I run
> >>into 2 opposing problems.
> >>If I set the FQDN in the "Advanced" sub-pane to the FQDN of the VLAN
> >>interface, the wizard gets stuck at "preparing VM" with "The resolved
> >>address doesn't resolve on the selected interface\n".
> >>If I set the FQDN in the "Advanced" sub-pane to the FQDN of the physical
> >>interface, I get the same result.
> >>
> >>If I add the physical interface's FQDN to the VLAN IP address in /etc/hosts,
> >>I get "hostname 'x.y.z' doesn't uniquely match the interface 'enp5s0.4000'
> >>selected for the management bridge; it matches also interface with IP
> >>['physical']. Please make sure that the hostname got from the interface for
> >>the management network resolves only there." So clearly separating the two
> >>interfaces name-wise is mandatory.
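> >>
> >>So to keep them separate, my /etc/hosts now maps each name to exactly one
> >>address, roughly along these lines (names and addresses anonymised):
> >>
> >>192.0.2.11     host1.example.lab        # physical interface
> >>172.16.40.11   host1-mgmt.example.lab   # enp5s0.4000, management VLAN
> >>172.16.40.20   engine.example.lab       # hosted engine VM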
> >>
> >>I tried to follow the Ansible workflow step by step to see what it does. It
> >>seems the hostname validation is triggered twice, the second time on filling
> >>in the FQDN in the "Advanced" sub-pane - it succeeds with both hostnames
> >>(physical interface and VLAN IP), but that does not prevent the "prepare VM"
> >>workflow from doing the same verification and failing, as far as I can see.
> >>This is where it happens:
> >>2023-03-20 14:31:48,354+0100 DEBUG ansible on_any args TASK: ovirt.ovirt.hosted_engine_setup : Check the resolved address resolves on the selected interface kwargs is_conditional:False
> >>2023-03-20 14:31:48,355+0100 DEBUG ansible on_any args localhost TASK: ovirt.ovirt.hosted_engine_setup : Check the resolved address resolves on the selected interface kwargs
> >>2023-03-20 14:31:48,481+0100 DEBUG var changed: host "localhost" var "ansible_play_hosts" type "<class 'list'>" value: "[]"
> >>2023-03-20 14:31:48,481+0100 DEBUG var changed: host "localhost" var "ansible_play_batch" type "<class 'list'>" value: "[]"
> >>2023-03-20 14:31:48,481+0100 DEBUG var changed: host "localhost" var "play_hosts" type "<class 'list'>" value: "[]"
> >>2023-03-20 14:31:48,481+0100 ERROR ansible failed {
> >>    "ansible_host": "localhost",
> >>    "ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
> >>    "ansible_result": {
> >>        "_ansible_no_log": false,
> >>        "changed": false,
> >>        "msg": "The resolved address doesn't resolve on the selected interface\n"
> >>    },
> >>    "ansible_task": "Check the resolved address resolves on the selected interface",
> >>    "ansible_type": "task",
> >>    "status": "FAILED",
> >>    "task_duration": 0
> >>}
> >>
> >>
> >>So I am really stuck there. I do not have any idea how or where to go on.
> >>I can try changing bits in the playbooks and parameters (like using
> >>"hostname -A" instead of "hostname -f" for the failing test), but that
> >>can't really be the idea - I am too new to this to be running into a bug
> >>or similar, so I suspect I am overlooking something.
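> >>
> >>(For reference, the difference I mean there is between:
> >>
> >>hostname -f     # the single canonical FQDN, which the failing test seems to use
> >>hostname -A     # all FQDNs for all configured addresses
> >>
> >>but as said, patching the playbook around the check feels wrong.)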
> >>
> >>Any hint or help is appreciated.
> >>
> >>Cheers,
> >>
> >>Dirk