Failed to configure management network on the host. - oVirt in a KVM

Hi guys,

I'm trying oVirt in a KVM and I get this:
...
[ INFO ] The host has been set in non_operational status, deployment errors: code 505: Host c8ovirt1.private.road installation failed. Failed to configure management network on the host., code 519: Host c8ovirt1.private.road does not comply with the cluster road networks, the following networks are missing on host: 'ovirtmgmt', code 1120: Failed to configure management network on host c8ovirt1.private.road due to setup networks failure., code 9000: Failed to verify Power Management configuration for Host c8ovirt1.private.road., code 10802: VDSM c8ovirt1.private.road command HostSetupNetworksVDS failed: Internal JSON-RPC error: {'reason': 'Failed to find interface to with route table ID 254 to store route rules'},
...

How much of a worry is it, and how do I fix such a KVM vm so oVirt would be happy?
many thanks, L.

Hello,
usually KVM hosts work without any issues for me, as long as there is a single interface that will act as the management network.
Can you please share the relevant part of supervdsm.log?
Thank you. Regards, Ales
-- Ales Musil, Software Engineer - RHV Network, Red Hat EMEA <https://www.redhat.com>, amusil@redhat.com

Just in case it's not clear from that log snippet I pasted (I should have made it clearer): it's a deployment of the engine. The KVM vm has only a single iface.
...
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Generate the error message from the engine events]
[ INFO ] ok: [localhost]
[ INFO ] The host has been set in non_operational status, deployment errors: code 505: Host c8ovirt1.private.road installation failed. Failed to configure management network on the host., code 519: Host c8ovirt1.private.road does not comply with the cluster road networks, the following networks are missing on host: 'ovirtmgmt', code 1120: Failed to configure management network on host c8ovirt1.private.road due to setup networks failure., code 9000: Failed to verify Power Management configuration for Host c8ovirt1.private.road., code 10802: VDSM c8ovirt1.private.road command HostSetupNetworksVDS failed: Internal JSON-RPC error: {'reason': 'Failed to find interface to with route table ID 254 to store route rules'},
[ INFO ] skipping: [localhost]
[ INFO ] You can now connect to https://c8ovirt1.private.road:6900/ovirt-engine/ and check the status of this host and eventually remediate it, please continue only when the host is listed as 'up'
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until /tmp/ansible.con_3ppb_he_setup_lock is removed, delete it once ready to proceed]
...

and tail of supervdsm.log:
...
(cwd None) MainThread::DEBUG::2021-02-07 18:03:40,618::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainThread::DEBUG::2021-02-07 18:03:40,618::cmdutils::130::root::(exec_cmd) /usr/sbin/lldptool -ping (cwd None)
MainThread::DEBUG::2021-02-07 18:03:40,668::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainThread::DEBUG::2021-02-07 18:03:40,669::cmdutils::130::root::(exec_cmd) /usr/sbin/lldptool get-lldp -i ens3 adminStatus (cwd None)
MainThread::DEBUG::2021-02-07 18:03:40,672::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainThread::DEBUG::2021-02-07 18:03:40,672::cmdutils::130::root::(exec_cmd) /usr/sbin/lldptool set-lldp -i ens3 adminStatus=rx (cwd None)
MainThread::DEBUG::2021-02-07 18:03:40,674::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
MainThread::DEBUG::2021-02-07 18:03:40,675::supervdsm_server::311::SuperVdsm.Server::(main) Started serving super vdsm object
restore-net::INFO::2021-02-07 18:03:43,508::restore_net_config::290::root::(restore) starting network restoration.
restore-net::DEBUG::2021-02-07 18:03:43,510::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None)
restore-net::DEBUG::2021-02-07 18:03:43,520::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0
restore-net::DEBUG::2021-02-07 18:03:43,579::context::144::root::(register_async) Async action: Retrieve applied config: ethernet ens3 started
restore-net::DEBUG::2021-02-07 18:03:43,579::context::144::root::(register_async) Async action: Retrieve applied config: bridge virbr0 started
restore-net::DEBUG::2021-02-07 18:03:43,579::context::144::root::(register_async) Async action: Retrieve applied config: tun vnet0 started
restore-net::DEBUG::2021-02-07 18:03:43,582::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet ens3 finished
restore-net::DEBUG::2021-02-07 18:03:43,582::context::153::root::(finish_async) Async action: Retrieve applied config: bridge virbr0 finished
restore-net::DEBUG::2021-02-07 18:03:43,583::context::153::root::(finish_async) Async action: Retrieve applied config: tun vnet0 finished
restore-net::DEBUG::2021-02-07 18:03:43,600::context::144::root::(register_async) Async action: Retrieve applied config: ethernet ens3 started
restore-net::DEBUG::2021-02-07 18:03:43,600::context::144::root::(register_async) Async action: Retrieve applied config: bridge virbr0 started
restore-net::DEBUG::2021-02-07 18:03:43,600::context::144::root::(register_async) Async action: Retrieve applied config: tun vnet0 started
restore-net::DEBUG::2021-02-07 18:03:43,602::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet ens3 finished
restore-net::DEBUG::2021-02-07 18:03:43,602::context::153::root::(finish_async) Async action: Retrieve applied config: bridge virbr0 finished
restore-net::DEBUG::2021-02-07 18:03:43,602::context::153::root::(finish_async) Async action: Retrieve applied config: tun vnet0 finished
restore-net::INFO::2021-02-07 18:03:43,610::restore_net_config::89::root::(unified_restoration) Remove networks ({}) and bonds ({}).
restore-net::INFO::2021-02-07 18:03:43,610::restore_net_config::96::root::(unified_restoration) Setup networks ({}) and bonds ({}).
restore-net::INFO::2021-02-07 18:03:43,610::restore_net_config::297::root::(restore) restoration completed successfully.

regards, L.

This looks like a log after reboot. Is there a setup networks call with the failure? Thanks. Regards, Ales

Honestly I do not know how to answer this as it my very first try to test-deploy oVirt. I can log in to the web to which deployment process points to, as above, and there in "Administration Portal" I see that host is down with: " Host has disabled power management. There are network configuration changes that haven't been saved yet. Host has no default route. " That last line must relate I guess, to "Failed to find interface to with route table ID 254 to store route rules" But like I said - the host(KVM vm, everything is on Centos Stream) has one iface and of course there is default routing on it. And another snippet of a tailf of the log, not after reboot but from around the time deployment process got to that stage: ..... MainProcess|jsonrpc/5::DEBUG::2021-02-08 10:07:48,630::ifaces::668::root::(_remove_unknown_type_interfaces) Interface lo is type unknown and will be ignored during the activation MainProcess|jsonrpc/5::DEBUG::2021-02-08 10:07:48,630::ifaces::668::root::(_remove_unknown_type_interfaces) Interface virbr0-nic is type unknown and will be ignored during the activation MainProcess|jsonrpc/5::DEBUG::2021-02-08 10:07:48,630::ifaces::668::root::(_remove_unknown_type_interfaces) Interface vnet0 is type unknown and will be ignored during the activation MainProcess|jsonrpc/5::ERROR::2021-02-08 10:07:48,632::supervdsm_server::98::SuperVdsm.ServerCallback::(wrapper) Error in setupNetworks Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 96, in wrapper res = func(*args, **kwargs) File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line 206, in setupNetworks _setup_networks(networks, bondings, options) File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line 230, in _setup_networks netswitch.configurator.setup(networks, bondings, options, in_rollback) File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 71, in setup _setup_nmstate(networks, bondings, options, in_rollback) File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 94, in _setup_nmstate nmstate.setup(desired_state, verify_change=not in_rollback) File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate/api.py", line 50, in setup state_apply(desired_state, verify_change=verify_change) File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 69, in apply net_state = NetState(desired_state, current_state, save_to_disk) File "/usr/lib/python3.6/site-packages/libnmstate/net_state.py", line 64, in __init__ self._ifaces.gen_route_rule_metadata(self._route_rule, self._route) File "/usr/lib/python3.6/site-packages/libnmstate/ifaces/ifaces.py", line 617, in gen_route_rule_metadata route_state, self._kernel_ifaces File "/usr/lib/python3.6/site-packages/libnmstate/route_rule.py", line 197, in gen_metadata route_state, route_table, ifaces File "/usr/lib/python3.6/site-packages/libnmstate/route_rule.py", line 233, in _iface_for_route_table "Failed to find interface to with route table ID " libnmstate.error.NmstateValueError: Failed to find interface to with route table ID 254 to store route rules MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,533::supervdsm_server::94::SuperVdsm.ServerCallback::(wrapper) call network_caps with () {} MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,535::cmdutils::130::root::(exec_cmd) /sbin/tc qdisc show (cwd None) MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,538::cmdutils::138::root::(exec_cmd) SUCCESS: <err> = b''; <rc> = 0 
MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,602::context::144::root::(register_async) Async action: Retrieve applied config: ethernet ens3 started MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,602::context::144::root::(register_async) Async action: Retrieve applied config: bridge virbr0 started MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,602::context::144::root::(register_async) Async action: Retrieve applied config: tun vnet0 started MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,604::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet ens3 finished MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,604::context::153::root::(finish_async) Async action: Retrieve applied config: bridge virbr0 finished MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,604::context::153::root::(finish_async) Async action: Retrieve applied config: tun vnet0 finished MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,617::context::144::root::(register_async) Async action: Retrieve applied config: ethernet ens3 started MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,617::context::144::root::(register_async) Async action: Retrieve applied config: bridge virbr0 started MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,617::context::144::root::(register_async) Async action: Retrieve applied config: tun vnet0 started MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,620::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet ens3 finished MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,620::context::153::root::(finish_async) Async action: Retrieve applied config: bridge virbr0 finished MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,620::context::153::root::(finish_async) Async action: Retrieve applied config: tun vnet0 finished MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,626::nic::40::root::(speed) cannot read ens3 speed MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,627::supervdsm_server::101::SuperVdsm.ServerCallback::(wrapper) return network_caps with {'networks': {}, 'bondings': {}, 'bridges': {'virbr0': {'ports': ['virbr0-nic', 'vnet0'], 'stp': 'on', 'addr': '192.168.222.1', 'ipv4addrs': ['192.168.222.1/24'], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '5082', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.525400aa16f9', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '194', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '1', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.525400aa16f9', 'forward_delay': '200', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 
'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}}, 'nics': {'ens3': {'hwaddr': '52:54:00:c1:ef:57', 'addr': '10.3.1.111', 'ipv4addrs': ['10.3.1.111/24'], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '10.3.1.254', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4defaultroute': True, 'dhcpv4': False, 'dhcpv6': False, 'speed': 0}}, 'vlans': {}, 'nameservers': ['10.3.1.254'], 'supportsIPv6': True} MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,632::supervdsm_server::94::SuperVdsm.ServerCallback::(wrapper) call get_cpu_vulnerabilities with () {} MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,633::supervdsm_server::101::SuperVdsm.ServerCallback::(wrapper) return get_cpu_vulnerabilities with {'SPECTRE_V2': '(Mitigation: Full AMD retpoline, IBPB: conditional, STIBP: disabled, RSB filling)', 'ITLB_MULTIHIT': '(Not affected)', 'MDS': '(Not affected)', 'L1TF': '(Not affected)', 'SPEC_STORE_BYPASS': '(Mitigation: Speculative Store Bypass disabled via prctl and seccomp)', 'TSX_ASYNC_ABORT': '(Not affected)', 'SPECTRE_V1': '(Mitigation: usercopy/swapgs barriers and __user pointer sanitization)', 'SRBDS': '(Not affected)', 'MELTDOWN': '(Not affected)'} MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,633::supervdsm_server::94::SuperVdsm.ServerCallback::(wrapper) call check_qemu_conf_contains with ('vnc_tls', '1') {} MainProcess|jsonrpc/4::DEBUG::2021-02-08 10:07:52,633::supervdsm_server::101::SuperVdsm.ServerCallback::(wrapper) return check_qemu_conf_contains with True MainProcess|jsonrpc/6::DEBUG::2021-02-08 10:07:53,294::supervdsm_server::94::SuperVdsm.ServerCallback::(wrapper) call getHardwareInfo with () {} ....
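
As a side note, if it helps to see what nmstate itself thinks owns routing table 254, here is a minimal read-only sketch (assuming python3-libnmstate is installed on the host; the 'route-table' and 'table-id' keys follow the nmstate schema) that dumps the configured route rules and routes referencing the main table:

# Read-only diagnostic sketch -- it only prints state, it changes nothing.
import json
import libnmstate

MAIN_TABLE = 254  # the kernel "main" routing table

state = libnmstate.show()  # current network state as a plain dict

# Route rules that point at table 254 -- these are what nmstate tries to
# attach to an interface when it raises the error above.
rules = state.get('route-rules', {}).get('config', [])
print(json.dumps([r for r in rules if r.get('route-table') == MAIN_TABLE], indent=2))

# Configured routes in table 254; an empty result here would help explain why
# nmstate cannot find an interface on which to store the rules.
routes = state.get('routes', {}).get('config', [])
print(json.dumps([r for r in routes if r.get('table-id') == MAIN_TABLE], indent=2))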

Did you disable NetworkManager.service? Best Regards, Strahil Nikolov

I do not think that's possible. On CentOS Stream, the NetworkManager service, even if disabled, is "pulled up" by vdsm itself. Removing the NetworkManager rpm package is also not possible, as oVirt or its dependencies require it. Unless this is a bug, it seems that 4.4.5-master on CentOS Stream works that way, differently.
regards, L.

Disabling NetworkManager would break all networking on the host, because from 4.4 it is the main provider for networking.
Can you please also provide the line from supervdsm.log that starts with "Desired state:"?
CentOS Stream on its own had some issues, and I am not sure if anyone has tried to successfully deploy the host there.
Regards, Ales

-> $ grep esired.stat -B8 -A8 /var/log/vdsm/supervdsm.log MainProcess|jsonrpc/3::INFO::2021-02-09 15:04:19,520::hooks::122::root::(_runHooksDir) /usr/libexec/vdsm/hooks/before_network_setup/50_fcoe: rc=0 err=b'' MainProcess|jsonrpc/3::INFO::2021-02-09 15:04:19,520::configurator::90::root::(_setup_nmstate) Processing setup through nmstate MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,528::context::144::root::(register_async) Async action: Retrieve applied config: ethernet ens3 started MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,528::context::144::root::(register_async) Async action: Retrieve applied config: bridge virbr0 started MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,528::context::144::root::(register_async) Async action: Retrieve applied config: tun vnet0 started MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,530::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet ens3 finished MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,530::context::153::root::(finish_async) Async action: Retrieve applied config: bridge virbr0 finished MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,530::context::153::root::(finish_async) Async action: Retrieve applied config: tun vnet0 finished MainProcess|jsonrpc/3::INFO::2021-02-09 15:04:19,537::configurator::92::root::(_setup_nmstate) Desired state: {'interfaces': [{'name': 'ens3', 'state': 'up', 'mtu': 1500, 'ipv4': {'enabled': False}, 'ipv6': {'enabled': False}}, {'name': 'ovirtmgmt', 'type': 'linux-bridge', 'state': 'up', 'mtu': 1500, 'bridge': {'port': [{'name': 'ens3'}], 'options': {'stp': {'enabled': False}}}, 'ipv4': {'enabled': True, 'address': [{'ip': '10.3.1.111', 'prefix-length': 24}], 'dhcp': False}, 'ipv6': {'enabled': False}}], 'routes': {'config': [{'next-hop-address': '10.3.1.254', 'next-hop-interface': 'ovirtmgmt', 'destination': '0.0.0.0/0', 'table-id': 0}]}, 'dns-resolver': {'config': {'server': ['10.3.1.254']}}} MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,559::context::144::root::(register_async) Async action: Retrieve applied config: ethernet ens3 started MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,559::context::144::root::(register_async) Async action: Retrieve applied config: bridge virbr0 started MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,559::context::144::root::(register_async) Async action: Retrieve applied config: tun vnet0 started MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,562::context::153::root::(finish_async) Async action: Retrieve applied config: ethernet ens3 finished MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,562::context::153::root::(finish_async) Async action: Retrieve applied config: bridge virbr0 finished MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,562::context::153::root::(finish_async) Async action: Retrieve applied config: tun vnet0 finished MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,585::ifaces::668::root::(_remove_unknown_type_interfaces) Interface lo is type unknown and will be ignored during the activation MainProcess|jsonrpc/3::DEBUG::2021-02-09 15:04:19,585::ifaces::668::root::(_remove_unknown_type_interfaces) Interface virbr0-nic is type unknown and will be ignored during the activation -- res = func(*args, **kwargs) File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line 206, in setupNetworks _setup_networks(networks, bondings, options) File "/usr/lib/python3.6/site-packages/vdsm/network/api.py", line 230, in _setup_networks netswitch.configurator.setup(networks, bondings, options, in_rollback) File 
"/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 71, in setup _setup_nmstate(networks, bondings, options, in_rollback) File "/usr/lib/python3.6/site-packages/vdsm/network/netswitch/configurator.py", line 94, in _setup_nmstate nmstate.setup(desired_state, verify_change=not in_rollback) File "/usr/lib/python3.6/site-packages/vdsm/network/nmstate/api.py", line 50, in setup state_apply(desired_state, verify_change=verify_change) File "/usr/lib/python3.6/site-packages/libnmstate/netapplier.py", line 69, in apply net_state = NetState(desired_state, current_state, save_to_disk) File "/usr/lib/python3.6/site-packages/libnmstate/net_state.py", line 64, in __init__ self._ifaces.gen_route_rule_metadata(self._route_rule, self._route) File "/usr/lib/python3.6/site-packages/libnmstate/ifaces/ifaces.py", line 617, in gen_route_rule_metadata route_state, self._kernel_ifaces File "/usr/lib/python3.6/site-packages/libnmstate/route_rule.py", line 197, in gen_metadata route_state, route_table, ifaces File "/usr/lib/python3.6/site-packages/libnmstate/route_rule.py", line 233, in _iface_for_route_table "Failed to find interface to with route table ID "
CentOS Stream on its own had some issues, and I am not sure if anyone has tried to successfully deploy the host there.
I do not get it - I hope you don't mind my saying - since Red Hat's announcement to all of us a while ago about CentOS Stream now being the only way into the future, I hear that... well, not really... Not true? Which one is it?

It seems like it might be a bug in nmstate. Would you be willing to try:
dnf -y copr enable packit/nmstate-nmstate-1508
dnf -y update nmstate
then reboot the host and redeploy?
Regards, Ales

I think it is looking better. There were some other updates dnf pulled in. So, a short & incomplete list of rpms:
cockpit-ovirt-dashboard-0.14.20-0.0.master.20210207161806.git6ee9bc3.el8.noarch ovirt-ansible-collection-1.3.1-0.1.master.20210128112414.el8.noarch ovirt-engine-appliance-4.4-20210208183902.1.el8.x86_64 ovirt-host-4.4.6-0.0.20210127122119.gitef84c5a.el8.x86_64 ovirt-host-dependencies-4.4.6-0.0.20210127122119.gitef84c5a.el8.x86_64 ovirt-hosted-engine-ha-2.4.7-0.0.master.20210203134854.20210203134846.git7d297c2.el8.noarch ovirt-hosted-engine-setup-2.5.0-0.0.master.20201216174101.git2a94b06.el8.noarch ovirt-imageio-client-2.2.0-0.202102041750.git98b0a36.el8.x86_64 ovirt-imageio-common-2.2.0-0.202102041750.git98b0a36.el8.x86_64 ovirt-imageio-daemon-2.2.0-0.202102041750.git98b0a36.el8.x86_64 ovirt-openvswitch-2.11-0.2020061801.el8.noarch ovirt-openvswitch-ovn-2.11-0.2020061801.el8.noarch ovirt-openvswitch-ovn-common-2.11-0.2020061801.el8.noarch ovirt-openvswitch-ovn-host-2.11-0.2020061801.el8.noarch ovirt-provider-ovn-driver-1.2.34-0.20201207083749.git75016ed.el8.noarch ovirt-python-openvswitch-2.11-0.2020061801.el8.noarch ovirt-release-master-4.4.5-0.0.master.20210208011114.git6d17342.el8.noarch ovirt-vmconsole-1.0.9-1.20201130191550.git0bf874a.el8.noarch ovirt-vmconsole-host-1.0.9-1.20201130191550.git0bf874a.el8.noarch python3-ovirt-engine-sdk4-4.4.10-1.20210208.gita10779e.el8.x86_64 python3-ovirt-setup-lib-1.3.3-0.0.master.20200727063144.git90cd6d9.el8.noarch -> $ rpm -qa \*nmst\* | sort nmstate-1.0.2-0.20210208061141616978.pr1508.13.gfcaaacf.el8.noarch nmstate-plugin-ovsdb-1.0.2-0.20210208061141616978.pr1508.13.gfcaaacf.el8.noarch python3-libnmstate-1.0.2-0.20210208061141616978.pr1508.13.gfcaaacf.el8.noarch I hit now what's below. It might not relate to the original issue but just in case and so I can report back that "deployment" was 100% successful. Here: ... [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add NFS storage domain] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Add glusterfs storage domain] [ ERROR ] Verify permission settings on the specified storage path.]". HTTP response code is 400. [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Permission settings on the specified path do not allow access to the storage.\nVerify permission settings on the specified storage path.]\". HTTP response code is 400."} Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]: glusterfs Please specify the full shared storage connection path to use (example: host:/path): 10.1.1.201:/oVirt If needed, specify additional mount options for the connection to the hosted-engine storagedomain (example: rsize=32768,wsize=32768) []: [ INFO ] Creating Storage Domain [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps] [ INFO ] ok: [localhost] ... And it loops around that stage. I have no clue what it is saying. I can say though that I can mount & use that same glusterfs vol from the deploying host with no problems. regards, L.
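
On the glusterfs "Permission settings" error: that message usually means the mounted storage path is not accessible to the vdsm user. A small sketch for checking what ownership the host actually sees on the volume (assumptions: vdsm runs as uid/gid 36, i.e. vdsm:kvm, which is the oVirt default, and the mount point below is only a placeholder for wherever you mount 10.1.1.201:/oVirt for testing):

# Sketch: verify a mounted gluster path looks usable by vdsm (uid/gid 36).
import os
import stat

VDSM_UID = 36  # "vdsm" user on oVirt hosts
KVM_GID = 36   # "kvm" group on oVirt hosts
path = '/mnt/ovirt-check'  # hypothetical temporary mount of 10.1.1.201:/oVirt

st = os.stat(path)
print('owner uid/gid:', st.st_uid, st.st_gid)
print('mode:', stat.filemode(st.st_mode))

if (st.st_uid, st.st_gid) != (VDSM_UID, KVM_GID):
    # On gluster this is commonly addressed with the storage.owner-uid and
    # storage.owner-gid volume options (set both to 36), then a remount.
    print('path is not owned by vdsm:kvm -- the engine is likely to refuse it')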

I am not familiar enough with storage. @Nir Soffer <nsoffer@redhat.com>, can you please help identify what might be wrong here?
-- Ales Musil Software Engineer - RHV Network Red Hat EMEA <https://www.redhat.com> amusil@redhat.com IM: amusil <https://red.ht/sig>
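Collected in one place, the nmstate workaround suggested earlier in the thread, as it would look on the host. The final hosted-engine --deploy line is an assumption here, since the thread does not show whether the deployment was driven from the CLI or from Cockpit:

  # enable the copr repository carrying the patched nmstate build and update
  dnf -y copr enable packit/nmstate-nmstate-1508
  dnf -y update nmstate

  # confirm the pr1508 build is the one installed
  rpm -qa '*nmst*' | sort

  # reboot so the new nmstate is picked up, then redeploy
  reboot
  # after the host is back up (CLI deployment path assumed):
  hosted-engine --deploy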

On 10/02/2021 14:00, Ales Musil wrote:
I am not familiar enough with storage. @Nir Soffer <mailto:nsoffer@redhat.com>, can you please help identify what might be wrong here?
Never mind that last error I mentioned. That is only me: I'm still in the process of thumbing through, and I missed bits on "glusterfs" in the installation docs. I can report that with the new "nmstate" from the copr you suggested (and perhaps the updates dnf pulled in today) the deployment process is free of the original error and ends with:

[ INFO ] Hosted Engine successfully deployed

ps. I suppose that copr's nmstate should make its way into the "regular" repo(s) asap.

many thanks, L.

On Wed, Feb 10, 2021 at 3:46 PM lejeczek via Users <users@ovirt.org> wrote:
Never mind that last error I mentioned. That is only me: I'm still in the process of thumbing through, and I missed bits on "glusterfs" in the installation docs. I can report that with the new "nmstate" from the copr you suggested (and perhaps the updates dnf pulled in today) the deployment process is free of the original error and ends with:

[ INFO ] Hosted Engine successfully deployed
ps. I suppose that copr's nmstate should make its way into the "regular" repo(s) asap.

many thanks, L.
Thank you for reporting back. Yes, this fix, once finished, should make it into Stream quickly.

Regards,
Ales
-- Ales Musil Software Engineer - RHV Network Red Hat EMEA <https://www.redhat.com> amusil@redhat.com IM: amusil <https://red.ht/sig>

What is the output of 'gluster volume info <VOLUME_NAME>'?

Also, keep in mind that if you didn't use Cockpit to deploy the gluster, you will need to apply the 'virt' group of options on that volume.

P.S.: I hope you know that the engine volume should host only the HostedEngine VM.

Best Regards,
Strahil Nikolov
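A short sketch of what Strahil is asking for, with <VOLUME_NAME> left as a placeholder; 'group virt' applies the predefined option group that Gluster ships for VM-image workloads, which a Cockpit-driven deployment would otherwise have applied for you:

  # inspect the volume and its currently-set options
  gluster volume info <VOLUME_NAME>

  # apply the predefined 'virt' option group tuned for VM images
  gluster volume set <VOLUME_NAME> group virt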
participants (3)

- Ales Musil
- lejeczek
- Strahil Nikolov