ovirt-node-ng state "Bond status: NONE"

Hello,

I am trying to add a new ovirt-node-ng 4.4.10 node to my cluster, which is managed by ovirt-engine 4.4.10. The cluster is composed of other ovirt-node-ng hosts that have been successively updated from 4.4.4 to 4.4.10 without any problem.

The new node integrates into the cluster normally; however, when I look at the network status in the "Network Interfaces" tab, I see that all interfaces are "down". There is an indicator on the "bond0" interface that says: "Bond state: NONE".

I compared the content of "/etc/sysconfig/network-scripts" between a hypervisor that works and the one that has the problem, and I noticed that a whole bunch of files are missing, in particular the "ifup/ifdown..." files. The folder contains only the cluster-specific files plus the "ovirtmgmt" interface.

The hypervisor that has the problem otherwise seems to be perfectly functional, and ovirt-engine does not report any problem.

Have you already encountered this type of problem?

Cheers,
Renaud

On Tue, Mar 15, 2022 at 5:19 PM Renaud RAKOTOMALALA <renaud.rakotomalala@smile.fr> wrote:
Hello,
Hi,
I am trying to add a new ovirt-node-ng 4.4.10 node to my cluster managed by an ovirt-engine version 4.4.10.
My cluster is composed of other ovirt-node-ng which have been successively updated from version 4.4.4 to version 4.4.10 without any problem.
The new node integrates into the cluster normally; however, when I look at the network status in the "Network Interfaces" tab, I see that all interfaces are "down".
Did you try to call "Refresh Capabilities"? It might be the case that the engine presents a different state than what is actually on the host after the upgrade.
There is an indicator on the "bond0" interface that says: "Bond state: NONE".
I compared the content of "/etc/sysconfig/network-scripts" between a hypervisor that works and the one that has the problem, and I noticed that a whole bunch of files are missing, in particular the "ifup/ifdown..." files. The folder contains only the cluster-specific files plus the "ovirtmgmt" interface.
Since 4.4 we generally don't use initscripts anymore, so those files are really not a good indicator of anything. We are using nmstate + NetworkManager; if the connections are correctly presented there, everything should be fine.
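If you want to double-check on the host itself, something along these lines should show what NetworkManager and nmstate actually see (plain stock commands, nothing oVirt-specific):

# Active NetworkManager connections and per-device state
nmcli con show --active
nmcli device status

# nmstate view of the bond; recent nmstate versions accept an interface
# name as a filter, otherwise run plain "nmstatectl show" and look for bond0
nmstatectl show bond0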
The hypervisor that has the problem otherwise seems to be perfectly functional, and ovirt-engine does not report any problem.
This really sounds like something that a simple call to "Refresh Capabilities" could fix.
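As far as I know, the same refresh can also be triggered through the REST API if the UI button seems to do nothing; roughly (ENGINE_FQDN, PASSWORD and HOST_ID are placeholders):

# Sketch: call the v4 "refresh" action on the host
curl -k -u 'admin@internal:PASSWORD' \
     -X POST -H 'Content-Type: application/xml' \
     -d '<action/>' \
     'https://ENGINE_FQDN/ovirt-engine/api/hosts/HOST_ID/refresh'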
Have you already encountered this type of problem?
Cheers, Renaud
Best regards, Ales.

--
Ales Musil
Senior Software Engineer - RHV Network
Red Hat EMEA <https://www.redhat.com>
amusil@redhat.com
IM: amusil


On Thu, Mar 17, 2022 at 11:43 AM Renaud RAKOTOMALALA <renaud.rakotomalala@alterway.fr> wrote:
Hi Ales,
On Wed, Mar 16, 2022 at 7:11 AM, Ales Musil <amusil@redhat.com> wrote:
[../..]
I am trying to add a new ovirt-node-ng 4.4.10 node to my cluster managed by an ovirt-engine version 4.4.10.
My cluster is composed of other ovirt-node-ng which have been successively updated from version 4.4.4 to version 4.4.10 without any problem.
The new node integrates into the cluster normally; however, when I look at the network status in the "Network Interfaces" tab, I see that all interfaces are "down".
Did you try to call "Refresh Capabilities"? It might be the case that the engine presents a different state than what is actually on the host after the upgrade.
I tried, and I can see the pull in vdsm.log on my faulty node, but the bond/interface states are still "down". I also tried a fresh install of the node several times with "ovirt-node-ng-installer-4.4.10-2022030308.el8.iso", but the issue is still there.
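For what it's worth, a quick way to confirm on the node that the refresh actually reaches VDSM is something like this (default log location assumed):

# A refresh from the UI should show up as a new Host.getCapabilities call
grep -i getCapabilities /var/log/vdsm/vdsm.log | tail -n 5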
There is an indicator on the "bond0" interface that says: "Bond state: NONE".
I compared the content of "/etc/sysconfig/network-scripts" between a hypervisor that works and the one that has the problem, and I noticed that a whole bunch of files are missing, in particular the "ifup/ifdown..." files. The folder contains only the cluster-specific files plus the "ovirtmgmt" interface.
Since 4.4 we generally don't use initscripts anymore, so those files are really not a good indicator of anything. We are using nmstate + NetworkManager; if the connections are correctly presented there, everything should be fine.
NetworkManager shows the interfaces and the bond up and running from the node's perspective:
nmcli con show --active
NAME       UUID                       TYPE      DEVICE
ovirtmgmt  6b08c819-6091-44de-9546-X  bridge    ovirtmgmt
virbr0     91cb9d5c-b64d-4655-ac2a-X  bridge    virbr0
bond0      ad33d8b0-1f7b-cab9-9447-X  bond      bond0
eno1       abf4c85b-57cc-4484-4fa9-X  ethernet  eno1
eno2       b186f945-cc80-911d-668c-X  ethernet  eno2
nmstatectl show returns the correct state:
- name: bond0
  type: bond
  state: up
  accept-all-mac-addresses: false
  ethtool:
    feature:
      [../..]
  ipv4:
    enabled: false
    address: []
    dhcp: false
  ipv6:
    enabled: false
    address: []
    autoconf: false
    dhcp: false
  link-aggregation:
    mode: active-backup
    options:
      all_slaves_active: dropped
      arp_all_targets: any
      arp_interval: 0
      arp_validate: none
      downdelay: 0
      fail_over_mac: none
      miimon: 100
      num_grat_arp: 1
      num_unsol_na: 1
      primary: eno1
      primary_reselect: always
      resend_igmp: 1
      updelay: 0
      use_carrier: true
    port:
      - eno1
      - eno2
  lldp:
    enabled: false
  mac-address: X
  mtu: 1500
The state for eno1 and eno2 is "up".
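For completeness, the kernel's own view of the bond and its ports can be checked directly as well, e.g.:

# Bonding driver status and per-interface operational state
cat /proc/net/bonding/bond0
cat /sys/class/net/bond0/operstate
cat /sys/class/net/eno1/operstate /sys/class/net/eno2/operstate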
The hypervisor that has the problem otherwise seems to be perfectly functional, and ovirt-engine does not report any problem.
This really sounds like something that a simple call to "Refresh Capabilities" could fix.
I did it several times. Everything is fetched (I checked in the logs), but the states are still down for all interfaces... If I do a fresh install with 4.4.4, the states shown by rhevm are OK; if I reinstall with 4.4.10, the WebUI under Hosts/<node>/Network Interfaces is KO.
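In case it helps the comparison, what the node itself reports (the same data the engine fetches, as far as I understand) can be dumped with vdsm-client, something like:

# Dump the host capabilities as seen by VDSM and look at the bond entry
vdsm-client Host getCapabilities | grep -A 5 '"bond0"'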
That's really strange. I would suggest removing the host completely from the engine if possible and then adding it again. That should also remove the host from the DB and clear the references. Is it only one host that's affected, or multiple?

Best regards,
Ales

--
Ales Musil
Senior Software Engineer - RHV Network
Red Hat EMEA <https://www.redhat.com>
amusil@redhat.com
IM: amusil
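P.S. If the UI route is inconvenient, removing and re-adding the host can in principle also be scripted against the REST API; a rough sketch (all names, ids and credentials are placeholders, and the host must be in maintenance before removal):

# Remove the host
curl -k -u 'admin@internal:PASSWORD' -X DELETE \
     'https://ENGINE_FQDN/ovirt-engine/api/hosts/HOST_ID'

# Add it back (cluster name assumed to be "Default" here)
curl -k -u 'admin@internal:PASSWORD' -X POST \
     -H 'Content-Type: application/xml' \
     -d '<host><name>NODE_NAME</name><address>NODE_FQDN</address><root_password>NODE_ROOT_PASSWORD</root_password><cluster><name>Default</name></cluster></host>' \
     'https://ENGINE_FQDN/ovirt-engine/api/hosts'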