Hi Jiri,
you are the second person to mention this. Can you open a bug about it at
bugzilla.redhat.com?
Best Regards,
Strahil Nikolov

On 24 July 2020 16:30:02 GMT+03:00, "Jiří Sléžka" <jiri.slezka(a)slu.cz>
wrote:
>On 7/24/20 11:36 AM, Jiří Sléžka wrote:
>> On 7/24/20 10:56 AM, Ales Musil wrote:
>>>
>>>
>>> On Fri, Jul 24, 2020 at 10:40 AM Jiří Sléžka <jiri.slezka(a)slu.cz> wrote:
>>>
>>>
>>> On 7/23/20 2:07 PM, Jiří Sléžka wrote:
>>> > On 7/23/20 12:35 PM, Ales Musil wrote:
>>> >>
>>> >>
>>> >> On Thu, Jul 23, 2020 at 11:50 AM Jiří Sléžka <jiri.slezka(a)slu.cz> wrote:
>>> >>
>>> >> On 7/23/20 11:03 AM, Ales Musil wrote:
>>> >> >
>>> >> >
>>> >> > On Thu, Jul 23, 2020 at 10:35 AM Jiří Sléžka <jiri.slezka(a)slu.cz> wrote:
>>> >> >
>>> >> > Hi,
>>> >> >
>>> >> > On 7/23/20 8:38 AM, Ales Musil wrote:
>>> >> > >
>>> >> > >
>>> >> > > On Wed, Jul 22, 2020 at 9:41 PM Jiří Sléžka <jiri.slezka(a)slu.cz> wrote:
>>> >> > >
>>> >> > > Hello,
>>> >> > >
>>> >> > >
>>> >> > > Hi,
>>> >> > >
>>> >> > >
>>> >> > > CentOS 8, oVirt 4.4.1.10-1.el8
>>> >> > >
>>> >> > > I am trying to set up active-backup (mode=1) bonding with custom
>>> >> > > properties. I have one 10GE switch; the second is just 1G. The
>>> >> > > 10GE link is the primary one.
>>> >> > >
>>> >> > > cat /etc/sysconfig/network-scripts/ifcfg-bond0
>>> >> > >
>>> >> > >
>>> >> > > First of all, in oVirt 4.4 the network-scripts are not relevant
>>> >> > > anymore. More relevant is the output of 'nmstatectl show'.
>>> >> >
>>> >> > thanks, I believed that ifcfg files still describe the saved
>>> >> > interface configuration (even on NM-managed interfaces)...
>>> >> >
>>> >> >
>>> >> > It does, but it might not be as detailed as we would have hoped.
>>> >> > Another reason why I said that it is not relevant is of course if
>>> >> > someone tries to reconfigure the interface through network-scripts.
>>> >>
>>> >> well, honestly I did that (modified the ifcfg file and then used
>>> >> 'nmcli con reload'). So the right way is to use the 'nmcli con
>>> >> modify' command?
>>> >>
>>> >>
>>> >> Yes, or nmstate. Just be aware that anything you do to an interface
>>> >> outside of oVirt can have harmful impacts on the host and the
>>> >> overall oVirt state.
>>> >>
>>> >>
>>> >>
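For reference, a minimal sketch of the nmcli route suggested here, assuming the bond's NetworkManager connection profile is named "bond0" (the option values mirror the BONDING_OPTS in this thread; not verified on a live host):

```shell
# Edit the bond options through NetworkManager instead of the ifcfg file;
# bond.options is NM's equivalent of BONDING_OPTS.
nmcli connection modify bond0 \
    bond.options "mode=active-backup,miimon=100,primary=ens5,fail_over_mac=active"
# Re-activate the connection so the new options take effect.
nmcli connection up bond0
```

The caveat above still applies: changing the bond outside of oVirt can leave the engine's view of the host out of sync.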
>>> >> > from 'nmstatectl show' I can see that bond0 has a specified MAC
>>> >> > address:
>>> >> >
>>> >> > https://paste.slu.cz/?d363cf2c029f6b83#Ew2rCiYyNGrdfffy6bvzSjbb8x4jJsaUdhxkjwThMFka
>>> >> >
>>> >> > > BONDING_OPTS="active_slave=ens5 downdelay=0 miimon=100
>>> >> > > mode=active-backup primary=ens5 updelay=0"
>>> >> > > TYPE=Bond
>>> >> > > BONDING_MASTER=yes
>>> >> > > PROXY_METHOD=none
>>> >> > > BROWSER_ONLY=no
>>> >> > > IPV4_FAILURE_FATAL=no
>>> >> > > IPV6_DISABLED=yes
>>> >> > > IPV6INIT=no
>>> >> > > NAME=bond0
>>> >> > > UUID=c054364e-47cf-47ee-a7fc-70b37c9977e7
>>> >> > > DEVICE=bond0
>>> >> > > ONBOOT=yes
>>> >> > > MTU=9000
>>> >> > >
>>> >> > > When I try to add the custom parameter "fail_over_mac=active"
>>> >> > > (which I believe could solve my problems with stale MAC
>>> >> > > addresses in the switch's CAM table in case of failover) I get...
>>> >> > >
>>> >> > > "Error while executing action HostSetupNetworks: Unexpected
>>> >> > > exception"
>>> >> > >
>>> >> > > ...in the manager. In the engine.log it looks like:
>>> >> > >
>>> >> > > 2020-07-22 21:20:35,774+02 WARN
>>> >> > > [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand]
>>> >> > > (default task-8) [da1984f3-f38b-4e0a-ac80-a81e67d73ff0] Unexpected
>>> >> > > return value: Status [code=-32603, message=Internal JSON-RPC error:
>>> >> > > {'reason': 'MAC address cannot be specified in bond interface along
>>> >> > > with specified bond options'}]
>>> >> > > 2020-07-22 21:20:35,774+02 ERROR
>>> >> > > [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand]
>>> >> > > (default task-8) [da1984f3-f38b-4e0a-ac80-a81e67d73ff0] Failed in
>>> >> > > 'HostSetupNetworksVDS' method
>>> >> > > 2020-07-22 21:20:35,774+02 WARN
>>> >> > > [org.ovirt.engine.core.vdsbroker.vdsbroker.HostSetupNetworksVDSCommand]
>>> >> > > (default task-8) [da1984f3-f38b-4e0a-ac80-a81e67d73ff0] Unexpected
>>> >> > > return value: Status [code=-32603, message=Internal JSON-RPC error:
>>> >> > > {'reason': 'MAC address cannot be specified in bond interface along
>>> >> > > with specified bond options'}]
>>> >> > > 2020-07-22 21:20:35,811+02 ERROR
>>> >> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> >> > > (default task-8) [da1984f3-f38b-4e0a-ac80-a81e67d73ff0] EVENT_ID:
>>> >> > > VDS_BROKER_COMMAND_FAILURE(10,802), VDSM ovirt-hci01.mch.local command
>>> >> > > HostSetupNetworksVDS failed: Internal JSON-RPC error: {'reason': 'MAC
>>> >> > > address cannot be specified in bond interface along with specified
>>> >> > > bond options'}
>>> >> > >
>>> >> > >
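As an aside, whether fail_over_mac actually took effect can be checked directly against the kernel, independently of oVirt or NetworkManager. A read-only sketch, assuming the bond is named bond0:

```shell
# The bonding driver exposes its runtime options under sysfs;
# these commands only read state, they change nothing.
cat /sys/class/net/bond0/bonding/fail_over_mac
cat /sys/class/net/bond0/bonding/active_slave
# Full driver state, including each member's permanent MAC:
cat /proc/net/bonding/bond0
```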
>>> >> > > Can you please share supervdsm.log from the relevant host?
>>> >> >
>>> >> > here it is
>>> >> >
>>> >> > https://paste.slu.cz/?ef8bd7eeae8eeaed#Ej3MXRufm6Y9qjKCkgCXXieP132kRAiswR17ygxQDhft
>>> >> >
>>> >> >
>>> >> > This indeed seems to be a bug. Can you please open a BZ
>>> >> > <https://bugzilla.redhat.com/enter_bug.cgi?product=vdsm>?
>>> >>
>>> >> done
>>> >>
>>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1859890
>>> >>
>>> >> > The only workaround that comes to my mind is to break the bond
>>> >> > and define it again with the specified mode.
>>> >>
>>> >> just to be sure - do you mean on the live host which hosts the
>>> >> running HE VM, or on a new installation?
>>> >>
>>> >>
>>> >> The installation won't work because we enforce the MAC under these
>>> >> conditions:
>>> >>
>>> >> 1) The bond exists
>>> >> 2) The bond is owned by vdsm and the MAC is specified in the config
>>> >> (this is the case when you try to update the bond), or the bond is
>>> >> not owned by us (installation process)
>>> >>
>>> >> This is basically to make sure that we preserve the MAC address
>>> >> that was set.
>>> >>
>>> >> The only option that I see right now is to remove the bond from the
>>> >> running host through oVirt and create it again with
>>> >> mode=active-backup.
>>> >> This is not a perfect solution, as any further change to this bond
>>> >> will automatically end up in the categories above.
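If one did rebuild the bond out-of-band anyway (with the caveats already raised about touching interfaces outside oVirt), the nmstate description might look roughly like this sketch. "ens6" is a hypothetical second member NIC (the thread only names ens5), and the "slaves" key follows the nmstate schema of that era:

```shell
# Write a hypothetical nmstate state file for bond0 (untested sketch).
cat > /tmp/bond0.yml <<'EOF'
interfaces:
- name: bond0
  type: bond
  state: up
  mtu: 9000
  link-aggregation:
    mode: active-backup
    slaves:
    - ens5
    - ens6    # hypothetical second member NIC
    options:
      miimon: 100
      primary: ens5
      fail_over_mac: active
EOF
# It would then be applied with: nmstatectl apply /tmp/bond0.yml
```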
>>> >
>>> > well, I ended up in a state where oVirt thinks that ens5 is used
>>> > outside of bond0. It probably happened when I tried to break bond0
>>> > and set up ens5 with at least the ovirtmgmt network so as not to
>>> > lose access (that failed too, with "Invalid Network name").
>>> >
>>> > see http://mirror.slu.cz/tmp/ovirt_setup_network.gif
>>> >
>>> > It looks like the oVirt manager's setup network section is not very
>>> > BFU proof ;-)
>>> >
>>> > well, I am stuck now. I will try to reinstall one host, but without
>>> > a prepared bond interface, and will try to set up the bond
>>> > completely in oVirt.
>>> >
>>> > ...but after lunch ;-)
>>>
>>> well, I ended up in a kind of loop. I have two hosts in an HCI
>>> config with glusterfs (I know that replica 2 is not supported; I
>>> will add a third host later). I migrated the HE VM to the other
>>> host, removed the bricks on the first host, removed the first host,
>>> ran hosted_engine_cleanup on it, yum removed vdsm and all
>>> dependencies, removed all interface configuration with nmcli con,
>>> connected a new cable to an unused interface, configured an IP
>>> address from the (ovirt)management vlan on it, and rebooted.
>>>
>>> Then I tried to add this host to oVirt using that management IP
>>> address. It worked, but the network configuration now looks the same
>>> as before removing the host (and I cannot form bond0 because oVirt
>>> thinks that interface is already used, and I also cannot move the
>>> ovirtmgmt network to the mentioned dedicated interface because, from
>>> oVirt's point of view, oVirt would lose connectivity).
>>>
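The removal sequence described above can be sketched as the following commands (untested; ovirt-hosted-engine-cleanup is the cleanup tool shipped with the hosted-engine packages, and the NIC name and address below are placeholders, not values from the thread):

```shell
# Wipe the local hosted-engine deployment state on the host being removed.
ovirt-hosted-engine-cleanup
# Remove vdsm and everything that depends on it.
yum remove 'vdsm*'
# Delete all NetworkManager connection profiles.
nmcli -g NAME connection show | while read -r con; do
    nmcli connection delete "$con"
done
# Re-create a plain management interface (placeholder NIC and address).
nmcli connection add type ethernet ifname ens7 con-name mgmt \
    ipv4.method manual ipv4.addresses 192.0.2.10/24
reboot
```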
>>>
>>> Doing stuff around ovirtmgmt is always tricky. About the point that
>>> oVirt thinks the interface is used: is that an engine error, or is
>>> it returned from vdsm?
>>
>> vdsm
>>
>> https://paste.slu.cz/?255b6d27e75e52e8#DChhQtSUbc558WRZmJpHDikQ7Xti4NyAcPwRHHKAceB3
>>
>> at the start there is the (old, remembered, broken) ens5 interface
>> with vlan subinterfaces on it, like ens5.892, which is part of the
>> ovirtmgmt bridge
>>
>>> Is it possible that the network configuration of the removed host
>>> survives somewhere? On the host itself? (for example, some vdsm
>>> configuration which is not cleaned during uninstall?) Or in the
>>> engine database? (like not all information about the removed host
>>> being deleted during removal?)
>>>
>>>
>>> Well, the only location that comes to my mind related to the network
>>> configuration is '/var/lib/vdsm/', specifically the three folders
>>> 'persistence', 'staging', and 'netconfback'.
>>> The engine database is highly unlikely to store the configuration.
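A quick sketch for checking whether stale configuration survived in those folders (paths as given above; the destructive part is only for a host that is being rebuilt anyway):

```shell
# List any persisted vdsm network configuration.
ls -lR /var/lib/vdsm/persistence /var/lib/vdsm/staging /var/lib/vdsm/netconfback
# On a host being wiped for reinstall, it could be cleared with e.g.:
# rm -rf /var/lib/vdsm/persistence/* /var/lib/vdsm/netconfback/*
```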
>>
>> ok, thanks. It looks like my current config is broken at a level
>> where it is better to do a clean install of all components.
>>
>> The main point for me is NOT to configure the bond before the oVirt
>> installation. This is probably valid only for mode=1 with custom bond
>> options.
>
>just for the record... I recovered my current cluster with a lot of
>hackery. I have learned much (mainly about nmstatectl and glusterfs);
>it was useful :-)
>
>btw. it's probably just my problem, but after removing the host,
>uninstalling the vdsm* packages, and adding it back to oVirt, it looks
>like the package vdsm-gluster was not installed, even though I have
>"enable gluster service" configured at the cluster level.
>
>It looks like a dependency issue to me, and without this package the
>host cannot be activated, with the errors "Gluster command [<UNKNOWN>]
>failed on server <UNKNOWN>." and "VDSM ovirt-hci01.mch.local command
>ManageGlusterServiceVDS failed: The method does not exist or is not
>available: {'method': 'GlusterService.action'}"
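A hedged sketch of a manual fix for that symptom, assuming the package is simply missing from the host rather than incompatible:

```shell
# Install the gluster support package for vdsm by hand...
yum install -y vdsm-gluster
# ...and restart vdsm so the GlusterService verbs become available.
systemctl restart vdsmd
```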
>
>Cheers,
>
>Jiri
>
>
>>
>> Thanks for the support
>>
>> Jiri
>>
>>> I am thinking about how to work around this issue without completely
>>> reinstalling the HE and/or hosts (I can do it; at this point it is
>>> just a lab installation, but it takes some effort and time...)
>>>
>>> Thanks for
>>>
>>> >
>>> > Cheers,
>>> >
>>> > Jiri
>>> >
>>> >>
>>> >>
>>> >> Hopefully this helps.
>>> >>
>>> >> Thanks,
>>> >> Ales
>>> >>
>>> >>
>>> >>
>>> >> It looks like there is the same or a similar bug while installing
>>> >> the hosted engine via ovirt-hosted-engine-setup. I tried it many
>>> >> times yesterday and the installation failed with the same error:
>>> >>
>>> >> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
>>> >> "The host has been set in non_operational status, deployment
>>> >> errors: code 505: Host ovirt-hci01.stud.slu.cz installation
>>> >> failed. Failed to configure management network on the host., code
>>> >> 1120: Failed to configure management network on host
>>> >> ovirt-hci01.stud.slu.cz due to setup networks failure., code 9000:
>>> >> Failed to verify Power Management configuration for Host
>>> >> ovirt-hci01.stud.slu.cz., code 10802: VDSM ovirt-hci01.stud.slu.cz
>>> >> command HostSetupNetworksVDS failed: Internal JSON-RPC error:
>>> >> {'reason': 'MAC address cannot be specified in bond interface
>>> >> along with specified bond options'}, fix accordingly and
>>> >> re-deploy."}
>>> >>
>>> >> My workaround was to remove the fail_over_mac=active option during
>>> >> the install :-)
>>> >>
>>> >> Jiri
>>> >>
>>> >> >
>>> >> > Let me know if it works.
>>> >> >
>>> >> > Thanks,
>>> >> > Ales
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >> > Cheers,
>>> >> >
>>> >> > Jiri
>>> >> >
>>> >> >
>>> >> > > Could anybody explain to me what 'MAC address cannot be
>>> >> > > specified in bond interface along with specified bond
>>> >> > > options' means? I believe a MAC address is not configured in
>>> >> > > the interface configuration.
>>> >> > >
>>> >> > > Or does it mean 'fail_over_mac=active' is not supported in
>>> >> > > oVirt?
>>> >> > >
>>> >> > > Thanks in advance,
>>> >> > >
>>> >> > > Jiri
>>> >> > >
>>> >> > >
>>> >> > >
>>> >> > >
>>> >> > >
>>> >> > >
>>> >> > > _______________________________________________
>>> >> > > Users mailing list -- users(a)ovirt.org
>>> >> > > To unsubscribe send an email to users-leave(a)ovirt.org
>>> >> > > Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> >> > > oVirt Code of Conduct:
>>> >> > > https://www.ovirt.org/community/about/community-guidelines/
>>> >> > > List Archives:
>>> >> > > https://lists.ovirt.org/archives/list/users@ovirt.org/message/BUGNSEBD3OBSUPASLJQYYJIF5767XMDE/
>>> >> > >
>>> >> > >
>>> >> > >
>>> >> > > Thank you.
>>> >> > > Regards,
>>> >> > > Ales
>>> >> > >
>>> >> > > --
>>> >> > > Ales Musil
>>> >> > > Software Engineer - RHV Network
>>> >> > > Red Hat EMEA <https://www.redhat.com>
>>> >> > > amusil(a)redhat.com  IM: amusil
>>> >> > > <https://red.ht/sig>
>>> >> >
>>> >> >
>>> >> >
>>> >> >
>>> >>
>>> >>
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>>
>>>
>>>
>>>
>>>
>>
>>
>>