[Users] ovirt-node post-reboot no persistent networks config .. !!! urgent update req for POC coming weekend

Sven Kieske S.Kieske at mittwald.de
Wed Oct 2 11:26:55 UTC 2013


Hi,

we have now been able to test this.

The IP does not show up in the ovirt-node-setup TUI; however, if you drop to
the console you see this:

[root@vroot4 ~]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:25:90:32:e4:88 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:25:90:32:e4:89 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::225:90ff:fe32:e489/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:25:90:32:e4:8a brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:25:90:32:e4:8b brd ff:ff:ff:ff:ff:ff
8: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:25:90:32:e4:89 brd ff:ff:ff:ff:ff:ff
    inet 10.0.1.101/24 brd 10.0.1.255 scope global ovirtmgmt
    inet6 fe80::225:90ff:fe32:e489/64 scope link
       valid_lft forever preferred_lft forever
10: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 86:f2:1e:01:7d:4a brd ff:ff:ff:ff:ff:ff
11: bond4: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
12: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
13: bond1: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
14: bond2: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
15: bond3: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff


So the address gets moved from eth1 to the "ovirtmgmt" bridge device
and does not show up in the TUI.
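
For reference, this is roughly how we would inspect the bridge setup from
the console (standard el6 tools; the ifcfg file names are an assumption on
our side, not verified on this image):

# brctl show
# cat /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
# cat /etc/sysconfig/network-scripts/ifcfg-eth1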

Is this intended behavior?
Furthermore, why does oVirt create all those bonding devices?

We installed the el6 variant of the ISO.

Some additional info I noticed:

lsb_release -a does not provide correct output; it just states

"RedHatEnterpriseVirtualizationHypervisor", which is not very
informative: no version information whatsoever.
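
To still get some version information, something along these lines should
work from the console (the ovirt-node package name is an assumption on our
side):

# cat /etc/system-release
# rpm -q ovirt-node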

Regards

Sven

On 02/10/13 11:16, Fabian Deutsch wrote:
> On Tuesday, 01.10.2013 at 15:05 +0000, Sven Kieske wrote:
>> Hi,
>>
>> we encountered the same problem as Anil.
>>
>> We wanted to give this nightly iso a try:
>> http://jenkins.ovirt.org/job/node-devel/825/distro=centos64/artifact/ovirt-node-iso-3.1.0-0.999.825.el6.iso
>>
>> but it seems it doesn't contain the required vdsm-plugin?
> 
> True. The plugin is missing.
> 
> I've prepared two (untested) ISOs which should address this bug:
> http://fedorapeople.org/~fabiand/node/ovirt-node-iso-3.0.1-1.0.201310020841draft.vdsm.el6.iso
> http://fedorapeople.org/~fabiand/node/ovirt-node-iso-3.1.0-0.999.201310020841draft.vdsm.fc19.iso
> 
> They are basically the base ISO (found on Jenkins) plus an edit-node run:
> /edit-node --repo edit-node-el6.repo --install ovirt-node-plugin-vdsm ovirt-node-iso-3.0.1-1.0.201310020841draft.el6.iso
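> 
> To double-check that the plugin actually ended up in the image, a quick
> look on the booted node should do (untested, just a sketch):
> 
> rpm -q ovirt-node-plugin-vdsm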
> 
> Please let me know if they fix the problem.
> 
> Greetings
> fabian
> 
>> So any help on this topic would be appreciated.
>>
>> Greetings
>>
>> Sven
>>
>> On 30/09/13 20:52, Fabian Deutsch wrote:
>>
>>>
>>> Hey Anil,
>>>
>>> you were doing the right thing by persisting the network cfg files. It
>>> might have been Node that was too greedy when reconfiguring DNS :)
>>>
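>>> (For reference, persisting a file on Node is usually done with the
>>> persist command, e.g.:
>>>
>>> persist /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
>>>
>>> the exact path above is just an example.)
>>>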
>>> You might take a look at this fix:
>>> http://gerrit.ovirt.org/19705
>>> It is untested, but at least I could reproduce the behavior you were
>>> seeing and prepared the patch based on those findings.
>>> An ISO containing that fix will sooner or later land here:
>>> http://jenkins.ovirt.org/job/node-devel/
>>>
>>> (The queue is currently quite long, so it can take a day until the patch
>>> above is turned into an ISO.)
>>>
>>> Long story short: please check whether this patch fixes your problem; I'll
>>> also try to take a look at it when an ISO is ready.
>>>
>>> And file a bug if you want to track the state of this issue.
>>>
>>> Greetings
>>> fabian
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users