[ovirt-users] How to setup host network to link ovirtmgmt with eth0 automatically?
Ondřej Svoboda
ondrej at svobodasoft.cz
Wed May 25 12:45:56 UTC 2016
Kai,
The symptoms after removing vdsm-hook-ovs look as if the ovirtmgmt
bridged network was never created. The log to look in would be
supervdsm.log, but it looks like yours was rotated after supervdsmd
was restarted (the warning is benign). Can you dig out "supervdsm.log.1"?
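If the rotated log is still on disk, a quick way to pull out the
interesting lines is something like the sketch below (the default VDSM
log path is assumed; adjust LOG if your logs live elsewhere):

```shell
# Default VDSM log path assumed; override LOG if needed.
LOG=${LOG:-/var/log/vdsm/supervdsm.log.1}
if [ -r "$LOG" ]; then
    # Lines mentioning the management bridge, network setup, or errors.
    grep -nE 'ovirtmgmt|setupNetworks|Traceback' "$LOG"
else
    echo "no rotated log at $LOG"
fi
```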
OVS was initially supported through this hook (it was the first attempt
at supporting OVS). Proper OVS support is currently in development and
should appear in 4.0 as a tech preview.
Thanks,
Ondra (osvoboda at redhat.com)
On 25.5.2016 11:01, Kai Kang wrote:
>
>
> On Tue, May 24, 2016 at 8:43 PM, Ondřej Svoboda <ondrej at svobodasoft.cz
> <mailto:ondrej at svobodasoft.cz>> wrote:
>
> Hi again, (correcting myself a bit)
>
> Are you using the Open vSwitch hook intentionally? If not, you can
> |yum remove vdsm-hook-ovs| for a while. What is your exact VDSM
> version?
>
>
> My VDSM version is 4.17.24 and ovirt-engine is 3.6.4. After removing
> the OVS hook, it still fails with:
>
> "Host node3 does not comply with the cluster Default networks, the
> following networks are missing on host: 'ovirtmgmt'".
>
> I checked vdsm.log and there seem to be no errors there. In
> supervdsm.log, it shows these warnings:
>
> sourceRoute::INFO::2016-05-25
> 07:57:41,779::sourceroutethread::60::root::(process_IN_CLOSE_WRITE_filePath)
> interface ovirtmgmt is not a libvirt interface
> sourceRoute::WARNING::2016-05-25
> 07:57:41,780::utils::140::root::(rmFile) File:
> /var/run/vdsm/trackedInterfaces/ovirtmgmt already removed
>
>
> What should I check next? Thanks.
>
> I uploaded vdsm.log to:
>
> https://gist.github.com/parr0tr1ver/1e38171b5d12cf77321101530276d368
>
> and supervdsm.log:
>
> https://gist.github.com/parr0tr1ver/97805698a485f1cd49ded2b095297531
>
>
> Also, can you give a little more context from your vdsm.log, and
> also from supervdsm.log? I think the vdsm.log failure is only
> related to stats reporting, and is not the root problem.
>
> If you don't have any confidential information in the logs (or you
> can remove it), I suggest that you open a bug on product=vdsm at
> https://bugzilla.redhat.com/ Have no fear about naming the bug, it
> can be renamed if necessary.
>
>
>
> I found a bug at
> https://bugzilla.redhat.com/show_bug.cgi?id=1234867. The "Target
> Milestone" for "support ovs via vdsm hook" is ovirt-3.6.7. Does that
> mean OVS does not actually work in previous versions?
>
> Thanks a lot.
>
> --Kai
>
>
>
> Thanks, Ondra
>
>> On 24.5.2016 11:47, Kai Kang wrote:
>>> Hi,
>>>
>>> I checked vdsm.log; it shows this error:
>>>
>>> jsonrpc.Executor/0::DEBUG::2016-05-24
>>> 09:46:04,056::utils::671::root::(execCmd) /usr/bin/taskset
>>> --cpu-list 0-7 /usr/lib64/vdsm/vdsm/hooks/after_get_stats/50_ovs
>>> (cwd None)
>>> jsonrpc.Executor/0::DEBUG::2016-05-24
>>> 09:46:04,136::utils::689::root::(execCmd) FAILED: <err> =
>>> 'Traceback (most recent call last):\n File
>>> "/usr/lib64/vdsm/vdsm/hooks/after_get_stats/50_ovs", line 72, in
>>> <module>\n main()\n File
>>> "/usr/lib64/vdsm/vdsm/hooks/after_get_stats/50_ovs", line 66, in
>>> main\n
>>> stats[\'network\'].update(ovs_networks_stats(stats[\'network\']))\nKeyError:
>>> \'network\'\n\n'; <rc> = 2
>>> jsonrpc.Executor/0::INFO::2016-05-24
>>> 09:46:04,136::hooks::98::root::(_runHooksDir) Traceback (most
>>> recent call last):
>>> File "/usr/lib64/vdsm/vdsm/hooks/after_get_stats/50_ovs", line
>>> 72, in <module>
>>> main()
>>> File "/usr/lib64/vdsm/vdsm/hooks/after_get_stats/50_ovs", line
>>> 66, in main
>>> stats['network'].update(ovs_networks_stats(stats['network']))
>>> KeyError: 'network'
>>>
>>>
>>> I'm checking what's wrong with it.
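The KeyError in the traceback shows the 50_ovs after_get_stats hook
assumes the stats dict always carries a 'network' key. A minimal sketch
of a defensive guard is below; the function names are hypothetical
stand-ins, not the real hook's code (the actual hook lives in
vdsm-hook-ovs), so treat this as an illustration of the fix, not a
patch:

```python
def ovs_networks_stats(networks):
    # Stand-in for the hook's real OVS stats collection: tag each
    # network so the merge below has an observable effect.
    return {name: dict(info, ovs=True) for name, info in networks.items()}

def merge_ovs_stats(stats):
    # The traceback shows stats['network'] was accessed unguarded;
    # skipping the merge when the key is absent avoids the KeyError.
    if 'network' not in stats:
        return stats
    stats['network'].update(ovs_networks_stats(stats['network']))
    return stats
```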
>>>
>>> Thanks,
>>> Kai
>>>
>>>
>>>
>>> On Tue, May 24, 2016 at 3:57 PM, Kai Kang <kai.7.kang at gmail.com
>>> <mailto:kai.7.kang at gmail.com>> wrote:
>>>
>>> And network configurations on node:
>>>
>>> [root at ovirt-node] # brctl show
>>> bridge name bridge id STP enabled interfaces
>>> docker0 8000.0242ae1de711 no
>>> ovirtmgmt 8000.001320ff73aa no eth0
>>>
>>> [root at ovirt-node] # ifconfig -a
>>> docker0 Link encap:Ethernet HWaddr 02:42:ae:1d:e7:11
>>> inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
>>> UP BROADCAST MULTICAST MTU:1500 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:0
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>>
>>> eth0 Link encap:Ethernet HWaddr 00:13:20:ff:73:aa
>>> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>>> RX packets:6331 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:2866 errors:0 dropped:0 overruns:0
>>> carrier:0
>>> collisions:0 txqueuelen:1000
>>> RX bytes:762182 (744.3 KiB) TX bytes:210611
>>> (205.6 KiB)
>>> Interrupt:20 Memory:d1100000-d1120000
>>>
>>> lo Link encap:Local Loopback
>>> inet addr:127.0.0.1 Mask:255.0.0.0
>>> inet6 addr: ::1/128 Scope:Host
>>> UP LOOPBACK RUNNING MTU:65536 Metric:1
>>> RX packets:36 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:0
>>> RX bytes:2478 (2.4 KiB) TX bytes:2478 (2.4 KiB)
>>>
>>> ovirtmgmt Link encap:Ethernet HWaddr 00:13:20:ff:73:aa
>>> inet addr:128.224.165.170 Bcast:128.224.165.255
>>> Mask:255.255.255.0
>>> inet6 addr: fe80::213:20ff:feff:73aa/64 Scope:Link
>>> inet6 addr:
>>> 11:2233:4455:6677:213:20ff:feff:73aa/64 Scope:Global
>>> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
>>> RX packets:6295 errors:0 dropped:6 overruns:0 frame:0
>>> TX packets:2831 errors:0 dropped:0 overruns:0
>>> carrier:0
>>> collisions:0 txqueuelen:0
>>> RX bytes:644421 (629.3 KiB) TX bytes:177616
>>> (173.4 KiB)
>>>
>>> sit0 Link encap:UNSPEC HWaddr
>>> 00-00-00-00-32-33-33-00-00-00-00-00-00-00-00-00
>>> NOARP MTU:1480 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:0
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>>
>>> wlan0 Link encap:Ethernet HWaddr 80:86:f2:8b:1d:cf
>>> BROADCAST MULTICAST MTU:1500 Metric:1
>>> RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>>> TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>> collisions:0 txqueuelen:1000
>>> RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
>>>
>>>
>>> Thanks,
>>> Kai
>>>
>>>
>>>
>>> On Tue, May 24, 2016 at 3:36 PM, Kai Kang
>>> <kai.7.kang at gmail.com <mailto:kai.7.kang at gmail.com>> wrote:
>>>
>>> Hi,
>>>
>>> When I install a host, it fails with:
>>>
>>> 2016-05-24 07:00:01,749 ERROR
>>> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand]
>>> (DefaultQuartzScheduler_Worker-4) [1bf36cd4] Host
>>> 'node3' is set to Non-Operational, it is missing the
>>> following networks: 'ovirtmgmt'
>>> 2016-05-24 07:00:01,781 WARN
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler_Worker-4) [1bf36cd4] Correlation
>>> ID: 1bf36cd4, Job ID:
>>> db281e8f-67cc-441a-b44c-90b135e509bd, Call Stack: null,
>>> Custom Event ID: -1, Message: Host node3 does not comply
>>> with the cluster Default networks, the following
>>> networks are missing on host: 'ovirtmgmt'
>>>
>>> After I click "Hosts" -> the "Network Interfaces" subtab ->
>>> "Setup Host Networks" in the web UI and drag ovirtmgmt to
>>> "Assigned Logical" to link it with eth0, I can activate host
>>> "node3" successfully.
>>>
>>> My question is: how can I perform this manual operation
>>> automatically? Then I could run some automated tests.
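One way to script the manual "Setup Host Networks" step is to invoke
VDSM's own network setup verb directly on the host. The sketch below
assumes 4.17-era vdsClient syntax and DHCP on ovirtmgmt; verify the
exact option format with `vdsClient --help` on your host before relying
on it:

```shell
# Run on the host. Creates the ovirtmgmt bridge on top of eth0
# (syntax assumed from 4.17-era vdsClient; double-check locally).
vdsClient -s 0 setupNetworks \
    networks='{ovirtmgmt: {nic: eth0, bootproto: dhcp, bridged: true}}'
# Persist the configuration; otherwise VDSM rolls the change back.
vdsClient -s 0 setSafeNetworkConfig
```

The same action is also exposed through the engine's REST API, which may
fit better if your tests drive the engine rather than each host.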
>>>
>>> Thanks a lot.
>>>
>>> --Kai
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at ovirt.org <mailto:Users at ovirt.org>
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>
>