Can oVirt 3.6 manage a 3.5 hypervisor?
by paf1@email.cz
Hi,
can oVirt 3.6 manage hypervisors running version 3.5?
Meaning, during a step-by-step cluster upgrade
( A) oVirt mgmt, B) 1st hypervisor, C) 2nd hypervisor, ... ),
if the oVirt DB is converted from 3.5.5 -> 3.5.5.upg.3.6 -> final 3.6.
regs. Paf1
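(For what it's worth, a 3.6 engine can keep managing hosts in a cluster whose compatibility level is 3.5, so upgrading the engine first and the hypervisors one by one afterwards is the usual path. A minimal sketch for checking each cluster's compatibility version over the REST API; the engine FQDN and credentials below are placeholders, and older installations expose the API at /api instead of /ovirt-engine/api:)

#!/usr/bin/python
# Minimal sketch: list each cluster's compatibility version via the REST API.
# engine.example.com and the credentials are placeholders for your setup.
import requests
import xml.etree.ElementTree as ET

resp = requests.get('https://engine.example.com/ovirt-engine/api/clusters',
                    auth=('admin@internal', 'password'),
                    verify=False)  # or point verify= at the engine CA file
resp.raise_for_status()

for cluster in ET.fromstring(resp.content).findall('cluster'):
    name = cluster.find('name').text
    version = cluster.find('version')
    print('%s: compatibility %s.%s'
          % (name, version.get('major'), version.get('minor')))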
volume parameters
by paf1@email.cz
Hello,
could you recommend a set of working parameters for a replica 2 volume?
The old ones were (for Gluster version 3.5.2):
storage.owner-uid 36
storage.owner-gid 36
performance.io-cache off
performance.read-ahead off
network.remote-dio enable
cluster.eager-lock enable
performance.stat-prefetch off
performance.quick-read off
cluster.quorum-count 1
cluster.server-quorum-type none
cluster.quorum-type fixed
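(For reference, a set like the one above can be reapplied in one go; a minimal sketch using the gluster CLI, with the volume name 1KVM12-P4 borrowed from the logs below and the option list taken verbatim from above, not as a recommendation:)

#!/usr/bin/python
# Sketch: apply a saved set of volume options with the gluster CLI.
# Volume name comes from the logs below; the options are the 3.5.2-era set.
import subprocess

VOLUME = '1KVM12-P4'
OPTIONS = [
    ('storage.owner-uid', '36'),
    ('storage.owner-gid', '36'),
    ('performance.io-cache', 'off'),
    ('performance.read-ahead', 'off'),
    ('network.remote-dio', 'enable'),
    ('cluster.eager-lock', 'enable'),
    ('performance.stat-prefetch', 'off'),
    ('performance.quick-read', 'off'),
    ('cluster.quorum-count', '1'),
    ('cluster.server-quorum-type', 'none'),
    ('cluster.quorum-type', 'fixed'),
]

for key, value in OPTIONS:
    # 'gluster volume set <vol> <key> <value>' returns non-zero on failure
    subprocess.check_call(['gluster', 'volume', 'set', VOLUME, key, value])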
After upgrading to Gluster 3.5.7 and applying the default recommendations,
the volumes became inaccessible (permission denied - fixed by setting owner
uid/gid back to 36).
Why have the defaults been changed?
Error / Critical messages still occur (examples follow).
*E* - Error lines grepped from etc-glusterfs-glusterd.vol.log
[2015-11-07 10:49:10.883564] E [socket.c:2965:socket_connect]
0-management: Failed to set keep-alive: Invalid argument
[2015-11-07 10:49:10.886152] E [socket.c:2965:socket_connect]
0-management: Failed to set keep-alive: Invalid argument
[2015-11-07 10:49:15.954942] E [rpc-clnt.c:362:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x196)[0x7fa88b014a66] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fa88addf9be] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fa88addface] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x9c)[0x7fa88ade148c]
(--> /lib64/libgfrpc.so.0(rpc_clnt_notify+0x88)[0x7fa88ade1c98] )))))
0-management: forced unwinding frame type(Peer mgmt) op(--(2)) called at
2015-11-07 10:49:10.918764 (xid=0x5)
[2015-11-07 10:49:26.719176] E [socket.c:2965:socket_connect]
0-management: Failed to set keep-alive: Invalid argument
[2015-11-07 10:54:59.738232] E [MSGID: 106243] [glusterd.c:1623:init]
0-management: creation of 1 listeners failed, continuing with succeeded
transport
[2015-11-07 10:55:01.860991] E [socket.c:2965:socket_connect]
0-management: Failed to set keep-alive: Invalid argument
[2015-11-07 10:55:01.863932] E [socket.c:2965:socket_connect]
0-management: Failed to set keep-alive: Invalid argument
[2015-11-07 10:55:01.866779] E [socket.c:2965:socket_connect]
0-management: Failed to set keep-alive: Invalid argument
*C* - Critical lines grepped from etc-glusterfs-glusterd.vol.log
[2015-11-07 10:49:16.045778] C [MSGID: 106003]
[glusterd-server-quorum.c:346:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume 1KVM12-P4. Starting
local bricks.
[2015-11-07 10:49:16.049319] C [MSGID: 106003]
[glusterd-server-quorum.c:346:glusterd_do_volume_quorum_action]
0-management: Server quorum regained for volume 1KVM12-P5. Starting
local bricks.
regs.Paf1
oVirt 3.6: can't add host to cluster
by David David
Hi.
I use CentOS 6.6 with oVirt 3.6 (cluster level 3.5) as the engine and CentOS 7
as the host.
When I try to add the new host to the cluster, I get an error:
"Error while executing action: Cannot add Host. Connecting to host via SSH
has failed, verify that the host is reachable (IP address, routable address
etc.) You may refer to the engine.log file for further details."
A manual SSH connection to the host works.
DNS is configured on all hosts and on the engine.
engine.log:
2015-11-09 15:00:59,601 ERROR
[org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-22)
[34bc561b] Failed to establish session with host 'testnode2': SSH session
closed during connection 'root(a)10.64.0.211'
2015-11-09 15:00:59,601 WARN
[org.ovirt.engine.core.bll.hostdeploy.AddVdsCommand] (default task-22)
[34bc561b] CanDoAction of action 'AddVds' failed for user admin@internal.
Reasons: VAR__ACTION__ADD,VAR__TYPE__HOST,$server
10.64.0.211,VDS_CANNOT_CONNECT_TO_SERVER
2015-11-09 15:00:59,937 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder]
(default task-14) [] Failed to find host on any provider by host name
'vtestengine.office.saratov'
2015-11-09 15:01:00,256 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder]
(default task-31) [] Failed to find host on any provider by host name
'vtestengine.office.saratov'
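(The AddVds step opens an SSH session to port 22 on the host from the engine machine itself, so a first check is whether the SSH banner comes back from there; a minimal stdlib sketch, with the host IP taken from the log above:)

#!/usr/bin/python
# Minimal sketch: check from the engine machine that the host's sshd answers;
# the IP is taken from the engine.log lines above.
import socket

HOST = '10.64.0.211'
sock = socket.create_connection((HOST, 22), timeout=10)
banner = sock.recv(256)  # sshd sends its identification string first
print(banner)            # expect something like 'SSH-2.0-OpenSSH_6.6.1'
sock.close()

(If the banner arrives but AddVds still fails, common suspects are a host-key mismatch, sshd limits such as MaxSessions/MaxStartups, or a root login restriction on the host.)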
Attach disk to VM in Firefox
by Jonas Israelsson
Greetings.
I think I have stumbled upon a Firefox-related bug.
Running the released version of oVirt 3.6 and trying to attach a disk to a
VM, it is impossible to select a disk after position 13 (from the top) of
the list.
The 'table border' that I assume should normally go around the whole window
stops, in Firefox, a few disks from the bottom of the list, and disks below
that point can't be selected, hence cannot be attached.
It works in Chrome.
Running Firefox 41.0.2 under Linux (openSUSE 13.2).
Is this a known issue?
See attached screenshot.
MacSpoof with multiple VMs -> bad/slow response on 3.5.3
by Matt .
Hi Guys,
On a cluster updated to 3.5.3 I see issues with MAC-spoofed VMs.
The cluster is a CentOS 7 cluster which has always performed well.
I noticed that on my loadbalancers the macspoof=true setting disappeared
in the engine, and when I added it back and rebooted some other CARP
machines, it had vanished on those machines as well.
It's a quite simple setup:
2 static nodes in a multiple-host cluster with CARP machines on them,
per blade 1 firewall (pfSense) and one loadbalancer (ZEN).
The cluster IDs differ on ZEN, and the CARP IPs on pfSense have different
VHIDs, so I wonder whether the vanishing macspoof=true setting I found is
a known issue (the whole virtual IP doesn't work in that case).
Another cluster works fine without any issue; the spoofed systems there
are on CentOS 6.
This setup has run for more than a year without any issues.
I hope someone has some information on whether this issue is known.
Thanks, Matt
(sorry for my bad typing, it's kinda early/late ;))
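(One thing worth knowing about the macspoof custom property: engine-config values are scoped per cluster compatibility version, so a property registered only for an older level can appear to vanish after an upgrade. A sketch of re-registering it on the engine host, assuming the standard vdsm-hook-macspoof setup:)

#!/usr/bin/python
# Sketch: re-register the macspoof custom VM property on the engine host.
# engine-config values are scoped per cluster compatibility version (--cver).
# CAUTION: -s replaces the whole UserDefinedVMProperties list, so append any
# other custom properties you rely on, separated by ';'.
import subprocess

for cver in ('3.4', '3.5'):
    subprocess.check_call([
        'engine-config', '-s',
        'UserDefinedVMProperties=macspoof=^(true|false)$',
        '--cver=%s' % cver,
    ])
subprocess.check_call(['service', 'ovirt-engine', 'restart'])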
Upgrade method for 3.5 node
by Phil Daws
Hello,
I have upgraded my engine to 3.6 (el6) and would now like to do the same on my 3.5 (el7) node. The node was built from a minimal CentOS 7 ISO, and then I installed the 3.5 release rpm.
How best would I go about performing the upgrade, please?
Thanks, Phil
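(A sketch of the usual flow for a plain CentOS 7 host, assuming the ovirt-release36 repo rpm; put the host into maintenance in the webadmin first, and afterwards use Reinstall followed by Activate so the engine redeploys VDSM:)

#!/usr/bin/python
# Sketch: move a plain CentOS 7 host from the 3.5 to the 3.6 repos and update.
# Put the host into maintenance in the webadmin before running this, and use
# Reinstall / Activate on it afterwards so the engine redeploys VDSM.
import subprocess

for cmd in (
    ['yum', 'install', '-y',
     'http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm'],
    ['yum', 'update', '-y'],
):
    subprocess.check_call(cmd)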
SPICE/VNC shared consoles
by Markus Stockhausen
This is a multi-part message in MIME format.
------=_NextPartTM-000-c7ea68e9-89ad-4e47-8314-808d25584908
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
> From: users-bounces(a)ovirt.org [users-bounces(a)ovirt.org] on behalf of Budur Nagaraju [nbudoor(a)gmail.com]
> Sent: Monday, 9 November 2015 05:53
> To: users
> Subject: [ovirt-users] Multiple console access
>
> Hi,
>
> I am using the SPICE console to access the VM console; how do I enable
> multi-user console access?
>
> Thanks,
>
> Nagaraju
Not yet possible via webadmin configuration, but you can do a small "hack"
to achieve this. The following 2 steps will allow shared VNC and SPICE
consoles.

1) Create a new start hook, e.g.
/usr/libexec/vdsm/hooks/before_vm_start/spice_mc.py (the file must be
executable):

#!/usr/bin/python

import hooking

domxml = hooking.read_domxml()

# Add the qemu XML namespace to the domain tag to allow
# command line manipulation
d = domxml.getElementsByTagName('domain')[0]
d.setAttribute('xmlns:qemu', 'http://libvirt.org/schemas/domain/qemu/1.0')

# Build a qemu:commandline element carrying the environment variable
# that allows shared SPICE consoles
c = domxml.createElement('qemu:commandline')
p = domxml.createElement('qemu:env')
p.setAttribute('name', 'SPICE_DEBUG_ALLOW_MC')
p.setAttribute('value', '1')
c.appendChild(p)

# Modify the graphics tag to allow shared VNC consoles
g = domxml.getElementsByTagName('graphics')[0]
g.setAttribute('connected', 'keep')
g.setAttribute('sharePolicy', 'ignore')

# Attach the qemu:commandline element to the domain and write it back
d.appendChild(c)
hooking.write_domxml(domxml)
2) Modify /usr/share/vdsm/rpc/BindingXMLRPC.py:

Old: return vm.setTicket(password, ttl, existingConnAction, params)
New: return vm.setTicket(password, ttl, 'keep', params)
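(The hook is picked up at the next VM start as long as the file is executable; the BindingXMLRPC.py change only takes effect after restarting vdsmd, and, being a patch to an installed file, it will be undone by the next vdsm update.)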
Markus
Re: [ovirt-users] Manually preconfigured network interfaces/bridges on oVirt 3.6 HC HE
by Giuseppe Ragusa
On Tue, Nov 3, 2015, at 15:27, Simone Tiraboschi wrote:
>
>
> On Mon, Nov 2, 2015 at 11:55 PM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>> __
>>
>> On Mon, Nov 2, 2015, at 09:52, Simone Tiraboschi wrote:
>>>
>>>
>>> On Mon, Nov 2, 2015 at 1:48 AM, Giuseppe Ragusa <giuseppe.ragusa(a)hotmail.com> wrote:
>>>> Hi all,
>>>> I'm stuck with the following error during the final phase of ovirt-hosted-engine-setup:
>>>>
>>>> The host hosted_engine_1 is in non-operational state.
>>>> Please try to activate it via the engine webadmin UI.
>>>>
>>>> If I login on the engine administration web UI I find the corresponding message (inside NonOperational first host hosted_engine_1 Events tab):
>>>>
>>>> Host hosted_engine_1 does not comply with the cluster Default networks, the following networks are missing on host: 'ovirtmgmt'
>>>>
>>>> I'm installing with an oVirt snapshot from October the 27th on a fully-patched CentOS 7.1 host with a GlusterFS volume (3.7.5 hyperconverged, replica 3, for the engine-vm) pre-created and network interfaces/bridges (ovirtmgmt and two other bridges, called nfs and lan, on underlying 802.3ad bonds or plain interfaces) manually pre-configured in /etc/sysconfig/network-scripts/ifcfg-* (using the "classic" network service; NetworkManager disabled).
>>>>
>>>
>>> If you manually created the network bridges, the match between them and the logical networks should happen on a name basis.
>>
>>
>> Hi Simone,
>> many thanks for your help (again) :)
>>
>> As you may note from the above comment, the name should actually match (it's exactly ovirtmgmt) but it doesn't get recognized.
>>
>>
>>> If it doesn't for any reason (please report if you find any evidence), you can manually bind the logical network to a network interface by editing the host properties from the web-ui. At that point the host should become active in a few seconds.
>>
>>
>> Well, the most immediate evidence is the error messages already reported (given that the bridge is actually present, with the right name, and actually working).
>> Apart from that, I find the following past logs (I don't know whether they are relevant or not):
>>
>> From /var/log/vdsm/connectivity.log:
>
>
> Can you please also add the host-deploy logs?
Please find a gzipped tar archive of the whole directory /var/log/ovirt-engine/host-deploy/ at:
https://onedrive.live.com/redir?resid=74BDE216CAA3E26F!110&authkey=!AIQUc...
Many thanks again for your kind assistance.
Regards,
Giuseppe
>> 2015-11-01 21:37:21,029:DEBUG:recent_client:True
>> 2015-11-01 21:37:51,088:DEBUG:recent_client:False
>> 2015-11-01 21:38:21,146:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
>> 2015-11-01 21:38:36,174:DEBUG:recent_client:True
>> 2015-11-01 21:39:06,233:DEBUG:recent_client:False
>> 2015-11-01 21:48:22,383:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
>> 2015-11-01 21:48:52,450:DEBUG:recent_client:False
>> 2015-11-01 22:55:21,668:DEBUG:recent_client:True, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
>> 2015-11-01 22:56:00,952:DEBUG:recent_client:False, lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
>> 2015-11-01 22:58:16,215:DEBUG:new vnet0:(operstate:up speed:0 duplex:full) new vnet2:(operstate:up speed:0 duplex:full) new vnet1:(operstate:up speed:0 duplex:full)
>> 2015-11-02 00:04:54,019:DEBUG:dropped vnet0:(operstate:up speed:0 duplex:full) dropped vnet2:(operstate:up speed:0 duplex:full) dropped vnet1:(operstate:up speed:0 duplex:full)
>> 2015-11-02 00:05:39,102:DEBUG:new vnet0:(operstate:up speed:0 duplex:full) new vnet2:(operstate:up speed:0 duplex:full) new vnet1:(operstate:up speed:0 duplex:full)
>> 2015-11-02 01:16:47,194:DEBUG:recent_client:True
>> 2015-11-02 01:17:32,693:DEBUG:recent_client:True, vnet0:(operstate:up speed:0 duplex:full), lan:(operstate:up speed:0 duplex:unknown), bond0:(operstate:up speed:2000 duplex:full), bond1:(operstate:up speed:2000 duplex:full), enp0s20f1:(operstate:up speed:1000 duplex:full), ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), ovirtmgmt:(operstate:up speed:0 duplex:unknown), lo:(operstate:up speed:0 duplex:unknown), enp7s0f0:(operstate:up speed:1000 duplex:full), enp6s0f0:(operstate:up speed:100 duplex:full), enp6s0f1:(operstate:up speed:1000 duplex:full), vnet2:(operstate:up speed:0 duplex:full), nfs:(operstate:up speed:0 duplex:unknown), vnet1:(operstate:up speed:0 duplex:full), bond2:(operstate:up speed:3000 duplex:full), enp7s0f1:(operstate:up speed:1000 duplex:full), enp0s20f0:(operstate:up speed:1000 duplex:full), enp0s20f3:(operstate:up speed:1000 duplex:full), enp0s20f2:(operstate:up speed:1000 duplex:full)
>> 2015-11-02 01:18:02,749:DEBUG:recent_client:False
>> 2015-11-02 01:20:18,001:DEBUG:recent_client:True
>>
>> From /var/log/vdsm/vdsm.log:
>>
>> Thread-98::DEBUG::2015-11-01 22:55:16,991::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond0.
>> Thread-98::DEBUG::2015-11-01 22:55:16,992::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond0.
>> Thread-98::DEBUG::2015-11-01 22:55:16,994::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f0.
>> Thread-98::DEBUG::2015-11-01 22:55:16,995::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f0.
>> Thread-98::DEBUG::2015-11-01 22:55:16,997::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f1.
>> Thread-98::DEBUG::2015-11-01 22:55:16,997::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp6s0f1.
>> Thread-98::DEBUG::2015-11-01 22:55:16,999::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f0.
>> Thread-98::DEBUG::2015-11-01 22:55:16,999::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f0.
>> Thread-98::DEBUG::2015-11-01 22:55:17,001::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f0.
>> Thread-98::DEBUG::2015-11-01 22:55:17,001::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f0.
>> Thread-98::DEBUG::2015-11-01 22:55:17,003::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f1.
>> Thread-98::DEBUG::2015-11-01 22:55:17,003::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp7s0f1.
>> Thread-98::DEBUG::2015-11-01 22:55:17,005::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f1.
>> Thread-98::DEBUG::2015-11-01 22:55:17,006::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f1.
>> Thread-98::DEBUG::2015-11-01 22:55:17,007::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f2.
>> Thread-98::DEBUG::2015-11-01 22:55:17,008::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f2.
>> Thread-98::DEBUG::2015-11-01 22:55:17,009::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f3.
>> Thread-98::DEBUG::2015-11-01 22:55:17,010::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on enp0s20f3.
>> Thread-98::DEBUG::2015-11-01 22:55:17,014::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on ovirtmgmt.
>> Thread-98::DEBUG::2015-11-01 22:55:17,015::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on ovirtmgmt.
>> Thread-98::DEBUG::2015-11-01 22:55:17,019::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond1.
>> Thread-98::DEBUG::2015-11-01 22:55:17,019::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond1.
>> Thread-98::DEBUG::2015-11-01 22:55:17,024::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on nfs.
>> Thread-98::DEBUG::2015-11-01 22:55:17,024::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on nfs.
>> Thread-98::DEBUG::2015-11-01 22:55:17,028::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond2.
>> Thread-98::DEBUG::2015-11-01 22:55:17,028::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on bond2.
>> Thread-98::DEBUG::2015-11-01 22:55:17,033::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on lan.
>> Thread-98::DEBUG::2015-11-01 22:55:17,033::netinfo::440::root::(_dhcp_used) There is no VDSM network configured on lan.
>>
>> And further down, always in /var/log/vdsm/vdsm.log:
>>
>> Thread-17::DEBUG::2015-11-02 01:17:18,747::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Host.getCapabilities' in bridge with {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:5ed1a874ff5'}], 'FC': []}, 'packages2': {'kernel': {'release': '229.14.1.el7.x86_64', 'buildtime': 1442322351.0, 'version': '3.10.0'}, 'glusterfs-rdma': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-fuse': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'spice-server': {'release': '9.el7_1.3', 'buildtime': 1444691699L, 'version': '0.12.4'}, 'librbd1': {'release': '2.el7', 'buildtime': 1425594433L, 'version': '0.80.7'}, 'vdsm': {'release': '2.gitdbbc5a4.el7', 'buildtime': 1445459370L, 'version': '4.17.10'}, 'qemu-kvm': {'release': '23.el7_1.9.1', 'buildtime': 1443185645L, 'version': '2.1.2'}, 'glusterfs': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'libvirt': {'release': '16.el7_1.4', 'buildtime': 1442325910L, 'version': '1.2.8'}, 'qemu-img': {'release': '23.el7_1.9.1', 'buildtime': 1443185645L, 'version': '2.1.2'}, 'mom': {'release': '2.el7', 'buildtime': 1442501481L, 'version': '0.5.1'}, 'glusterfs-geo-replication': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-server': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}, 'glusterfs-cli': {'release': '1.el7', 'buildtime': 1444235292L, 'version': '3.7.5'}}, 'numaNodeDistance': {'0': [10]}, 'cpuModel': 'Intel(R) Atom(TM) CPU C2750 @ 2.40GHz', 'liveMerge': 'true', 'hooks': {'before_vm_start': {'50_hostedengine': {'md5': '2a6d96c26a3599812be6cf1a13d9f485'}}}, 'vmTypes': ['kvm'], 'selinux': {'mode': '0'}, 'liveSnapshot': 'true', 'kdumpStatus': 0, 'networks': {}, 'bridges': {'ovirtmgmt': {'addr': '172.25.10.21', 'cfg': {'AGEING': '0', 'DEFROUTE': 'no', 'IPADDR': '172.25.10.21', 'IPV4_FAILURE_FATAL': 'yes', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb37/64'], 'gateway': '', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['172.25.10.21/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0', 'vnet0'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '83', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.002590f1cb37', 'bridge_id': '8000.002590f1cb37', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '83', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'lan': {'addr': '192.168.164.218', 'cfg': {'AGEING': '0', 'IPADDR': '192.168.164.218', 'GATEWAY': '192.168.164.254', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'lan', 'IPV4_FAILURE_FATAL': 'yes', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::a236:9fff:fe38:88cd/64'], 'gateway': '192.168.164.254', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['192.168.164.218/24', '192.168.164.216/24'], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['vnet1', 'enp6s0f0'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '82', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.a0369f3888cd', 'bridge_id': '8000.a0369f3888cd', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '82', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}, 'nfs': {'addr': '172.25.15.21', 'cfg': {'AGEING': '0', 'DEFROUTE': 'no', 'IPADDR': '172.25.15.21', 'IPV4_FAILURE_FATAL': 'yes', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'STP': 'off', 'DEVICE': 'nfs', 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb35/64'], 'gateway': '', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'stp': 'off', 'ipv4addrs': ['172.25.15.21/24', '172.25.15.203/24'], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1', 'vnet2'], 'opts': {'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3125', 'hello_timer': '183', 'multicast_querier_interval': '25500', 'max_age': '2000', 'hash_max': '512', 'stp_state': '0', 'topology_change_detected': '0', 'priority': '32768', 'multicast_membership_interval': '26000', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'nf_call_iptables': '0', 'topology_change': '0', 'hello_time': '200', 'root_id': '8000.002590f1cb35', 'bridge_id': '8000.002590f1cb35', 'topology_change_timer': '0', 'ageing_time': '0', 'nf_call_ip6tables': '0', 'gc_timer': '83', 'nf_call_arptables': '0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval': '100', 'default_pvid': '1', 'multicast_query_interval': '12500', 'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0', 'forward_delay': '0'}}}, 'uuid': '2a1855a9-18fb-4d7a-b8b8-6fc898a8e827', 'onlineCpus': '0,1,2,3,4,5,6,7', 'nics': {'enp0s20f1': {'permhwaddr': '00:25:90:f1:cb:35', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': '-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp0s20f1', 'BOOTPROTO': 'none', 'MASTER': 'bond1', 'HWADDR': '00:25:90:F1:CB:35', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:35', 'speed': 1000, 'gateway': ''}, 'enp7s0f0': {'permhwaddr': 'a0:36:9f:38:88:cf', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': '-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp7s0f0', 'BOOTPROTO': 'none', 'MASTER': 'bond1', 'HWADDR': 'A0:36:9F:38:88:CF', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:35', 'speed': 1000, 'gateway': ''}, 'enp6s0f0': {'addr': '', 'ipv6gateway': '::', 'ipv6addrs': ['fe80::a236:9fff:fe38:88cd/64'], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'BRIDGE': 'lan', 'NM_CONTROLLED': 'no', 'ETHTOOL_OPTS': '-K ${DEVICE} tso off ufo off gso off gro off lro off', 'DEVICE': 'enp6s0f0', 'BOOTPROTO': 'none', 'HWADDR': 'A0:36:9F:38:88:CD', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': 'a0:36:9f:38:88:cd', 'speed': 100, 'gateway': ''}, 'enp6s0f1': {'permhwaddr': 'a0:36:9f:38:88:cc', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp6s0f1', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'HWADDR': 'A0:36:9F:38:88:CC', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:37', 'speed': 1000, 'gateway': ''}, 'enp7s0f1': {'permhwaddr': 'a0:36:9f:38:88:ce', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp7s0f1', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': 'A0:36:9F:38:88:CE', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway': ''}, 'enp0s20f0': {'permhwaddr': '00:25:90:f1:cb:34', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f0', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': '00:25:90:F1:CB:34', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway': ''}, 'enp0s20f3': {'permhwaddr': '00:25:90:f1:cb:37', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f3', 'BOOTPROTO': 'none', 'MASTER': 'bond0', 'HWADDR': '00:25:90:F1:CB:37', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:37', 'speed': 1000, 'gateway': ''}, 'enp0s20f2': {'permhwaddr': '00:25:90:f1:cb:36', 'addr': '', 'ipv6gateway': '::', 'ipv6addrs': [], 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'ipv4addrs': [], 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'DEVICE': 'enp0s20f2', 'BOOTPROTO': 'none', 'MASTER': 'bond2', 'HWADDR': '00:25:90:F1:CB:36', 'TYPE': 'Ethernet', 'ONBOOT': 'yes'}, 'hwaddr': '00:25:90:f1:cb:34', 'speed': 1000, 'gateway': ''}}, 'software_revision': '2', 'hostdevPassthrough': 'false', 'clusterLevels': ['3.4', '3.5', '3.6'], 'cpuFlags': 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,sse4_1,sse4_2,movbe,popcnt,tsc_deadline_timer,aes,rdrand,lahf_lm,3dnowprefetch,ida,arat,epb,dtherm,tpr_shadow,vnmi,flexpriority,ept,vpid,tsc_adjust,smep,erms,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:5ed1a874ff5', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.4', '3.5', '3.6'], 'autoNumaBalancing': 0, 'additionalFeatures': ['GLUSTER_SNAPSHOT', 'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'reservedMem': '321', 'bondings': {'bond0': {'ipv4addrs': [], 'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=balance-rr miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb37/64'], 'active_slave': '', 'mtu': '1500', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': ['enp0s20f3', 'enp6s0f1'], 'hwaddr': '00:25:90:f1:cb:37', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100'}}, 'bond1': {'ipv4addrs': [], 'addr': '', 'cfg': {'BRIDGE': 'nfs', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=802.3ad xmit_hash_policy=layer2+3 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb35/64'], 'active_slave': '', 'mtu': '9000', 'dhcpv4': False, 'netmask': '', 'dhcpv6': False, 'slaves': ['enp0s20f1', 'enp7s0f0'], 'hwaddr': '00:25:90:f1:cb:35', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100', 'mode': '4', 'xmit_hash_policy': '2'}}, 'bond2': {'ipv4addrs': ['172.25.5.21/24'], 'addr': '172.25.5.21', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '172.25.5.21', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'static', 'BONDING_OPTS': 'mode=802.3ad xmit_hash_policy=layer2+3 miimon=100', 'DEVICE': 'bond2', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::225:90ff:fef1:cb34/64'], 'active_slave': '', 'mtu': '9000', 'dhcpv4': False, 'netmask': '255.255.255.0', 'dhcpv6': False, 'slaves': ['enp0s20f0', 'enp0s20f2', 'enp7s0f1'], 'hwaddr': '00:25:90:f1:cb:34', 'ipv6gateway': '::', 'gateway': '', 'opts': {'miimon': '100', 'mode': '4', 'xmit_hash_policy': '2'}}}, 'software_version': '4.17', 'memSize': '16021', 'cpuSpeed': '2401.000', 'numaNodes': {'0': {'totalMemory': '16021', 'cpus': [0, 1, 2, 3, 4, 5, 6, 7]}}, 'cpuSockets': '1', 'vlans': {}, 'lastClientIface': 'ovirtmgmt', 'cpuCores': '8', 'kvmEnabled': 'true', 'guestOverhead': '65', 'version_name': 'Snow Man', 'cpuThreads': '8', 'emulatedMachines': ['pc-i440fx-rhel7.1.0', 'rhel6.3.0', 'pc-q35-rhel7.0.0', 'rhel6.1.0', 'rhel6.6.0', 'rhel6.2.0', 'pc', 'pc-q35-rhel7.1.0', 'q35', 'rhel6.4.0', 'rhel6.0.0', 'rhel6.5.0', 'pc-i440fx-rhel7.0.0'], 'rngSources': ['random'], 'operatingSystem': {'release': '1.1503.el7.centos.2.8', 'version': '7', 'name': 'RHEL'}}
>>
>> Navigating the admin web UI offered by the engine and editing the hosted_engine_1 host (with the "Edit" button or the corresponding context menu entry), I do not find any way to associate the logical oVirt ovirtmgmt network with the already present ovirtmgmt Linux bridge.
>> Furthermore, the "Network Interfaces" tab of that host shows only plain interfaces and bonds (all marked with a down-pointing red arrow, even though they are actually up and running), but not the already defined Linux bridges. Inside this tab I find two buttons. "Setup Host Networks" would let me drag-and-drop the ovirtmgmt logical network onto an already present bond (the right one: bond0), but I avoid it, since I fear it would try to create the bridge from scratch, while the bridge is actually present, already carries the host address, and is what allows engine-host communication at the moment. "Sync All Networks" actively scares me with a threatening "Are you sure you want to synchronize all host's networks?", which I deny, since its view is already wrong and it is absolutely not clear in which direction the synchronization would go.
>>
>> So, it seems to me that either I need to perform further pre-configuration steps on the host for the ovirtmgmt bridge (beyond the ifcfg-* setup), or there is a bug in the setup/adminportal (a UI/usability bug, maybe) :)
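(As a quick sanity check on the host that the bridge VDSM should match really exists under the expected name: any Linux bridge lists its enslaved ports under /sys/class/net/<name>/brif; a minimal stdlib sketch:)

#!/usr/bin/python
# Minimal sketch: verify on the host that the bridge exists under the name
# the logical network expects, and list its enslaved ports via sysfs.
import os

BRIDGE = 'ovirtmgmt'
brif = '/sys/class/net/%s/brif' % BRIDGE
if os.path.isdir(brif):
    print('%s exists; ports: %s' % (BRIDGE, ', '.join(os.listdir(brif))))
else:
    print('%s is missing or is not a bridge' % BRIDGE)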
>>
>> Many thanks again for your help.
>>
>> Kind regards,
>> Giuseppe
>>
>>
>>> When the host becomes active you'll be able to continue with hosted-engine-setup.
>>>
>>>> I seem to recall that a preconfigured network setup on oVirt 3.6 would need something predefined on the libvirt side too (apart from the usual ifcfg-* files), but I cannot find the relevant mailing list message anymore, nor any other specific documentation.
>>>>
>>>> Does anyone have any further suggestion or clue (code/docs to read)?
>>>>
>>>> Many thanks in advance.
>>>>
>>>> Kind regards,
>>>> Giuseppe
>>>>
>>>> PS: please also keep my address in the replies, because I'm experiencing some problems between Hotmail and the oVirt mailing list
Multiple console access
by Budur Nagaraju
Hi,
I am using the SPICE console to access the VM console; how do I enable
multi-user console access?
Thanks,
Nagaraju