Proposal to move master to fc25 or rawhide
by Sandro Bonazzola
Hi, since we're at the beginning of the 4.2 cycle, I'd like to propose
switching master from fc24 to fc25 or to rawhide, given that fc26 is going
GA on 2017-06-06.
What do you think?
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
hotplug_nic test has failed
by Gil Shinar
I have found the following exception in the VDSM log:
MainProcess|jsonrpc/0::DEBUG::2017-01-24
10:13:50,303::supervdsm_server::101::SuperVdsm.ServerCallback::(wrapper)
return network_caps with {'bridges': {'ovirtmgmt': {'ipv6autoconf':
True, 'addr': '192.168.201.2', 'dhcpv6': False, 'ipv6addrs': [],
'gateway': '192.168.201.1', 'dhcpv4': True, 'netmask':
'255.255.255.0', 'ipv4defaultroute': True, 'stp': 'off', 'ipv4addrs':
['192.168.201.2/24'], 'mtu': '1500',
'ipv6gateway': '::', 'ports': ['eth0'], 'opts':
{'multicast_last_member_count': '2', 'hash_elasticity': '4',
'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0',
'multicast_snooping': '1', 'multicast_startup_query_interval': '3125',
'hello_timer': '15', 'multicast_querier_interval': '25500', 'max_age':
'2000', 'hash_max': '512', 'stp_state': '0',
'topology_change_detected': '0', 'priority': '32768',
'multicast_membership_interval': '26000', 'root_path_cost': '0',
'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.5452c0a8c902', 'bridge_id': '8000.5452c0a8c902',
'topology_change_timer': '0', 'ageing_time': '30000',
'nf_call_ip6tables': '0', 'gc_timer': '20299', 'nf_call_arptables':
'0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval':
'100', 'default_pvid': '1', 'multicast_query_interval': '12500',
'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0',
'forward_delay': '0'}}}, 'bondings': {}, 'nameservers':
['192.168.201.1'], 'nics': {'eth3': {'ipv6autoconf': True, 'addr':
'192.168.202.212', 'ipv6gateway': 'fe80::5054:ff:fe65:57d6', 'dhcpv6':
False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': True, 'netmask':
'255.255.255.0', 'ipv4defaultroute': False, 'ipv4addrs':
['192.168.202.212/24'], 'hwaddr':
'54:52:c0:a8:ca:03', 'speed': 0, 'gateway': '192.168.202.1'}, 'eth2':
{'ipv6autoconf': True, 'addr': '192.168.202.211', 'ipv6gateway': '::',
'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': True,
'netmask': '255.255.255.0', 'ipv4defaultroute': False, 'ipv4addrs':
['192.168.202.211/24'], 'hwaddr':
'54:52:c0:a8:ca:02', 'speed': 0, 'gateway': '192.168.202.1'}, 'eth1':
{'ipv6autoconf': True, 'addr': '192.168.200.143', 'ipv6gateway': '::',
'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': True,
'netmask': '255.255.255.0', 'ipv4defaultroute': False, 'ipv4addrs':
['192.168.200.143/24'], 'hwaddr':
'54:52:c0:a8:c8:02', 'speed': 0, 'gateway': '192.168.200.1'}, 'eth0':
{'ipv6autoconf': False, 'addr': '', 'ipv6gateway': '::', 'dhcpv6':
False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '',
'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr':
'54:52:c0:a8:c9:02', 'speed': 0, 'gateway': ''}}, 'supportsIPv6':
True, 'vlans': {}, 'networks': {'ovirtmgmt': {'dhcpv6': False,
'iface': 'ovirtmgmt', 'ipv6autoconf': True, 'addr': '192.168.201.2',
'bridged': True, 'ipv6addrs': [], 'switch': 'legacy', 'gateway':
'192.168.201.1', 'dhcpv4': True, 'netmask': '255.255.255.0',
'ipv4defaultroute': True, 'stp': 'off', 'ipv4addrs':
['192.168.201.2/24'], 'mtu': '1500',
'ipv6gateway': '::', 'ports': ['eth0']}}}
ifup/VLAN100_Network::DEBUG::2017-01-24
10:13:53,048::commands::93::root::(execCmd) FAILED: <err> = 'Running
scope as unit 0e9c5843-c89e-499b-9057-1d70ea366504.scope.\n/etc/sysconfig/network-scripts/ifup-eth:
line 297: 16535 Terminated /sbin/dhclient ${DHCLIENTARGS}
${DEVICE}\nCannot find device "VLAN100_Network"\nDevice
"VLAN100_Network" does not exist.\n[same message repeated dozens of
times]\n'; <rc> = 1
ifup/VLAN100_Network::ERROR::2017-01-24
10:13:53,049::concurrent::189::root::(run) FINISH thread
<Thread(ifup/VLAN100_Network, started daemon 140683650258688)> failed
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/concurrent.py", line 185, in run
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py",
line 910, in _exec_ifup
_exec_ifup_by_name(iface.name, cgroup)
File "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py",
line 896, in _exec_ifup_by_name
raise ConfigNetworkError(ERR_FAILED_IFUP, out[-1] if out else '')
ConfigNetworkError: (29, 'Determining IPv6 information for
VLAN100_Network... failed.')
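The traceback bottoms out in ifup failing because the kernel device is
missing when dhclient runs. A quick sanity check on the host (device name
taken from the log above; substitute as needed) could be:

```shell
#!/bin/sh
# Check whether the VLAN device that ifup complained about actually exists.
dev="VLAN100_Network"
if ip link show "$dev" >/dev/null 2>&1; then
    echo "$dev exists"
else
    # This is the state the ifup-eth script kept hitting in the log.
    echo "$dev does not exist"
fi
```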
Can someone please assist? Engine and vdsm logs are at [1].
[1]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4935/art...
Thanks
Gil
Redundant stomp connection for SPM
by Roy Golan
For each SPM host, the engine creates a separate stomp connection (see
JsonRpcIIrsServer.java). This also means another client connection to
maintain on the SPM machine.
What is the reason we still have it, and what does it guarantee? Please add
any info here prior to opening a bug to remove it for 4.2.
Thread pools and ManagedExecutorService
by Roy Golan
Java EE 7 included ManagedExecutorService in its spec. Using Wildfly (a
certified EE 7 container), we can use it to replace our own ThreadPoolUtil
implementation and (possibly) our Quartz usage.
A managed executor service is a thread pool resource, managed by the
container, that can be controlled via JMX or via the startup
ovirt-engine.xml.in facility. This means it can be tweaked at runtime as
well. The service also has a configured thread factory and queue, all
described by the xml.
In the engine we are using multiple pools, with little to no ability to
tweak them:
- ThreadPoolUtil - our general threading facade
- SchedulerThreadPool - the pool we create to pass to Quartz
- HostUpdatesCheckerService.java - internal pool
- CommandExecutor - coco's pool
EE gives us a standard way to handle threads, with runtime configuration and
the ability to @Inject it into the code. This means, once again, less code,
by using something the platform already supplies, with real tuning
abilities.
I know #infra has an item on Quartz; this should be considered as well.
Following this thread, if there is no suitable bug already, I'll open one
for this.
Some code snippets:
@Resource
private ManagedExecutorService mes;
...
Future<?> f = mes.submit(() -> doThis());
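Outside a container, the same submit-and-get pattern can be sketched with a
plain java.util.concurrent pool; the container-managed version only swaps
the explicit pool creation for the @Resource injection above. The class and
names here are illustrative, not engine code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorSketch {
    public static void main(String[] args) throws Exception {
        // In the engine this pool would be injected by Wildfly via @Resource;
        // here a plain JDK pool stands in to illustrate the same API.
        ExecutorService mes = Executors.newFixedThreadPool(2);
        Future<Integer> f = mes.submit(() -> 21 * 2);
        System.out.println(f.get()); // prints 42
        mes.shutdown();
    }
}
```

The point is that submission code written against ExecutorService works
unchanged against ManagedExecutorService, since the latter extends it.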
The configuration in ovirt-engine.xml.in (already there today!):
<managed-executor-services>
<managed-executor-service
name="default"
jndi-name="java:jboss/ee/concurrency/executor/default"
context-service="default"
thread-factory="default"
hung-task-threshold="60000"
core-threads="5"
max-threads="25"
keepalive-time="5000"
queue-length="1000000"
reject-policy="RETRY_ABORT" />
</managed-executor-services>
Please head here for the Wildfly docs [1] and here [2] for a simple example
(from one of the members of the EE expert group; a recommended blog in
general).
[1]
https://docs.jboss.org/author/display/WFLY8/EE+Concurrency+Utilities+Conf...
[2]
http://www.adam-bien.com/roller/abien/entry/injecting_an_executorservice_...