Proposal to move master to fc25 or rawhide
by Sandro Bonazzola
Hi, since we're at the beginning of the 4.2 cycle, I'd like to propose
switching master from fc24 to fc25 or to rawhide, given that fc26 is
going GA on 2017-06-06.
What do you think?
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[ANN] oVirt 4.1.0 First Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the First
Release Candidate of oVirt 4.1.0 for testing, as of January 23rd, 2017.
This is pre-release software. Please take a look at our community page[1]
to know how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This update is the first release candidate of the 4.1 release series.
4.1.0 brings more than 250 enhancements and more than 700 bugfixes,
including more than 300 high- or urgent-severity fixes, on top of the
oVirt 4.0 series.
See the release notes [3] for installation / upgrade instructions and a
list of new features and bugs fixed.
This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* Fedora 24 (tech preview)
* oVirt Node 4.1
Notes:
- The oVirt Live ISO is already available [5]
- The oVirt Node NG ISO is already available [5]
- The Hosted Engine appliance is already available
A release management page, including the planned schedule, is also
available [4]
Additional Resources:
* Read more about the oVirt 4.1.0 release highlights:
http://www.ovirt.org/release/4.1.0/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.0/
[4] http://www.ovirt.org/develop/release-management/releases/4.1/release-mana...
[5] http://resources.ovirt.org/pub/ovirt-4.1-pre/iso/
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
hotplug_nic test has failed
by Gil Shinar
I have found the following exception in the VDSM log:
MainProcess|jsonrpc/0::DEBUG::2017-01-24
10:13:50,303::supervdsm_server::101::SuperVdsm.ServerCallback::(wrapper)
return network_caps with {'bridges': {'ovirtmgmt': {'ipv6autoconf':
True, 'addr': '192.168.201.2', 'dhcpv6': False, 'ipv6addrs': [],
'gateway': '192.168.201.1', 'dhcpv4': True, 'netmask':
'255.255.255.0', 'ipv4defaultroute': True, 'stp': 'off', 'ipv4addrs':
['192.168.201.2/24'], 'mtu': '1500',
'ipv6gateway': '::', 'ports': ['eth0'], 'opts':
{'multicast_last_member_count': '2', 'hash_elasticity': '4',
'multicast_query_response_interval': '1000', 'group_fwd_mask': '0x0',
'multicast_snooping': '1', 'multicast_startup_query_interval': '3125',
'hello_timer': '15', 'multicast_querier_interval': '25500', 'max_age':
'2000', 'hash_max': '512', 'stp_state': '0',
'topology_change_detected': '0', 'priority': '32768',
'multicast_membership_interval': '26000', 'root_path_cost': '0',
'root_port': '0', 'multicast_querier': '0',
'multicast_startup_query_count': '2', 'nf_call_iptables': '0',
'topology_change': '0', 'hello_time': '200', 'root_id':
'8000.5452c0a8c902', 'bridge_id': '8000.5452c0a8c902',
'topology_change_timer': '0', 'ageing_time': '30000',
'nf_call_ip6tables': '0', 'gc_timer': '20299', 'nf_call_arptables':
'0', 'group_addr': '1:80:c2:0:0:0', 'multicast_last_member_interval':
'100', 'default_pvid': '1', 'multicast_query_interval': '12500',
'tcn_timer': '0', 'multicast_router': '1', 'vlan_filtering': '0',
'forward_delay': '0'}}}, 'bondings': {}, 'nameservers':
['192.168.201.1'], 'nics': {'eth3': {'ipv6autoconf': True, 'addr':
'192.168.202.212', 'ipv6gateway': 'fe80::5054:ff:fe65:57d6', 'dhcpv6':
False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': True, 'netmask':
'255.255.255.0', 'ipv4defaultroute': False, 'ipv4addrs':
['192.168.202.212/24'], 'hwaddr':
'54:52:c0:a8:ca:03', 'speed': 0, 'gateway': '192.168.202.1'}, 'eth2':
{'ipv6autoconf': True, 'addr': '192.168.202.211', 'ipv6gateway': '::',
'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': True,
'netmask': '255.255.255.0', 'ipv4defaultroute': False, 'ipv4addrs':
['192.168.202.211/24'], 'hwaddr':
'54:52:c0:a8:ca:02', 'speed': 0, 'gateway': '192.168.202.1'}, 'eth1':
{'ipv6autoconf': True, 'addr': '192.168.200.143', 'ipv6gateway': '::',
'dhcpv6': False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': True,
'netmask': '255.255.255.0', 'ipv4defaultroute': False, 'ipv4addrs':
['192.168.200.143/24'], 'hwaddr':
'54:52:c0:a8:c8:02', 'speed': 0, 'gateway': '192.168.200.1'}, 'eth0':
{'ipv6autoconf': False, 'addr': '', 'ipv6gateway': '::', 'dhcpv6':
False, 'ipv6addrs': [], 'mtu': '1500', 'dhcpv4': False, 'netmask': '',
'ipv4defaultroute': False, 'ipv4addrs': [], 'hwaddr':
'54:52:c0:a8:c9:02', 'speed': 0, 'gateway': ''}}, 'supportsIPv6':
True, 'vlans': {}, 'networks': {'ovirtmgmt': {'dhcpv6': False,
'iface': 'ovirtmgmt', 'ipv6autoconf': True, 'addr': '192.168.201.2',
'bridged': True, 'ipv6addrs': [], 'switch': 'legacy', 'gateway':
'192.168.201.1', 'dhcpv4': True, 'netmask': '255.255.255.0',
'ipv4defaultroute': True, 'stp': 'off', 'ipv4addrs':
['192.168.201.2/24'], 'mtu': '1500',
'ipv6gateway': '::', 'ports': ['eth0']}}}
ifup/VLAN100_Network::DEBUG::2017-01-24
10:13:53,048::commands::93::root::(execCmd) FAILED: <err> = 'Running
scope as unit 0e9c5843-c89e-499b-9057-1d70ea366504.scope.\n/etc/sysconfig/network-scripts/ifup-eth:
line 297: 16535 Terminated /sbin/dhclient ${DHCLIENTARGS}
${DEVICE}\nCannot find device "VLAN100_Network"\nDevice
"VLAN100_Network" does not exist.\n[the same "Device "VLAN100_Network"
does not exist." message repeated roughly 50 more times]'; <rc> = 1
ifup/VLAN100_Network::ERROR::2017-01-24
10:13:53,049::concurrent::189::root::(run) FINISH thread
<Thread(ifup/VLAN100_Network, started daemon 140683650258688)> failed
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/concurrent.py", line 185, in run
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py",
line 910, in _exec_ifup
_exec_ifup_by_name(iface.name, cgroup)
File "/usr/lib/python2.7/site-packages/vdsm/network/configurators/ifcfg.py",
line 896, in _exec_ifup_by_name
raise ConfigNetworkError(ERR_FAILED_IFUP, out[-1] if out else '')
ConfigNetworkError: (29, 'Determining IPv6 information for
VLAN100_Network... failed.')
Can someone please assist (engine and VDSM logs [1])?
[1]
http://jenkins.ovirt.org/job/test-repo_ovirt_experimental_master/4935/art...
Thanks
Gil
Redundant stomp connection for SPM
by Roy Golan
For each SPM host, the engine creates a separate STOMP connection (see
JsonRpcIIrsServer.java). This also means another client connection to
maintain on the SPM machine.
What is the reason we still have it and what does it guarantee? Please add
any info here prior to opening a bug to remove it for 4.2.
Thread pools and ManagedExecutorService
by Roy Golan
Java EE 7 included ManagedExecutorService in its spec. Since WildFly is
a certified EE 7 container, we can use it to replace our own
ThreadPoolUtil implementation and (possibly) our Quartz usage.
A managed executor service is a thread pool resource, managed by the
container, that can be controlled via JMX or the startup
ovirt-engine.xml.in facility. This means it can be tweaked at runtime
as well. The service also has a configurable thread factory and queue,
all described in the XML.
In the engine we are using multiple pools, with little to no ability to
tweak them:
- ThreadPoolUtil - our general threading facade
- SchedulerThreadPool - the pool we create to pass to quarz
- HostUpdatesCheckerService.java - internal pool
- CommandExecutor - CoCo's (CommandCoordinator) pool
EE gives us a standard way to handle threads, with runtime
configuration and the ability to @Inject it into the code. Once again,
this means less code, by using something the platform already supplies
with real tuning abilities.
I know #infra has an item on Quartz; this should be considered as well.
Following this thread, if there is no suitable bug already, I'll open one
for this.
Some code snippets:
@Resource
private ManagedExecutorService mes;
...
Future<?> f = mes.submit(() -> doThis());
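Note that a bare @Resource resolves to the container's default managed
executor, which WildFly maps to the "default" entry shown below; a
specific pool can be selected with @Resource(lookup = "...").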
The configuration in ovirt-engine.xml.in (already there today!):
<managed-executor-services>
<managed-executor-service
name="default"
jndi-name="java:jboss/ee/concurrency/executor/default"
context-service="default"
thread-factory="default"
hung-task-threshold="60000"
core-threads="5"
max-threads="25"
keepalive-time="5000"
queue-length="1000000"
reject-policy="RETRY_ABORT" />
</managed-executor-services>
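As a rough sketch of what replacing one of the private pools listed
above could look like, assuming a second managed-executor-service entry
named "engine" were added next to "default" (the class and JNDI names
here are illustrative, not existing oVirt code):

import java.util.concurrent.Future;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.enterprise.concurrent.ManagedExecutorService;

// Hypothetical replacement for one of the engine's private pools
// (e.g. the HostUpdatesCheckerService one); the "engine" executor is
// an assumption and would have to be declared in ovirt-engine.xml.in.
@Stateless
public class HostUpdatesChecker {

    // Inject the container-managed pool by its JNDI name instead of
    // creating a private executor in code.
    @Resource(lookup = "java:jboss/ee/concurrency/executor/engine")
    private ManagedExecutorService executor;

    public Future<Boolean> checkAsync(String hostId) {
        // The container propagates the EE context (naming, security)
        // into the task and applies the configured queue/thread limits.
        return executor.submit(() -> fetchAvailableUpdates(hostId));
    }

    private Boolean fetchAvailableUpdates(String hostId) {
        // Placeholder for the actual update check logic.
        return Boolean.FALSE;
    }
}

The point being that core-threads, max-threads, queue-length and the
hung-task threshold all live in the server XML, so they can be tuned
per deployment without touching code.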
Please head here for the WildFly docs [1] and here [2] for a simple
example (from one of the members of the EE expert group; a recommended
blog in general).
[1]
https://docs.jboss.org/author/display/WFLY8/EE+Concurrency+Utilities+Conf...
[2]
http://www.adam-bien.com/roller/abien/entry/injecting_an_executorservice_...