Re: [ovirt-devel] [ovirt-users] oVirt HA.
by Sven Kieske
On 28/04/15 16:20, Dan Yasny wrote:
> HA does not mean multiple running instances of the same service. It means
> if the service is gone, it will automatically be restored on a working
> server.
That is a pretty narrow definition of HA, which is not shared by
most parts of the community (and the world), leading to much confusion
among users on this very ML.
HA in general means that your service downtime is minimized, today mostly
realized through load balancing and clustering of software services.
Just restarting a service (in this case the ovirt-engine VM) on a different
host is not what today's users expect under the term "HA", IMHO.
In theory it should be possible, as a design goal, to make multiple
ovirt-engine instances share one remote database (remote database support
is already there).
I think this would be a huge feature for oVirt, but it also requires quite
some design and coding work.
Thus adding the devel list.
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Managing Director: Robert Meyer
Tax no.: 331/5721/1033, VAT ID: DE814773217, HRA 6640, Local Court Bad Oeynhausen
General partner: Robert Meyer Verwaltungs GmbH, HRB 13260, Local Court Bad Oeynhausen
9 years, 7 months
Announcing the 1st oVirt infra hackathon - join the fun!
by Eyal Edri
Hi to all dev & infra members of the oVirt project.
As you may (or may not :) know, the infra team handles many resources and systems that support the oVirt project.
Some are visible, like gerrit or jenkins, and some run in the background, like puppet, foreman, data-center maintenance, monitoring, jenkins job builder and many more.
In an effort to reduce open issues and tickets [1], we decided to hold the 1st infra hackathon, so we can join forces for a single
concentrated day and try to resolve as many tickets as possible.
We've created a Google sheet for it [2], so anyone from the community can volunteer and help out, even if they're not an infra member!
This is also a great opportunity for new community members who wish to join the team or just get their hands dirty with some "devops" tasks...
Each task has a 'verifier' column listing the infra members or people from the oVirt project who can assist with code review, help and verification. [3]
If you see a task in [1] that you'd like to take on, feel free to add it to [2] if it's not there already.
An exact date will be published soon, but it might be as early as next week, so make sure to write your name on a task if you're interested!
Note: infra team availability during the hackathon might be limited, and only urgent issues will be handled.
happy hacking,
oVirt Infra Team
[1] https://fedorahosted.org/ovirt/report/1
[2] http://goo.gl/QtWVJ3
[3] if you think you can help as a reviewer, please add your name in sheet2.
[ACTION NEEDED][QE] oVirt 3.6.0 status
by Sandro Bonazzola
Hi, here's an update on the 3.6 status on the integration / rel-eng side.
The tracker bug for 3.6.0 [1] currently shows no blockers.
Repository closure is currently broken on Fedora 20 and CentOS 6,
due to a missing required dependency on recent libvirt and the vdsm RPM dropping el6 support.
VDSM builds for EL6 are no longer available in the master snapshot.
ACTION: jenkins job owners: please review jenkins jobs that rely on VDSM being available on EL6.
There are 535 bugs [2] targeted to 3.6.0.
                        NEW  ASSIGNED  POST  Total
abrt+infra                2         0     0      2
docs                     10         0     0     10
external                  1         0     0      1
gluster                  30        36    20     86
i18n                      2         0     0      2
infra                    44         3     8     55
integration              35         2     9     46
network                  46         1     8     55
node                     26         2     3     31
ovirt-node-plugin-vdsm    1         0     0      1
sla                      45         3     2     50
spice                     1         0     0      1
storage                  73         7     5     85
ux                       27         0     3     30
virt                     63         6    11     80
Total                   406        60    69    535
Feature submission is now CLOSED as per the current release schedule.
ACTION: review the features tracked in the google doc [3]
On Integration side:
* Progress on backup / restore RFEs
* Fixing regressions in Hosted Engine deployment on additional hosts
On Release engineering side:
* Working with the infra team on fixing jenkins issues
ACTION: community members are welcome to join the QE effort [4] by testing the nightly master snapshot [5] on test systems
[1] https://bugzilla.redhat.com/1155425
[2] https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_release%3A3.6....
[3] http://goo.gl/9X3G49
[4] http://www.ovirt.org/OVirt_Quality_Assurance
[5] http://www.ovirt.org/Install_nightly_snapshot
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
VDSM - sampling.py - remove() called without previous add()
by Christopher Pereira
Hi,
In sampling.py, remove() is being called without a prior call to add(),
which throws:
JsonRpc (StompReactor)::DEBUG::2015-04-28 17:35:55,061::stompReactor::94::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=u'SEND'>
Thread-37401::DEBUG::2015-04-28 17:35:55,062::__init__::445::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'VM.destroy' in bridge with {u'vmID': u'6ec9c0a0-2879-4bfe-9a79-92471881ebfe'}
JsonRpcServer::DEBUG::2015-04-28 17:35:55,062::__init__::482::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-37401::INFO::2015-04-28 17:35:55,062::API::334::vds::(destroy) vmContainerLock acquired by vm 6ec9c0a0-2879-4bfe-9a79-92471881ebfe
Thread-37401::DEBUG::2015-04-28 17:35:55,062::vm::3513::vm.Vm::(destroy) vmId=`6ec9c0a0-2879-4bfe-9a79-92471881ebfe`::destroy Called
Thread-37401::INFO::2015-04-28 17:35:55,062::vm::3444::vm.Vm::(releaseVm) vmId=`6ec9c0a0-2879-4bfe-9a79-92471881ebfe`::Release VM resources
Thread-37401::WARNING::2015-04-28 17:35:55,062::vm::375::vm.Vm::(_set_lastStatus) vmId=`6ec9c0a0-2879-4bfe-9a79-92471881ebfe`::trying to set state to Powering down when already Down
Thread-37401::ERROR::2015-04-28 17:35:55,063::__init__::469::jsonrpc.JsonRpcServer::(_serveRequest) Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 464, in _serveRequest
    res = method(**params)
  File "/usr/share/vdsm/rpc/Bridge.py", line 273, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 339, in destroy
    res = v.destroy()
  File "/usr/share/vdsm/virt/vm.py", line 3515, in destroy
    result = self.doDestroy()
  File "/usr/share/vdsm/virt/vm.py", line 3533, in doDestroy
    return self.releaseVm()
  File "/usr/share/vdsm/virt/vm.py", line 3448, in releaseVm
    sampling.stats_cache.remove(self.id)
  File "/usr/share/vdsm/virt/sampling.py", line 428, in remove
    if vmid in self._vm_last_timestamp.keys():
KeyError: u'6ec9c0a0-2879-4bfe-9a79-92471881ebfe'
Thread-37401::DEBUG::2015-04-28 17:35:55,063::stompReactor::158::yajsonrpc.StompServer::(send) Sending response
In file '/usr/share/vdsm/virt/sampling.py':

    def add(self, vmid):
        """
        Warm up the cache for the given VM.
        This is to avoid races during the first sampling and the first
        reporting, which may result in a VM wrongly reported as
        unresponsive.
        """
        with self._lock:
            self._vm_last_timestamp[vmid] = self._clock()

    def remove(self, vmid):
        """
        Remove any data from the cache related to the given VM.
        """
        with self._lock:
            if vmid in self._vm_last_timestamp.keys():  # <-- I patched here as a workaround
                del self._vm_last_timestamp[vmid]
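The same idempotent remove() can be written without the membership test; a minimal sketch, assuming the same _lock / _vm_last_timestamp layout as sampling.py (the real add() takes its timestamp from an internal clock; here it is simplified to an explicit argument):

```python
import threading

# Sketch of the stats cache, not VDSM's actual class: dict.pop() with a
# default makes remove() a no-op for a VM that was never add()ed, so no
# KeyError can escape to the JSON-RPC layer.
class StatsCache(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._vm_last_timestamp = {}

    def add(self, vmid, timestamp):
        # Warm up the cache for the given VM.
        with self._lock:
            self._vm_last_timestamp[vmid] = timestamp

    def remove(self, vmid):
        # pop() with a default never raises KeyError.
        with self._lock:
            self._vm_last_timestamp.pop(vmid, None)
```

With this shape, destroying a VM that was never added becomes a no-op instead of an internal server error.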
Orphaning ovirt-node in Fedora
by Fabian Deutsch
Hey,
Historically, we maintained the ovirt-node package in Fedora.
But due to the nature of ovirt-node, the package is not directly usable in Fedora.
To prevent accidental use of the package, and because there is no other value in maintaining it there, I plan to orphan this package in Fedora.
Is anybody objecting to this?
If not, I'll orphan it in one week.
We can always bring it back if it becomes necessary, but I do not see a reason why this should happen.
Greetings
fabian
libvirtError: unsupported configuration: timer hypervclock doesn't support setting of timer tickpolicy
by Christopher Pereira
It seems we need to upgrade the libvirt dependency in the master repo:
Thread-78::ERROR::2015-04-29 06:23:04,584::vm::741::vm.Vm::(_startUnderlyingVm) vmId=`6ec9c0a0-2879-4bfe-9a79-92471881ebfe`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 689, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 1800, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 126, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3427, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: unsupported configuration: timer hypervclock doesn't support setting of timer tickpolicy
To reproduce, start a "Windows 2008 R2 x64" VM (there is no problem with
Windows 2012).
Glad to see that the hv_ optimization flags are now supported, since they
boost Windows VM performance using Microsoft's in-house Hyper-V
optimizations.
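Until the master repo picks up a libvirt that accepts this configuration, one conceivable workaround (a hypothetical sketch, not VDSM's actual code; the function name is made up) is to strip the tickpolicy attribute from the hypervclock timer before handing the domain XML to createXML():

```python
import xml.etree.ElementTree as ET

def drop_hypervclock_tickpolicy(domxml):
    """Remove the tickpolicy attribute from any <timer name='hypervclock'>
    element, since this libvirt rejects that combination; other timers
    (rtc, pit, ...) keep their tickpolicy untouched."""
    root = ET.fromstring(domxml)
    for timer in root.iter('timer'):
        if timer.get('name') == 'hypervclock':
            timer.attrib.pop('tickpolicy', None)
    return ET.tostring(root, encoding='unicode')
```

This only masks the symptom on the VDSM side; the real fix is the libvirt upgrade.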
Engine Broken - The column name gluster_tuned_profile was not found in this ResultSet
by Christopher Pereira
Hi, something broke the Engine's database in master:
2015-04-28 05:53:15,959 ERROR [org.ovirt.engine.core.bll.InitBackendServicesOnStartupBean] (MSC service thread 1-4) [] Failed to initialize backend: org.jboss.weld.exceptions.WeldException: WELD-000049 Unable to invoke [method] @PostConstruct private org.ovirt.engine.core.vdsbroker.ResourceManager.init() on org.ovirt.engine.core.vdsbroker.ResourceManager@38e3648c
    at org.jboss.weld.bean.AbstractClassBean.defaultPostConstruct(AbstractClassBean.java:518) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    at org.jboss.weld.bean.ManagedBean$ManagedBeanInjectionTarget.postConstruct(ManagedBean.java:174) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    at org.jboss.weld.bean.ManagedBean.create(ManagedBean.java:291) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    at org.jboss.weld.context.AbstractContext.get(AbstractContext.java:107) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    at org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:616) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    at org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:643) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    at org.ovirt.engine.core.di.Injector.instanceOf(Injector.java:73) [vdsbroker.jar:]
    at org.ovirt.engine.core.di.Injector.get(Injector.java:58) [vdsbroker.jar:]
    at org.ovirt.engine.core.bll.InitBackendServicesOnStartupBean.create(InitBackendServicesOnStartupBean.java:75) [bll.jar:]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.7.0_79]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [rt.jar:1.7.0_79]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_79]
    at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_79]
    at org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptorFactory$ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptorFactory.java:130) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.as.weld.injection.WeldInjectionInterceptor.processInvocation(WeldInjectionInterceptor.java:73) [jboss-as-weld-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.as.ee.component.ManagedReferenceInterceptorFactory$ManagedReferenceInterceptor.processInvocation(ManagedReferenceInterceptorFactory.java:95) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInOurTx(CMTTxInterceptor.java:228) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ejb3.tx.CMTTxInterceptor.requiresNew(CMTTxInterceptor.java:333) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ejb3.tx.SingletonLifecycleCMTTxInterceptor.processInvocation(SingletonLifecycleCMTTxInterceptor.java:56) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.as.ee.component.TCCLInterceptor.processInvocation(TCCLInterceptor.java:45) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:288) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) [jboss-invocation-1.1.1.Final.jar:1.1.1.Final]
    at org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:161) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:85) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ejb3.component.singleton.SingletonComponent.getComponentInstance(SingletonComponent.java:116) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ejb3.component.singleton.SingletonComponent.start(SingletonComponent.java:130) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.as.ee.component.ComponentStartService.start(ComponentStartService.java:44) [jboss-as-ee-7.1.1.Final.jar:7.1.1.Final]
    at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1811)
    at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1746)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_79]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_79]
    at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_79]
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.7.0_79]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [rt.jar:1.7.0_79]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_79]
    at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_79]
    at org.jboss.weld.util.reflection.SecureReflections$13.work(SecureReflections.java:264) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    at org.jboss.weld.util.reflection.SecureReflectionAccess.run(SecureReflectionAccess.java:52) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    at org.jboss.weld.util.reflection.SecureReflectionAccess.runAsInvocation(SecureReflectionAccess.java:137) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    at org.jboss.weld.util.reflection.SecureReflections.invoke(SecureReflections.java:260) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    at org.jboss.weld.introspector.jlr.WeldMethodImpl.invoke(WeldMethodImpl.java:174) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    at org.jboss.weld.bean.AbstractClassBean.defaultPostConstruct(AbstractClassBean.java:516) [weld-core-1.1.5.AS71.Final.jar:2012-02-10 15:31]
    ... 43 more
Caused by: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [select * from getvdsgroupbyvdsgroupid(?, ?, ?)]; nested exception is org.postgresql.util.PSQLException: The column name gluster_tuned_profile was not found in this ResultSet.
    at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:98) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:72) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:80) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:603) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:637) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:666) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:706) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.executeCallInternal(PostgresDbEngineDialect.java:154) [dal.jar:]
    at org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.doExecute(PostgresDbEngineDialect.java:120) [dal.jar:]
    at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:181) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:147) [dal.jar:]
    at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:109) [dal.jar:]
    at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeRead(SimpleJdbcCallsHandler.java:101) [dal.jar:]
    at org.ovirt.engine.core.dao.VdsGroupDAODbFacadeImpl.get(VdsGroupDAODbFacadeImpl.java:52) [dal.jar:]
    at org.ovirt.engine.core.dao.VdsGroupDAODbFacadeImpl.get(VdsGroupDAODbFacadeImpl.java:44) [dal.jar:]
    at org.ovirt.engine.core.vdsbroker.MonitoringStrategyFactory.getMonitoringStrategyForVds(MonitoringStrategyFactory.java:30) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.VdsManager.<init>(VdsManager.java:96) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.ResourceManager.AddVds(ResourceManager.java:229) [vdsbroker.jar:]
    at org.ovirt.engine.core.vdsbroker.ResourceManager.init(ResourceManager.java:160) [vdsbroker.jar:]
    ... 53 more
Caused by: org.postgresql.util.PSQLException: The column name gluster_tuned_profile was not found in this ResultSet.
    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.findColumn(AbstractJdbc2ResultSet.java:2542)
    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getString(AbstractJdbc2ResultSet.java:2385)
    at org.jboss.jca.adapters.jdbc.WrappedResultSet.getString(WrappedResultSet.java:1381)
    at org.ovirt.engine.core.dao.VdsGroupDAODbFacadeImpl$VdsGroupRowMapper.mapRow(VdsGroupDAODbFacadeImpl.java:306) [dal.jar:]
    at org.ovirt.engine.core.dao.VdsGroupDAODbFacadeImpl$VdsGroupRowMapper.mapRow(VdsGroupDAODbFacadeImpl.java:256) [dal.jar:]
    at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:92) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:1) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(JdbcTemplate.java:649) [spring-jdbc.jar:3.1.1.RELEASE]
    at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:587) [spring-jdbc.jar:3.1.1.RELEASE]
    ... 68 more
Any hint?
[ANN] oVirt 3.5.2 Final Release is now available
by Sandro Bonazzola
The oVirt team is pleased to announce that the oVirt 3.5.2 Final Release is now available as of April 28th 2015.
oVirt is an open source alternative to VMware vSphere, and provides an excellent KVM management interface for multi-node virtualization.
oVirt is available now for Fedora 20,
Red Hat Enterprise Linux 6.6, CentOS 6.6 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS 7.1 (or similar).
This release of oVirt includes numerous bug fixes. See the release notes [1] for a list of the new features and bugs fixed.
Please refer to release notes [1] for Installation / Upgrade instructions.
A new oVirt Live and oVirt Node ISO will be available soon as well [2].
Please note that mirrors [3] usually need one day to synchronize.
Please refer to the release notes for known issues in this release.
[1] http://www.ovirt.org/OVirt_3.5.2_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.5/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com