500 - Internal Server Error after engine-setup oVirt 4.4 on CentOS Stream 8
by Alessio B.
Dear community,
after a fresh installation of CentOS Stream 8 and oVirt 4.4, the web portal returns a 500 error.
This is the command history:
3 yum check-update
4 dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
5 dnf module -y enable javapackages-tools
6 dnf module -y enable pki-deps
7 dnf module -y enable postgresql:12
8 dnf module -y enable mod_auth_openidc:2.3
9 dnf distro-sync --nobest
10 reboot
11 yum install langpacks-en.noarch
12 localectl set-locale LANG=en_US.UTF-8
13 dnf upgrade --nobest
14 dnf install ovirt-engine
15 engine-setup
16 systemctl restart ovirt-engine
This is the error in /var/log/ovirt-engine/server.log after running engine-setup:
2022-05-16 12:33:42,931+02 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 56) MSC000001: Failed to start service jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: org.jboss.msc.service.StartException in service jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.ComponentStartService$1.run(ComponentStartService.java:57)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.jboss.threads@2.4.0.Final//org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1990)
at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
at java.base/java.lang.Thread.run(Thread.java:829)
at org.jboss.threads@2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:163)
at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:134)
at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:88)
at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.singleton.SingletonComponent.getComponentInstance(SingletonComponent.java:127)
at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.singleton.SingletonComponent.start(SingletonComponent.java:141)
at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.ComponentStartService$1.run(ComponentStartService.java:54)
... 8 more
Caused by: javax.ejb.EJBException: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@2d479b8
at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:239)
at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:446)
at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.LifecycleCMTTxInterceptor.processInvocation(LifecycleCMTTxInterceptor.java:70)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.weld@23.0.2.Final//org.jboss.as.weld.injection.WeldInjectionContextInterceptor.processInvocation(WeldInjectionContextInterceptor.java:43)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.singleton.StartupCountDownInterceptor.processInvocation(StartupCountDownInterceptor.java:25)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:161)
... 13 more
Caused by: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@2d479b8
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:85)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.postConstruct(DefaultLifecycleCallbackInvoker.java:66)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.BasicInjectionTarget.postConstruct(BasicInjectionTarget.java:122)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.bean.ManagedBean.create(ManagedBean.java:174)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.contexts.AbstractContext.get(AbstractContext.java:96)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:100)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.bean.ContextualInstanceStrategy$ApplicationScopedContextualInstanceStrategy.get(ContextualInstanceStrategy.java:140)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:694)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.manager.BeanManagerImpl.getInjectableReference(BeanManagerImpl.java:794)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.FieldInjectionPoint.inject(FieldInjectionPoint.java:92)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.util.Beans.injectBoundFields(Beans.java:336)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.util.Beans.injectFieldsAndInitializers(Beans.java:347)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.DefaultInjector$1.proceed(DefaultInjector.java:71)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.InjectionContextImpl.run(InjectionContextImpl.java:48)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.DefaultInjector.inject(DefaultInjector.java:73)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.module.ejb.DynamicInjectionPointInjector.inject(DynamicInjectionPointInjector.java:61)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.module.ejb.SessionBeanInjectionTarget.inject(SessionBeanInjectionTarget.java:138)
at org.jboss.as.weld@23.0.2.Final//org.jboss.as.weld.injection.WeldInjectionContext.inject(WeldInjectionContext.java:39)
at org.jboss.as.weld@23.0.2.Final//org.jboss.as.weld.injection.WeldInjectionInterceptor.processInvocation(WeldInjectionInterceptor.java:51)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.AroundConstructInterceptorFactory$1.processInvocation(AroundConstructInterceptorFactory.java:28)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.weld@23.0.2.Final//org.jboss.as.weld.injection.WeldInterceptorInjectionInterceptor.processInvocation(WeldInterceptorInjectionInterceptor.java:56)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.ComponentInstantiatorInterceptor.processInvocation(ComponentInstantiatorInterceptor.java:74)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.weld@23.0.2.Final//org.jboss.as.weld.interceptors.Jsr299BindingsCreateInterceptor.processInvocation(Jsr299BindingsCreateInterceptor.java:111)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:232)
... 28 more
Caused by: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:83)
... 59 more
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Unable to determine the correct call signature - no procedure/function/signature for 'gettagsbyparent_id'
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.GenericCallMetaDataProvider.processProcedureColumns(GenericCallMetaDataProvider.java:362)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.GenericCallMetaDataProvider.initializeWithProcedureColumnMetaData(GenericCallMetaDataProvider.java:114)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory.lambda$createMetaDataProvider$0(CallMetaDataProviderFactory.java:127)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:324)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory.createMetaDataProvider(CallMetaDataProviderFactory.java:70)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataContext.initializeMetaData(CallMetaDataContext.java:252)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.simple.AbstractJdbcCall.compileInternal(AbstractJdbcCall.java:313)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.compileInternal(PostgresDbEngineDialect.java:106)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.simple.AbstractJdbcCall.compile(AbstractJdbcCall.java:296)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.getCall(SimpleJdbcCallsHandler.java:157)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:134)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:105)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dao.TagDaoImpl.getAllForParent(TagDaoImpl.java:82)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.addChildren(TagsDirector.java:116)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.init(TagsDirector.java:75)
... 64 more
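The bottom of the trace shows Spring failing to resolve a call signature for the 'gettagsbyparent_id' database function. Two quick checks that may narrow this down (a sketch assuming the default engine database name and local postgres access; adjust to your setup):
# confirm the function exists in the engine database
$ sudo -u postgres psql engine -c '\df gettagsbyparent_id'
# check which JDBC driver build the engine is using
$ rpm -q postgresql-jdbc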
Thank you very much for the help!
upgrade python3
by ariel.fridman@ibm.com
Hello
on the oVirt management engine, based on CentOS Stream 8:
Python 3.6.8 is end of life; this is a security vulnerability,
but yum update does not find the new 3.10 version.
Please advise how to upgrade it manually.
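A minimal sketch of one common approach on CentOS Stream 8, assuming the stock AppStream repositories: the engine stack still depends on the distribution's Python 3.6, so a newer interpreter is normally installed in parallel rather than as a replacement, for example:
$ dnf module list python39
$ dnf install python39
$ python3.9 --version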
thanks Ariel
Deployment suddenly fails at engine check
by Harry O
Hi,
After the new update, my deployment fails at the engine health check.
What can I do to debug?
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Check if Engine health page is up]
[ ERROR ] fatal: [localhost -> 192.168.222.12]: FAILED! => {"attempts": 30, "changed": false, "connection": "close", "content": "<html><head><title>Error</title></head><body>500 - Internal Server Error</body></html>", "content_encoding": "identity", "content_length": "86", "content_type": "text/html; charset=UTF-8", "date": "Fri, 22 Apr 2022 16:02:04 GMT", "elapsed": 0, "msg": "Status code was 500 and not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "server": "Apache/2.4.37 (centos) OpenSSL/1.1.1k mod_auth_gssapi/1.6.1 mod_wsgi/4.6.4 Python/3.6", "status": 500, "url": "http://localhost/ovirt-engine/services/health"}
[ INFO ] TASK [ovirt.ovirt.engine_setup : Clean temporary files]
[ INFO ] changed: [localhost -> 192.168.222.12]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost -> 192.168.222.12]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "There was a failure deploying the engine on the local engine VM. The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
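The failing task is only the health probe; the underlying error will be on the engine VM itself. A few checks that may help, assuming you can still SSH into the local engine VM (192.168.222.12 in the log above):
# repeat the check the playbook performs
$ curl -s http://localhost/ovirt-engine/services/health
# then look for the real failure in the engine logs
$ journalctl -u ovirt-engine -e
$ less /var/log/ovirt-engine/server.log /var/log/ovirt-engine/engine.log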
Unable to import ovirt vm ova into aws?
by rickey john
I am trying to import an Ubuntu 18 oVirt VM OVA template.
For this I am creating an import task with the commands below:
aws ec2 import-image --region ap-south-1 --description "Ovirt VM" --license-type BYOL --disk-containers "file://containers.json"
aws ec2 describe-import-image-tasks --region ap-south-1 --import-task-ids import-ami-0755c8cd52d08ac88
But unfortunately it is failing with the error "StatusMessage": "ClientError: No valid partitions. Not a valid volume."
Can someone please guide me through the steps to export an oVirt VM OVA and import it into an AWS EC2 instance?
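For reference, a minimal containers.json sketch for aws ec2 import-image, assuming the OVA has already been uploaded to an S3 bucket (the bucket name and key below are placeholders):
$ cat > containers.json <<'EOF'
[
  {
    "Description": "Ovirt VM",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "my-import-bucket",
      "S3Key": "ubuntu18-vm.ova"
    }
  }
]
EOF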
Upload Certificate issue
by louisb@ameritech.net
I'm trying to upload an ISO image in oVirt 4.4.10, and it's been a huge challenge to accomplish this. I've read several posts regarding this issue, but I really don't have a clear understanding of the solution; my experience has not been very fruitful at all.
When I try to perform the upload using the web GUI I get the following message in the status column: "Paused by System". I've been reading for roughly three weeks trying to understand and resolve the issue. There is a tremendous amount of discussion centered around changing certificate files located in the directory "/etc/pki/ovirt-engine", however it is not clear at all which files need to change.
My installation is an out-of-box installation, with the certificates generated as part of the install process, and I've imported the certificate that was generated into my browser (Firefox 91.9.0). Based on what I've been reading, the solution to my problem is that the certificate does not match the certificate defined in the "imageio" service; my question is why, since it was generated as part of the installation?
Which files in "/etc/pki/ovirt-engine" must be changed to get things working? And should I copy the certificate saved from the GUI to files under the "/etc/pki/ovirt-engine" directory?
I feel like I'm so close after six months of reading and re-installs; what do I do next?
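One way to see whether the browser and the imageio service disagree about the CA is to fetch the engine CA and test the imageio endpoint against it (a sketch assuming the default port 54323 on the engine; replace ENGINE_FQDN):
$ curl -k -o ca.pem 'https://ENGINE_FQDN/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'
$ openssl s_client -connect ENGINE_FQDN:54323 -CAfile ca.pem </dev/null | grep 'Verify return code'
If the verify return code is 0, importing that same ca.pem into Firefox as a trusted authority is usually all that is needed; if it is not 0, the imageio service is presenting a different certificate.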
Thanks
[IMPORTANT] Upgrade to postgresql-jdbc-42.2.14-1 breaks oVirt Engine 4.4/4.5
by Martin Perina
Hi,
Unfortunately, we have just found that the latest release,
postgresql-jdbc-42.2.14-1, breaks existing oVirt Engine 4.4 and 4.5
installations running on CentOS Stream.
The workaround is to downgrade to a previous version; for example,
postgresql-jdbc-42.2.3-3 should work fine.
Here are detailed instructions:
1. If you have already upgraded to postgresql-jdbc-42.2.14-1, please
downgrade to previous version:
$ dnf downgrade postgresql-jdbc
$ systemctl restart ovirt-engine
2. If you are going to upgrade your oVirt Engine machine, please exclude
postgresql-jdbc package from upgrades:
$ dnf update -x postgresql-jdbc
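If the machine is updated regularly, it may also help to make the exclusion persistent until a fixed build is released (a sketch assuming the default /etc/dnf/dnf.conf; remove the line again once the fix lands):
$ echo "exclude=postgresql-jdbc" >> /etc/dnf/dnf.conf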
We have created https://bugzilla.redhat.com/2077794 to track this issue,
but unfortunately we don't have a fix yet.
Regards,
Martin
--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.
oVirt Node 4.5.0.2 Async update
by Sandro Bonazzola
oVirt Node 4.5.0.2 Async update
On May 13th 2022 the oVirt project released an async update of oVirt Node
(4.5.0.2) delivering important-impact security fixes, several bug fixes and
enhancements.
The update is already available on resources.ovirt.org and should land on
oVirt mirrors within 24 hours.
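A quick sketch of how to confirm the currently installed image and pull the update on an existing oVirt Node host (assuming a standard oVirt Node NG installation):
$ nodectl info
$ dnf update ovirt-node-ng-image-update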
Security fixes included in oVirt Node NG 4.5.0.2 Async compared to oVirt 4.5.0.1 Async:
- CVE-2022-1271 <https://bugzilla.redhat.com/show_bug.cgi?id=2073310> - important - gzip: arbitrary-file-write vulnerability
oVirt Node has been updated, including:
- CentOS Stream 8 latest updates
- Full list of changes compared to oVirt Node 4.5.0.1 (package: ovirt-node-ng-image-4.5.0.1 -> ovirt-node-ng-image-4.5.0.2):
NetworkManager: 1.39.0-1.el8 -> 1.39.2-2.el8
NetworkManager-config-server: 1.39.0-1.el8 -> 1.39.2-2.el8
NetworkManager-libnm: 1.39.0-1.el8 -> 1.39.2-2.el8
NetworkManager-ovs: 1.39.0-1.el8 -> 1.39.2-2.el8
NetworkManager-team: 1.39.0-1.el8 -> 1.39.2-2.el8
NetworkManager-tui: 1.39.0-1.el8 -> 1.39.2-2.el8
centos-release-ovirt45: 8.6-4.el8s -> 8.7-1.el8s
fence-agents-all: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-amt-ws: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-apc: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-apc-snmp: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-bladecenter: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-brocade: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-cisco-mds: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-cisco-ucs: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-common: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-compute: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-drac5: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-eaton-snmp: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-emerson: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-eps: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-heuristics-ping: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-hpblade: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-ibmblade: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-ifmib: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-ilo-moonshot: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-ilo-mp: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-ilo-ssh: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-ilo2: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-intelmodular: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-ipdu: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-ipmilan: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-kdump: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-mpath: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-redfish: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-rhevm: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-rsa: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-rsb: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-sbd: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-scsi: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-vmware-rest: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-vmware-soap: 4.2.1-89.el8 -> 4.2.1-93.el8
fence-agents-wti: 4.2.1-89.el8 -> 4.2.1-93.el8
gdisk: 1.0.3-9.el8 -> 1.0.3-11.el8
glib2: 2.56.4-158.el8 -> 2.56.4-159.el8
glibc: 2.28-197.el8 -> 2.28-199.el8
glibc-common: 2.28-197.el8 -> 2.28-199.el8
glibc-langpack-en: 2.28-197.el8 -> 2.28-199.el8
gluster-ansible-cluster: 1.0-4.el8 -> 1.0-5.el8
gluster-ansible-features: 1.0.5-12.el8 -> 1.0.5-13.el8
gluster-ansible-infra: 1.0.4-20.el8 -> 1.0.4-21.el8
gluster-ansible-roles: 1.0.5-26.el8 -> 1.0.5-27.el8
gzip: 1.9-12.el8 -> 1.9-13.el8
libgcc: 8.5.0-12.el8 -> 8.5.0-13.el8
libgomp: 8.5.0-12.el8 -> 8.5.0-13.el8
libguestfs: 1.44.0-5.module_el8.6.0+1087+b42c8331 -> 1.44.0-6.module_el8.7.0+1140+ff0772f9
libguestfs-appliance: 1.44.0-5.module_el8.6.0+1087+b42c8331 -> 1.44.0-6.module_el8.7.0+1140+ff0772f9
libguestfs-tools-c: 1.44.0-5.module_el8.6.0+1087+b42c8331 -> 1.44.0-6.module_el8.7.0+1140+ff0772f9
libstdc++: 8.5.0-12.el8 -> 8.5.0-13.el8
libvirt: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-client: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-config-network: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-config-nwfilter: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-interface: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-network: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-nodedev: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-nwfilter: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-qemu: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-secret: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-storage: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-storage-core: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-storage-disk: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-storage-gluster: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-storage-iscsi: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-storage-iscsi-direct: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-storage-logical: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-storage-mpath: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-storage-rbd: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-driver-storage-scsi: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-daemon-kvm: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-libs: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libvirt-lock-sanlock: 8.0.0-2.module_el8.6.0+1087+b42c8331 -> 8.0.0-6.module_el8.7.0+1140+ff0772f9
libwbclient: 4.15.5-5.el8 -> 4.15.5-8.el8
mokutil: 0.3.0-11.el8 -> 0.3.0-12.el8
ovirt-node-ng-image-update-placeholder: 4.5.0.1-1.el8 -> 4.5.0.2-1.el8
ovirt-release-host-node: 4.5.0.1-1.el8 -> 4.5.0.2-1.el8
python3-dns: 1.15.0-10.el8 -> 1.15.0-11.el8
python3-slip: 0.6.4-11.el8 -> 0.6.4-13.el8
python3-slip-dbus: 0.6.4-11.el8 -> 0.6.4-13.el8
qemu-guest-agent: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-img: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm-block-curl: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm-block-gluster: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm-block-iscsi: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm-block-rbd: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm-block-ssh: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm-common: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm-core: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm-docs: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm-hw-usbredir: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm-ui-opengl: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-kvm-ui-spice: 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
samba-client-libs: 4.15.5-5.el8 -> 4.15.5-8.el8
samba-common: 4.15.5-5.el8 -> 4.15.5-8.el8
samba-common-libs: 4.15.5-5.el8 -> 4.15.5-8.el8
seabios-bin: 1.15.0-1.module_el8.6.0+1087+b42c8331 -> 1.16.0-1.module_el8.7.0+1140+ff0772f9
seavgabios-bin: 1.15.0-1.module_el8.6.0+1087+b42c8331 -> 1.16.0-1.module_el8.7.0+1140+ff0772f9
selinux-policy: 3.14.3-96.el8 -> 3.14.3-97.el8
selinux-policy-targeted: 3.14.3-96.el8 -> 3.14.3-97.el8
supermin: 5.2.1-1.module_el8.6.0+983+a7505f3f -> 5.2.1-2.module_el8.7.0+1140+ff0772f9
virt-v2v: 1.42.0-18.module_el8.6.0+1046+bd8eec5e -> 1.42.0-19.module_el8.7.0+1140+ff0772f9
xmlrpc-c: 1.51.0-5.el8 -> 1.51.0-8.el8
xmlrpc-c-client: 1.51.0-5.el8 -> 1.51.0-8.el8
yajl: 2.1.0-10.el8 -> 2.1.0-11.el8
Additional resources:
- Read more about the oVirt 4.5.0 release highlights: https://www.ovirt.org/release/4.5.0/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog: https://blogs.ovirt.org/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
Problem patching & upgrading a RHEL oVirt host
by David White
Hello,
I followed some instructions I found in https://www.ovirt.org/documentation/upgrade_guide/ and https://www.ovi... and did the following:
883  subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms
884  subscription-manager repos --enable rhel-8-for-x86_64-appstream-rpms
885  subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
886  rpm -i --justdb --nodeps --force "http://mirror.centos.org/centos/8-stream/BaseOS/$(rpm --eval '%_arch')/os/Packages/centos-stream-release-8.6-1.el8.noarch.rpm"
887  cat >/etc/yum.repos.d/CentOS-Stream-Extras.repo <<'EOF'
888  [cs8-extras]
889  name=CentOS Stream $releasever - Extras
890  mirrorlist=http://mirrorlist.centos.org/?release=8-stream&arch=$basearch&repo=extras&infra=$infra
891  #baseurl=http://mirror.centos.org/$contentdir/8-stream/extras/$basearch/os/
892  gpgcheck=1
893  enabled=1
894  gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-Official
895  EOF
896  cat >/etc/yum.repos.d/CentOS-Stream-Extras-common.repo <<'EOF'
897  [cs8-extras-common]
898  name=CentOS Stream $releasever - Extras common packages
899  mirrorlist=http://mirrorlist.centos.org/?release=8-stream&arch=$basearch&repo=extras-extras-common
900  #baseurl=http://mirror.centos.org/$contentdir/8-stream/extras/$basearch/extras-common/
901  gpgcheck=1
902  enabled=1
903  gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Extras
904  EOF
905  echo "8-stream" > /etc/yum/vars/stream
906  dnf distro-sync --nobest
907  reboot
908  dnf install centos-release-ovirt45
909  dnf install centos-release-ovirt45 --enablerepo=extras
But now, yum update isn't working because it's trying to install centos-stream-release-8.6-1 over redhat-release-8.6.
Surely I shouldn't install the CentOS Stream release package over the RHEL release package, should I?
See below:
[root@phys1 dwhite]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.5 (Ootpa)
[root@phys1 dwhite]# yum update
Updating Subscription Management repositories.
Last metadata expiration check: 0:02:01 ago on Thu 12 May 2022 05:59:38 AM EDT.
Error:
 Problem: installed package centos-stream-release-8.6-1.el8.noarch obsoletes redhat-release < 9 provided by redhat-release-8.6-0.1.el8.x86_64
  - cannot install the best update candidate for package redhat-release-8.5-0.8.el8.x86_64
  - problem with installed package centos-stream-release-8.6-1.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
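Since centos-stream-release was only injected into the RPM database (--justdb), one possible way to back it out is the same trick in reverse; this is a sketch only, with no dependency handling, so review before running:
$ rpm -e --justdb --nodeps centos-stream-release
$ dnf update redhat-release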
Host cannot connect to storage domains
by suporte@logicworks.pt
After upgrading to 4.5, the host cannot be activated because it cannot connect to the data domain.
I have an NFS data domain (master) and a GlusterFS one. It complains about the Gluster domain:
The error message for connection node1-teste.acloud.pt:/data1 returned by VDSM was: XML error
# rpm -qa|grep glusterfs*
glusterfs-10.1-1.el8s.x86_64
glusterfs-selinux-2.0.1-1.el8s.noarch
glusterfs-client-xlators-10.1-1.el8s.x86_64
glusterfs-events-10.1-1.el8s.x86_64
libglusterfs0-10.1-1.el8s.x86_64
glusterfs-fuse-10.1-1.el8s.x86_64
glusterfs-server-10.1-1.el8s.x86_64
glusterfs-cli-10.1-1.el8s.x86_64
glusterfs-geo-replication-10.1-1.el8s.x86_64
engine log:
2022-04-27 13:35:16,118+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-66) [ebe79c6] EVENT_ID: VDS_STORAGES_CONNECTION_FAILED(188), Failed to connect Host NODE1 to the Storage Domains DATA1.
2022-04-27 13:35:16,169+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-66) [ebe79c6] EVENT_ID: STORAGE_DOMAIN_ERROR(996), The error message for connection node1-teste.acloud.pt:/data1 returned by VDSM was: XML error
2022-04-27 13:35:16,170+01 ERROR [org.ovirt.engine.core.bll.storage.connection.FileStorageHelper] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-66) [ebe79c6] The connection with details 'node1-teste.acloud.pt:/data1' failed because of error code '4106' and error message is: xml error
vdsm log:
2022-04-27 13:40:07,125+0100 ERROR (jsonrpc/4) [storage.storageServer] Could not connect to storage server (storageServer:92)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 90, in connect_all
con.connect()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 233, in connect
self.validate()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 365, in validate
if not self.volinfo:
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 352, in volinfo
self._volinfo = self._get_gluster_volinfo()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 405, in _get_gluster_volinfo
self._volfileserver)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeInfo
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
raise convert_to_error(kind, result)
vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() err=[b'<cliOutput>\n <opRet>0</opRet>\n <opErrno>0</opErrno>\n <opErrstr />\n <volInfo>\n <volumes>\
n <volume>\n <name>data1</name>\n <id>d7eb2c38-2707-4774-9873-a7303d024669</id>\n <status>1</status>\n <statusStr>Started</statusStr>\n <sn
apshotCount>0</snapshotCount>\n <brickCount>2</brickCount>\n <distCount>2</distCount>\n <replicaCount>1</replicaCount>\n <arbiterCount>0</arbiterCount>
\n <disperseCount>0</disperseCount>\n <redundancyCount>0</redundancyCount>\n <type>0</type>\n <typeStr>Distribute</typeStr>\n <transport>0</tran
sport>\n <bricks>\n <brick uuid="08c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b">node1-teste.acloud.pt:/home/brick1<name>node1-teste.acloud.pt:/home/brick1</name><hostUuid>0
8c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b</hostUuid><isArbiter>0</isArbiter></brick>\n <brick uuid="08c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b">node1-teste.acloud.pt:/brick2<name>nod
e1-teste.acloud.pt:/brick2</name><hostUuid>08c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b</hostUuid><isArbiter>0</isArbiter></brick>\n </bricks>\n <optCount>23</optCount>\n
<options>\n <option>\n <name>nfs.disable</name>\n <value>on</value>\n </option>\n <option>\n <name>transport.addre
ss-family</name>\n <value>inet</value>\n </option>\n <option>\n <name>storage.fips-mode-rchecksum</name>\n <value>on</value>\n
</option>\n <option>\n <name>storage.owner-uid</name>\n <value>36</value>\n </option>\n <option>\n <name>storag
e.owner-gid</name>\n <value>36</value>\n </option>\n <option>\n <name>cluster.min-free-disk</name>\n <value>5%</value>\n
</option>\n <option>\n <name>performance.quick-read</name>\n <value>off</value>\n </option>\n <option>\n <name>perfor
mance.read-ahead</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.io-cache</name>\n <value>off</value>\n
</option>\n <option>\n <name>performance.low-prio-threads</name>\n <value>32</value>\n </option>\n <option>\n <
name>network.remote-dio</name>\n <value>enable</value>\n </option>\n <option>\n <name>cluster.eager-lock</name>\n <value>enable<
/value>\n </option>\n <option>\n <name>cluster.quorum-type</name>\n <value>auto</value>\n </option>\n <option>\n
<name>cluster.server-quorum-type</name>\n <value>server</value>\n </option>\n <option>\n <name>cluster.data-self-heal-algorithm</name>\n
<value>full</value>\n </option>\n <option>\n <name>cluster.locking-scheme</name>\n <value>granular</value>\n </option>
\n <option>\n <name>cluster.shd-wait-qlength</name>\n <value>10000</value>\n </option>\n <option>\n <name>features.shar
d</name>\n <value>off</value>\n </option>\n <option>\n <name>user.cifs</name>\n <value>off</value>\n </option>\n
<option>\n <name>cluster.choose-local</name>\n <value>off</value>\n </option>\n <option>\n <name>client.event-threads</name>\
n <value>4</value>\n </option>\n <option>\n <name>server.event-threads</name>\n <value>4</value>\n </option>\n
<option>\n <name>performance.client-io-threads</name>\n <value>on</value>\n </option>\n </options>\n </volume>\n <count>1</count>\
n </volumes>\n </volInfo>\n</cliOutput>']
2022-04-27 13:40:07,125+0100 INFO (jsonrpc/4) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-27 13:40:07,125+0100 INFO (jsonrpc/4) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': 'dede3145-651a-4b01-b8d2-82bff8670696', 'status': 4106}]} from=
::ffff:192.168.5.165,42132, flow_id=4c170005, task_id=cec6f36f-46a4-462c-9d0a-feb8d814b465 (api:54)
2022-04-27 13:40:07,410+0100 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:07,411+0100 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:192.168.5.1
65,42132 (api:54)
2022-04-27 13:40:07,785+0100 INFO (jsonrpc/7) [api.host] START getStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.5.165,42132, task_id=4fa4e8c4-7c65-499a-827e-8ae153aa875e (api:48)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] FINISH repoStats return={} from=::ffff:192.168.5.165,42132, task_id=4fa4e8c4-7c65-499a-827e-8ae153aa875e (api:54)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] START multipath_health() from=::ffff:192.168.5.165,42132, task_id=c6390f2a-845b-420b-a833-475605a24078 (api:48)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.5.165,42132, task_id=c6390f2a-845b-420b-a833-475605a24078 (api:54)
2022-04-27 13:40:07,802+0100 INFO (jsonrpc/7) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:192.168.5.165,42132 (
api:54)
2022-04-27 13:40:11,980+0100 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::1,37040 (api:48)
2022-04-27 13:40:11,980+0100 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::1,37040 (api:54)
2022-04-27 13:40:12,365+0100 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=f5084096-e5c5-4ca8-9c47-a92fa5790484 (api:48)
2022-04-27 13:40:12,365+0100 INFO (periodic/2) [vdsm.api] FINISH repoStats return={} from=internal, task_id=f5084096-e5c5-4ca8-9c47-a92fa5790484 (api:54)
2022-04-27 13:40:22,417+0100 INFO (jsonrpc/0) [api.host] START getAllVmStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:22,417+0100 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:192.168.5.1
65,42132 (api:54)
2022-04-27 13:40:22,805+0100 INFO (jsonrpc/1) [api.host] START getStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.5.165,42132, task_id=a9fb939c-ea1a-4116-a22f-d14a99e6eada (api:48)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] FINISH repoStats return={} from=::ffff:192.168.5.165,42132, task_id=a9fb939c-ea1a-4116-a22f-d14a99e6eada (api:54)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] START multipath_health() from=::ffff:192.168.5.165,42132, task_id=5eee2f63-2631-446a-98dd-4947f9499f8f (api:48)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.5.165,42132, task_id=5eee2f63-2631-446a-98dd-4947f9499f8f (api:54)
2022-04-27 13:40:22,822+0100 INFO (jsonrpc/1) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:192.168.5.165,42132 (
api:54)
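The traceback shows vdsm failing while parsing the gluster CLI's XML volume info. To compare against what the CLI actually returns on this host (a quick check, assuming the gluster 10.1 CLI listed above):
$ gluster volume info data1 --xml
# and check the vdsm gluster bits for a possible version mismatch
$ rpm -qa | grep -E 'vdsm-gluster|glusterfs-cli'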
--
Jose Ferradeira
http://www.logicworks.pt