Upgrade CentOS 7 oVirt host to CentOS 8 Stream from backup file - Python error
by Matyi Szabolcs
Hi Support,
Here is my problem.
I am trying to upgrade a CentOS 7 oVirt host to CentOS 8 Stream using an oVirt engine backup.
https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3
I run the restore as follows:
hosted-engine --deploy --restore-from-file=/path/to/backup
When the ansible-playbook run is almost finished, it fails with the following error:
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Initialize lockspace volume]
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 5, "changed": true, "cmd": ["hosted-engine", "--reinitialize-lockspace", "--force"], "delta": "0:00:00.227640", "end": "2022-02-08 08:22:54.615461", "msg": "non-zero return code", "rc": 1, "start": "2022-02-08 08:22:54.387821", "stdout": "", "stdout_lines": []}
with stderr:
Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/reinitialize_lockspace.py", line 30, in <module>
    ha_cli.reset_lockspace(force)
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 286, in reset_lockspace
    stats = broker.get_stats_from_storage()
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 148, in get_stats_from_storage
    result = self._proxy.get_stats()
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
    verbose=self.__verbose
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python3.6/http/client.py", line 1268, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1044, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 982, in send
    self.connect()
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 76, in connect
    self.sock.connect(base64.b16decode(self.host))
FileNotFoundError: [Errno 2] No such file or directory
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
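Reading the traceback, the failure is the HA client trying to connect to a local unix socket (unixrpc.py b16-decodes self.host into a socket path), so it looks like ovirt-ha-broker isn't serving its socket. A minimal check I can run on the host (the socket path is my assumption, the stock one):
# systemctl status ovirt-ha-broker
# ls -l /var/run/ovirt-hosted-engine-ha/broker.socket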
What is the problem?
Thanks!
3 years, 2 months
Deploy ovirt-csi in the kubernetes cluster
by ssarang520@gmail.com
Hi,
I want to deploy ovirt-csi in a Kubernetes cluster, but the guide only covers deploying it on OpenShift.
How can I deploy ovirt-csi in a plain Kubernetes cluster? Is there any way to do that?
3 years, 2 months
Fibre Channel storage issue
by Timothy J. Wielgos
Thanks everyone for helping me through the last issue I had. Once I changed bonding mode to 2, everything worked.
However, now I have another issue.
I'm trying to install this using fibre channel storage. I set up a new lun from my storage array, assigned it to my host, and configured multipathing. All looks good. Then, when I ran the install script, I got the following error:
[ ERROR ] ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Physical device initialization failed. Please check that the device is empty and accessible by the host.]". HTTP response code is 400.[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Physical device initialization failed. Please check that the device is empty and accessible by the host.]\". HTTP response code is 400."}
Google says that's because the LUN is dirty - maybe reused. I tried the suggested fixes: I used dd to write zeroes over the LUN, I used wipefs on it, and I even deleted the LUN from the array and created a new one. No matter what I did with that LUN, it still shows up with this error.
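For reference, the cleanup I attempted was along these lines (the multipath device name is a placeholder):
# dd if=/dev/zero of=/dev/mapper/<lun_wwid> bs=1M count=100 oflag=direct   # zero the start of the LUN
# wipefs -a /dev/mapper/<lun_wwid>                                         # clear filesystem/LVM signatures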
I then created a second LUN, mapped it to the host, and attempted to use that LUN for storage - and it worked!
There must be a dirty config somewhere on this host that I need to clean up from that first LUN.
Does anybody know what I might have to clean up on this host to clear out that old config?
3 years, 2 months
OVIRT-Node Engine Deployment Failure due to no URLs in mirrorlist
by aellahib@gmail.com
I am running oVirt Node, and although I updated my mirror list and the hosts themselves are fine, the engine deployment (deploying hyperconverged Gluster) seems to be using a different set of mirrors during engine VM preparation.
This is the error I see. Should I be running a newer RPM on these nodes, or something? I am not aware of a newer one I should be using.
[ INFO ] TASK [ovirt.ovirt.engine_setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> 192.168.222.193]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'ovirt-4.4-centos-gluster8': Cannot prepare internal mirrorlist: No URLs in mirrorlist", "rc": 1, "results": []}
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost -> 192.168.222.193]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "There was a failure deploying the engine on the local engine VM. The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
It does let me click Prepare VM despite the failed deployment - but by that point /var/tmp is more than 67% full, so the engine install fails for lack of space even though the minimum requirements were met.
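For what it's worth, my searching suggests "No URLs in mirrorlist" is what dnf reports when the mirrorlist service no longer serves that release (as happened when CentOS Linux 8 went EOL), and the usual workaround posted is to repoint the repo files inside the engine VM at the vault - a sketch, since the exact repo files on the appliance may differ:
# sed -i -e 's|^mirrorlist=|#mirrorlist=|' -e 's|^#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|' /etc/yum.repos.d/CentOS-*.repo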
3 years, 2 months
What replaces ISO domains how to re-use ISO files on multiple Data Centers?
by Brian Levinsen
Hello.
I have been searching but have not been able to find a solution.
As far as I can tell, the ISO storage domain type is deprecated.
We have used it as one central location for hosting ISO images.
Everywhere I look, the advice is to use a Data Domain instead.
But a Data Domain cannot be attached to multiple Data Centers, can it?
So, in newer versions of oVirt, how do we share ISO files across multiple Data Centers?
/Brian
3 years, 2 months
Alternative Perspective - Re: oVirt alternatives
by David White
At the risk of sounding like a Red Hat or IBM fanboy, I have decided to give Red Hat the benefit of the doubt here, and to not make any decisions about switching off of oVirt until and unless an official announcement is made.
In the meantime, I know that I need to move off of Gluster (and I made that decision before the Gluster announcement), and I would need storage with any other solution anyway, so that's where I'm going to focus my own efforts.
In the meantime, while I realize that the optics of a company like IBM / Red Hat shutting down a project like oVirt look bad to the FOSS community, I'm going to push back a little bit. We have had access to a FOSS application that obviously works for a lot of people. No company is required to provide its services for free, and likewise, I'm of the opinion that one needs to be willing to pay (or contribute in some way) for a quality product or service. It reminds me of the mantra: "Good, fast, cheap - pick two".
So here's an alternative perspective: What can the community contribute and do in order to keep the project going? Anyone could fork it, rebrand it, and run with it.
I can't claim to be a software developer, and the uplink in my datacenter is only 100Mbps right now (of course I can increase it when needed), so I doubt I could provide much value in terms of hosting or coding.
But I do know security. I'm a Linux systems engineer with over 10 years of experience. I know website content management systems. And people have told me that I'm good at documentation. So I think I have a lot of skill sets that I could "offer" (albeit I don't have much time, and as we all know, time is money. I've been dealing with a serious personal matter since beginning of December, and I'm effectively an acting single parent at the moment).
I'll end this the way I started: I'm going to wait to see what happens before I personally make any decisions to change my entire underlying virtualization infrastructure. In the meantime, I'll continue to work on what I can control - the underlying storage. And if oVirt does shut down in the future, I'd love to have a conversation with anyone interested in helping fork the project and keep it running.
Sent with ProtonMail Secure Email.
3 years, 2 months
Importing export storage domain after redeploy failing, stating export domain still connected.
by Gilboa Davara
Hello all,
I'm rebuilding one of my gluster clusters after it blew up following an unfortunate expired-certificate issue.
After I finally remembered to downgrade qemu (grrrr...) and started importing the old gluster storage domains, one of the export domains failed to import due to a "connected to another domain" issue.
How can I force-detach it from the previous cluster?
(I remember something about deleting the lease file - but I'm too brain-dead to find it in DDG...)
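From what I can piece together, the procedure is to edit the export domain's metadata on the storage and clear its pool binding - roughly this (paths illustrative, key names from memory, so please verify):
# cd /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<domain_uuid>/dom_md
# cp metadata metadata.bak
# then edit 'metadata': blank out the POOL_UUID= value and delete the _SHA_CKSUM= line, and retry the import
Is that still the right way to do it?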
Engine errors:
$ cat engine.log | grep c3abcfe6-1062-48e2-8ca4-924b96b8c497
2022-02-07 13:31:24,729+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-27) [c3abcfe6-1062-48e2-8ca4-924b96b8c497] Lock Acquired to object 'EngineLock:{exclusiveLocks='[22eec6d4-f0be-47db-b5d5-678bd84f47c6=STORAGE]', sharedLocks=''}'
2022-02-07 13:31:24,747+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-27) [c3abcfe6-1062-48e2-8ca4-924b96b8c497] Running command: AttachStorageDomainToPoolCommand internal: false. Entities affected : ID: 22eec6d4-f0be-47db-b5d5-678bd84f47c6 Type: StorageAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN, ID: 257a27cc-87ec-11ec-bc62-00163e3fe79d Type: StoragePoolAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2022-02-07 13:31:24,782+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (default task-27) [c3abcfe6-1062-48e2-8ca4-924b96b8c497] START, AttachStorageDomainVDSCommand( AttachStorageDomainVDSCommandParameters:{storagePoolId='257a27cc-87ec-11ec-bc62-00163e3fe79d', ignoreFailoverLimit='false', storageDomainId='22eec6d4-f0be-47db-b5d5-678bd84f47c6'}), log id: 777adb39
2022-02-07 13:31:48,281+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-27) [c3abcfe6-1062-48e2-8ca4-924b96b8c497] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command AttachStorageDomainVDS failed: Storage domain already attached to pool: 'domain=22eec6d4-f0be-47db-b5d5-678bd84f47c6, pool=2cb812a0-4a95-11eb-b3bc-00163e6a0a7c'
2022-02-07 13:31:48,281+02 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (default task-27) [c3abcfe6-1062-48e2-8ca4-924b96b8c497] Command 'AttachStorageDomainVDSCommand( AttachStorageDomainVDSCommandParameters:{storagePoolId='257a27cc-87ec-11ec-bc62-00163e3fe79d', ignoreFailoverLimit='false', storageDomainId='22eec6d4-f0be-47db-b5d5-678bd84f47c6'})' execution failed: IRSGenericException: IRSErrorException: Storage domain already attached to pool: 'domain=22eec6d4-f0be-47db-b5d5-678bd84f47c6, pool=2cb812a0-4a95-11eb-b3bc-00163e6a0a7c'
2022-02-07 13:31:48,281+02 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] (default task-27) [c3abcfe6-1062-48e2-8ca4-924b96b8c497] FINISH, AttachStorageDomainVDSCommand, return: , log id: 777adb39
2022-02-07 13:31:48,281+02 ERROR [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-27) [c3abcfe6-1062-48e2-8ca4-924b96b8c497] Command 'org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailoverException: IRSGenericException: IRSErrorException: Storage domain already attached to pool: 'domain=22eec6d4-f0be-47db-b5d5-678bd84f47c6, pool=2cb812a0-4a95-11eb-b3bc-00163e6a0a7c' (Failed with error StorageDomainAlreadyAttached and code 380)
2022-02-07 13:31:48,283+02 INFO [org.ovirt.engine.core.bll.CommandCompensator] (default task-27) [c3abcfe6-1062-48e2-8ca4-924b96b8c497] Command [id=ec43c2c1-3441-4d91-9bd2-98280e9c0eaa]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: StoragePoolIsoMapId:{storagePoolId='257a27cc-87ec-11ec-bc62-00163e3fe79d', storageId='22eec6d4-f0be-47db-b5d5-678bd84f47c6'}.
2022-02-07 13:31:48,290+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-27) [c3abcfe6-1062-48e2-8ca4-924b96b8c497] EVENT_ID: USER_ATTACH_STORAGE_DOMAIN_TO_POOL_FAILED(963), Failed to attach Storage Domain local_export_storage_1 to Data Center Default. (User: gilboa@internal-authz)
2022-02-07 13:31:48,294+02 INFO [org.ovirt.engine.core.bll.storage.domain.AttachStorageDomainToPoolCommand] (default task-27) [c3abcfe6-1062-48e2-8ca4-924b96b8c497] Lock freed to object 'EngineLock:{exclusiveLocks='[22eec6d4-f0be-47db-b5d5-678bd84f47c6=STORAGE]', sharedLocks=''}'
Thanks,
Gilboa
3 years, 2 months
how to search event not matching a user
by Gianluca Cecchi
Hello,
every event in the Advanced view has a "User" field.
I'm trying to compose a search in the web admin for events whose user is different from myuser@internal, but I can't seem to get what I want.
I also tried basing my attempts on an old 2019 thread (on 4.3.6) where these queries worked:
Disks: name=engine* or name=host*
Disks: alias=engine* or alias=host*
but on 4.4.8 they now return nothing even when there are matching disks.
Any hints, and any documentation reference about the correct syntax to use in 4.4.x?
Thanks,
Gianluca
3 years, 2 months
What happened to oVirt engine-setup?
by Richard W.M. Jones
A while back I had oVirt 4.4.7 installed, which I used for testing.
For some reason that installation has died, so I'm trying to install a fresh oVirt 4.4.10.
Last time I installed oVirt it was very easy - I provisioned a couple of machines, ran engine-setup on one, answered a few questions, and after a few minutes the engine was installed.
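From memory, the whole flow was roughly:
# dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
# dnf install ovirt-engine
# engine-setup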
Somehow this has changed, and it's now far more complicated, involving some Ansible things and wanting to create VMs and ssh everywhere.
Can I go back to the old/easy way of installing oVirt engine? And if
so, what happened to the instructions for that?
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://libguestfs.org
3 years, 2 months
VM Image is Locked after cleaning up of problematic snapshot.
by Muhammad Aidilfitri Bin Saleh
Hello Everyone,
I need your input: a VM currently hosted on oVirt 4.4.4.5 is showing a locked icon in the status column. I am unable to perform any actions on this particular VM, as it reports "Snapshot is currently being created for VM <VMName>".
There was a failed snapshot attempt earlier, but I have gone through the steps to identify and unlock the affected images and snapshot using the bundled tools in /usr/share/ovirt-engine/setup/dbutils. I even went into the DB to identify the failed snapshot, but still no luck.
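For reference, the dbutils commands I ran were along these lines (flags as I recall them; -h prints the authoritative usage):
# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q          # list locked entities
# /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t vm <vm_uuid>    # unlock the VM itself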
I have restarted the ovirt-engine service and also the standalone VM hosting the engine, but the VM is still not released. I have also re-run engine-setup, but still no luck on this front.
Any assistance is appreciated.
3 years, 2 months
Hosted Engine down and not visible on any host
by milan.mithbaokar@gmail.com
Hi,
We have a 4-node oVirt cluster running 4.4.1.5 on iSCSI shared storage. The hosted engine runs on its own separate iSCSI shared storage.
The hosted engine went down all of a sudden and is nowhere to be found on any of the nodes when I run virsh list --all.
All the other VMs show up in the list, but the hosted engine does not.
When we try to bring the hosted engine up using "hosted-engine --vm-start", it errors out with the message "vm does not exist". The IP of the shared iSCSI storage the HE VM was running on is pingable, but "hosted-engine --connect-storage" times out after 60 seconds.
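For completeness, the basic health checks we have been running on each HA host look like this (a sketch):
# systemctl status ovirt-ha-agent ovirt-ha-broker
# hosted-engine --vm-status
# journalctl -u ovirt-ha-agent --since "-1h"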
Any ideas on restoring the hosted engine and the cluster are highly appreciated.
3 years, 2 months
Installing a windows vm from importing an OVA Template
by moizr@emiratesnbd.com
We are trying to create a Windows VM on OLVM/oVirt. Below are the steps we followed:
- Exported an OVF of a plain Windows VM from VMware.
- Imported it as an OVA into OLVM.
- Started the VM after changing the OS type to Windows 2016.
As soon as we select Windows 2016 the VM doesn't start, but it works when we set the OS type to "Other". We want to automate provisioning and setup of the VM through sysprep, but the VM doesn't start once we select a Windows OS type on the template or on any VM created from that template.
3 years, 2 months
Ignore CPU_TYPE_UNSUPPORTED_IN_THIS_CLUSTER_VERSION ?
by Richard W.M. Jones
2022-02-01 19:05:01,952Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-19) [330886aa] EVENT_ID: CPU_TYPE_UNSUPPORTED_IN_THIS_CLUSTER_VERSION(156), Host ovirt4410-host moved to Non-Operational state as host CPU type is not supported in this cluster compatibility version or is not supported at all
The host is a nested VM running on old hardware. I don't care that
it's not supported - this is just for testing copying and it'll
literally never even need to run a VM.
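For the record, what the host actually reports can be dumped with vdsm-client (output field names from memory):
# vdsm-client Host getCapabilities | grep -E '"cpuModel"|"cpuFlags"'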
Is there a way to ignore this and continue?
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://libguestfs.org
3 years, 2 months
Erasure Coding Storage Support
by alishirv@pm.me
Hello everyone,
I'm new to oVirt. I want to set up a cost-efficient, fault-tolerant virtualization environment. Our VMs have low to medium IOPS: 70% of them have low IOPS and about 30% require around 5K IOPS. All the disks are NVMe. The most cost-efficient storage plan I found in current oVirt is a two-way replica with an arbiter, and its storage overhead is more than 100%. As far as I know, erasure coding has less storage overhead. Is there any plan to support erasure coding in oVirt? Would you even recommend erasure coding for VM workloads?
Best regards,
Ali
3 years, 2 months
How to clean a stale entry showing "unknown stale-data" in hosted-engine --vm-status
by Ayansh Rocks
Hi All,
How do I clean a stale hosted-engine entry from the metadata? I want to remove the host with id=3.
Ovirt - 4.4.9
[root@iondelsvr14 vdsm]# hosted-engine --vm-status
--== Host iondelsvr10.iontrading.com (id: 1) status ==--
Host ID : 1
Host timestamp : 2632783
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : iondelsvr10.iontrading.com
Local maintenance : False
stopped : False
crc32 : 882bb390
conf_on_shared_storage : True
local_conf_timestamp : 2632783
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2632783 (Thu Feb 3 09:00:14 2022)
host-id=1
score=3400
vm_conf_refresh_time=2632783 (Thu Feb 3 09:00:14 2022)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host iondelsvr11.iontrading.com (id: 2) status ==--
Host ID : 2
Host timestamp : 6215454
Score : 3400
Engine status : {"vm": "up", "health": "good",
"detail": "Up"}
Hostname : iondelsvr11.iontrading.com
Local maintenance : False
stopped : False
crc32 : 30153e90
conf_on_shared_storage : True
local_conf_timestamp : 6215454
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=6215454 (Thu Feb 3 09:00:16 2022)
host-id=2
score=3400
vm_conf_refresh_time=6215454 (Thu Feb 3 09:00:16 2022)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
--== Host iondelsvr12.iontrading.com (id: 3) status ==--
Host ID : 3
Host timestamp : 734
Score : 1800
Engine status : unknown stale-data
Hostname : iondelsvr12.iontrading.com
Local maintenance : False
stopped : False
crc32 : 062e92e9
conf_on_shared_storage : True
local_conf_timestamp : 734
Status up-to-date : False
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=734 (Fri Jan 28 17:35:18 2022)
host-id=3
score=1800
vm_conf_refresh_time=734 (Fri Jan 28 17:35:18 2022)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
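From what I've read, a stale entry like host 3 above can usually be cleared from the shared metadata, run from one of the healthy hosts while the stale host is down - something like:
# hosted-engine --clean-metadata --host-id=3 --force-clean
Is that safe to run here?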
3 years, 2 months
[ANN] oVirt Node 4.4.10.1 Async update
by Lev Veyde
On February 2nd 2022 the oVirt project released an async update of oVirt
Node (4.4.10.1) delivering fixes for the following issues:
Bug 2048888 <https://bugzilla.redhat.com/show_bug.cgi?id=2048888> - Bad
repositories on fresh install of oVirt Node
Bug 2046038 <https://bugzilla.redhat.com/show_bug.cgi?id=2046038> -
CVE-2021-4034 <https://access.redhat.com/security/cve/CVE-2021-4034>
polkit: Local privilege escalation in pkexec due to incorrect handling of
argument vector [ovirt-4.4]
Several bug fixes and improvements from CentOS Stream.
Here’s the full list of changes (package: version in ovirt-node-ng-4.4.10 -> version in ovirt-node-ng-4.4.10.1):
NetworkManager: 1.36.0-0.3.el8 -> 1.36.0-0.4.el8
NetworkManager-config-server: 1.36.0-0.3.el8 -> 1.36.0-0.4.el8
NetworkManager-libnm: 1.36.0-0.3.el8 -> 1.36.0-0.4.el8
NetworkManager-ovs: 1.36.0-0.3.el8 -> 1.36.0-0.4.el8
NetworkManager-team: 1.36.0-0.3.el8 -> 1.36.0-0.4.el8
NetworkManager-tui: 1.36.0-0.3.el8 -> 1.36.0-0.4.el8
audispd-plugins: 3.0-0.17.20191104git1c2f876.el8 -> 3.0.7-1.el8
audit: 3.0-0.17.20191104git1c2f876.el8 -> 3.0.7-1.el8
audit-libs: 3.0-0.17.20191104git1c2f876.el8 -> 3.0.7-1.el8
bind-export-libs: 9.11.26-6.el8 -> 9.11.36-2.el8
bind-libs: 9.11.26-6.el8 -> 9.11.36-2.el8
bind-libs-lite: 9.11.26-6.el8 -> 9.11.36-2.el8
bind-license: 9.11.26-6.el8 -> 9.11.36-2.el8
bind-utils: 9.11.26-6.el8 -> 9.11.36-2.el8
binutils: 2.30-112.el8 -> 2.30-113.el8
centos-gpg-keys: 8-3.el8 -> 8-4.el8
centos-stream-repos: 8-3.el8 -> 8-4.el8
clevis: 15-6.el8 -> 15-8.el8
clevis-dracut: 15-6.el8 -> 15-8.el8
clevis-luks: 15-6.el8 -> 15-8.el8
clevis-systemd: 15-6.el8 -> 15-8.el8
cockpit: 260-1.el8 -> 261-1.el8
cockpit-bridge: 260-1.el8 -> 261-1.el8
cockpit-storaged: 259-1.el8 -> 261-1.el8
cockpit-system: 260-1.el8 -> 261-1.el8
cockpit-ws: 260-1.el8 -> 261-1.el8
collectd: 5.11.0-2.el8 -> 5.12.0-7.el8s
collectd-disk: 5.11.0-2.el8 -> 5.12.0-7.el8s
collectd-netlink: 5.11.0-2.el8 -> 5.12.0-7.el8s
collectd-virt: 5.11.0-2.el8 -> 5.12.0-7.el8s
collectd-write_http: 5.11.0-2.el8 -> 5.12.0-7.el8s
collectd-write_syslog: 5.11.0-2.el8 -> 5.12.0-7.el8s
cups-libs: 2.2.6-41.el8 -> 2.2.6-42.el8
device-mapper-multipath: 0.8.4-20.el8 -> 0.8.4-21.el8
device-mapper-multipath-libs: 0.8.4-20.el8 -> 0.8.4-21.el8
dhcp-client: 4.3.6-45.el8 -> 4.3.6-47.el8.0.1
dhcp-common: 4.3.6-45.el8 -> 4.3.6-47.el8.0.1
dhcp-libs: 4.3.6-45.el8 -> 4.3.6-47.el8.0.1
dnf-plugins-core: 4.0.21-7.el8 -> 4.0.21-8.el8
fence-agents-all: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-amt-ws: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-apc: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-apc-snmp: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-bladecenter: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-brocade: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-cisco-mds: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-cisco-ucs: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-common: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-compute: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-drac5: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-eaton-snmp: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-emerson: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-eps: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-heuristics-ping: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-hpblade: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-ibm-powervs: 4.2.1-83.el8
fence-agents-ibm-vpc: 4.2.1-83.el8
fence-agents-ibmblade: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-ifmib: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-ilo-moonshot: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-ilo-mp: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-ilo-ssh: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-ilo2: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-intelmodular: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-ipdu: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-ipmilan: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-kdump: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-mpath: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-redfish: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-rhevm: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-rsa: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-rsb: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-sbd: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-scsi: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-vmware-rest: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-vmware-soap: 4.2.1-83.el8 -> 4.2.1-85.el8
fence-agents-wti: 4.2.1-83.el8 -> 4.2.1-85.el8
glibc: 2.28-181.el8 -> 2.28-184.el8
glibc-common: 2.28-181.el8 -> 2.28-184.el8
glibc-langpack-en: 2.28-181.el8 -> 2.28-184.el8
glusterfs: 8.6-2.el8 -> 8.6-2.el8s
glusterfs-cli: 8.6-2.el8 -> 8.6-2.el8s
glusterfs-client-xlators: 8.6-2.el8 -> 8.6-2.el8s
glusterfs-events: 8.6-2.el8 -> 8.6-2.el8s
glusterfs-fuse: 8.6-2.el8 -> 8.6-2.el8s
glusterfs-geo-replication: 8.6-2.el8 -> 8.6-2.el8s
glusterfs-selinux: 2.0.1-1.el8 -> 2.0.1-1.el8s
glusterfs-server: 8.6-2.el8 -> 8.6-2.el8s
iproute: 5.15.0-1.el8 -> 5.15.0-2.el8
iproute-tc: 5.15.0-1.el8 -> 5.15.0-2.el8
kexec-tools: 2.0.20-67.el8 -> 2.0.20-68.el8
kpartx: 0.8.4-20.el8 -> 0.8.4-21.el8
libarchive: 3.3.3-1.el8 -> 3.3.3-3.el8_5
libblkid: 2.32.1-28.el8 -> 2.32.1-32.el8
libbpf: 0.4.0-2.el8 -> 0.4.0-3.el8
libfdisk: 2.32.1-28.el8 -> 2.32.1-32.el8
libgcc: 8.5.0-7.el8 -> 8.5.0-10.el8
libgfapi0: 8.6-2.el8 -> 8.6-2.el8s
libgfchangelog0: 8.6-2.el8 -> 8.6-2.el8s
libgfrpc0: 8.6-2.el8 -> 8.6-2.el8s
libgfxdr0: 8.6-2.el8 -> 8.6-2.el8s
libglusterd0: 8.6-2.el8 -> 8.6-2.el8s
libglusterfs0: 8.6-2.el8 -> 8.6-2.el8s
libgomp: 8.5.0-7.el8 -> 8.5.0-10.el8
libmount: 2.32.1-28.el8 -> 2.32.1-32.el8
libnftnl: 1.1.5-4.el8 -> 1.1.5-5.el8
libsmartcols: 2.32.1-28.el8 -> 2.32.1-32.el8
libstdc++: 8.5.0-7.el8 -> 8.5.0-10.el8
libsysfs: 2.1.0-24.el8 -> 2.1.0-25.el8
libuuid: 2.32.1-28.el8 -> 2.32.1-32.el8
libwbclient: 4.15.3-0.el8 -> 4.15.4-0.el8
llvm-compat-libs: 12.0.1-4.module_el8.6.0+1041+0c503ac4
llvm-libs: 13.0.0-3.module_el8.6.0+1029+6594c364
mdevctl: 0.81-1.el8 -> 1.1.0-2.el8
mesa-dri-drivers: 21.3.0-1.el8 -> 21.3.4-1.el8
mesa-filesystem: 21.3.0-1.el8 -> 21.3.4-1.el8
mesa-libEGL: 21.3.0-1.el8 -> 21.3.4-1.el8
mesa-libGL: 21.3.0-1.el8 -> 21.3.4-1.el8
mesa-libgbm: 21.3.0-1.el8 -> 21.3.4-1.el8
mesa-libglapi: 21.3.0-1.el8 -> 21.3.4-1.el8
nispor: 1.2.2-1.el8 -> 1.2.3-1.el8
nmstate: 1.2.0-1.el8 -> 1.2.1-0.2.alpha2.el8
nmstate-plugin-ovsdb: 1.2.0-1.el8 -> 1.2.1-0.2.alpha2.el8
ovirt-node-ng-image-update-placeholder: 4.4.10-1.el8 -> 4.4.10.1-1.el8
ovirt-release-host-node: 4.4.10-1.el8 -> 4.4.10.1-1.el8
ovirt-release44: 4.4.10-1.el8 -> 4.4.10.1-1.el8
pacemaker-cluster-libs: 2.1.2-2.el8 -> 2.1.2-4.el8
pacemaker-libs: 2.1.2-2.el8 -> 2.1.2-4.el8
pacemaker-schemas: 2.1.2-2.el8 -> 2.1.2-4.el8
platform-python: 3.6.8-44.el8 -> 3.6.8-45.el8
policycoreutils: 2.9-17.el8 -> 2.9-18.el8
policycoreutils-python-utils: 2.9-17.el8 -> 2.9-18.el8
polkit: 0.115-12.el8 -> 0.115-13.el8_5.1
polkit-libs: 0.115-12.el8 -> 0.115-13.el8_5.1
postfix: 3.5.8-2.el8 -> 3.5.8-3.el8
python3-audit: 3.0-0.17.20191104git1c2f876.el8 -> 3.0.7-1.el8
python3-bind: 9.11.26-6.el8 -> 9.11.36-2.el8
python3-cloud-what: 1.28.24-1.el8 -> 1.28.25-1.el8
python3-dnf-plugin-versionlock: 4.0.21-7.el8 -> 4.0.21-8.el8
python3-dnf-plugins-core: 4.0.21-7.el8 -> 4.0.21-8.el8
python3-ethtool: 0.14-3.el8 -> 0.14-4.el8
python3-gluster: 8.6-2.el8 -> 8.6-2.el8s
python3-libnmstate: 1.2.0-1.el8 -> 1.2.1-0.2.alpha2.el8
python3-libs: 3.6.8-44.el8 -> 3.6.8-45.el8
python3-lxml: 4.2.3-3.el8 -> 4.2.3-4.el8
python3-nispor: 1.2.2-1.el8 -> 1.2.3-1.el8
python3-policycoreutils: 2.9-17.el8 -> 2.9-18.el8
python3-subscription-manager-rhsm: 1.28.24-1.el8 -> 1.28.25-1.el8
python3-syspurpose: 1.28.24-1.el8 -> 1.28.25-1.el8
rsyslog: 8.2102.0-6.el8 -> 8.2102.0-7.el8
rsyslog-elasticsearch: 8.2102.0-6.el8 -> 8.2102.0-7.el8
rsyslog-mmjsonparse: 8.2102.0-6.el8 -> 8.2102.0-7.el8
rsyslog-mmnormalize: 8.2102.0-6.el8 -> 8.2102.0-7.el8
rsyslog-openssl: 8.2102.0-6.el8 -> 8.2102.0-7.el8
samba-client-libs: 4.15.3-0.el8 -> 4.15.4-0.el8
samba-common: 4.15.3-0.el8 -> 4.15.4-0.el8
samba-common-libs: 4.15.3-0.el8 -> 4.15.4-0.el8
selinux-policy: 3.14.3-86.el8 -> 3.14.3-89.el8
selinux-policy-targeted: 3.14.3-86.el8 -> 3.14.3-89.el8
sos: 4.2-11.el8 -> 4.2-13.el8
subscription-manager-rhsm-certificates: 1.28.24-1.el8 -> 1.28.25-1.el8
sysfsutils: 2.1.0-24.el8 -> 2.1.0-25.el8
systemd: 239-51.el8_5.2 -> 239-56.el8
systemd-container: 239-51.el8_5.2 -> 239-56.el8
systemd-libs: 239-51.el8_5.2 -> 239-56.el8
systemd-pam: 239-51.el8_5.2 -> 239-56.el8
systemd-udev: 239-51.el8_5.2 -> 239-56.el8
unzip: 6.0-45.el8 -> 6.0-46.el8
util-linux: 2.32.1-28.el8 -> 2.32.1-32.el8
vim-minimal: 8.0.1763-16.el8_5.3 -> 8.0.1763-16.el8_5.4
virt-install: 3.2.0-2.el8 -> 3.2.0-3.el8
virt-manager-common: 3.2.0-2.el8 -> 3.2.0-3.el8
An updated ovirt-appliance will be released soon as well.
Thanks in advance,
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev@redhat.com | lveyde@redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
3 years, 2 months
help install oVirt Engine on CentOS 9 Stream
by david
Hi all,
I can't install the oVirt engine on CentOS Stream 9.
# dnf repolist
repo id repo name
appstream CentOS Stream 9 - AppStream
baseos CentOS Stream 9 - BaseOS
extras-common CentOS Stream 9 - Extras packages
ovirt-4.4 Latest oVirt 4.4 Release
# dnf install ovirt-engine
Last metadata expiration check: 0:15:25 ago on Thu 03 Feb 2022 10:41:41 AM +04.
No match for argument: ovirt-engine
Error: Unable to find a match: ovirt-engine
# yum search ovirt
Last metadata expiration check: 0:07:44 ago on Thu 03 Feb 2022 10:41:41 AM +04.
======================================================================================================= Name & Summary Matched: ovirt
ovirt-hosted-engine-setup.noarch : oVirt Hosted Engine setup tool
ovirt-imageio-client.x86_64 : oVirt imageio client library
ovirt-imageio-common.x86_64 : oVirt imageio common resources
ovirt-imageio-common-debuginfo.x86_64 : Debug information for package ovirt-imageio-common
ovirt-imageio-daemon.x86_64 : oVirt imageio daemon
ovirt-imageio-debugsource.x86_64 : Debug sources for package ovirt-imageio
ovirt-release44.noarch : The oVirt repository configuration
============================================================================================================ Name Matched: ovirt
How can I fix it?
thanks!
3 years, 2 months
Ovirt host issue
by Ayansh Rocks
Hi All,
I was adding an oVirt host to the cluster (along with the engine deploy option), but it failed due to network issues, so I removed the host from the cluster. Then I got the error below. What should I do?
Ovirt Release - 4.4.9
Unconfiguring OVN on host iondelsvr12.iontrading.com might have failed. Please ensure that the host is removed from the OVN Southbound database by using '/usr/share/ovirt-provider-ovn/scripts/remove_chassis.sh iondelsvr12.iontrading.com' manually.
Thanks
3 years, 2 months
gluster and virtualization
by eevans@digitaldatatechs.com
My setup is 3 oVirt nodes that run Gluster independently of the engine server, even though the engine still controls it. So 4 nodes: one engine and 3 clustered nodes.
This has been up and running with no issues except this: my arbiter node will not load the gluster drive when virtualization is enabled in the BIOS. I've been scratching my head on this and need some direction.
I am attaching the error:
https://1drv.ms/u/s!AvgvEzKKSZHbhMRQmUHDvv_Xv7dkhw?e=QGdfYR
Keep in mind, this error does not occur if VT is turned off - it boots normally.
Thanks in advance.
3 years, 2 months
Host needs to be reinstalled after configuring power management
by Andrew DeMaria
Hi,
I am running oVirt 4.3 and found the following action item immediately after configuring power management for a host:
Host needs to be reinstalled as important configuration changes were applied on it.
The thing is, I've just freshly installed this host, and it seems strange that I need to reinstall it.
Is there a better way to install a host and configure power management without having to reinstall it afterwards?
Thanks,
Andrew
3 years, 2 months
migration infinite loop
by Renaud RAKOTOMALALA
Hi,
I switched a host to the "PreparingForMaintenance" state to prepare for an upgrade. At this stage the host migrates all its VMs, but it kept failing on one of them due to network congestion.
Because the host keeps retrying the migration of that VM and the migration constantly fails, we are stuck in an infinite loop.
How can I stop this loop? I'm not able to cancel the maintenance mode from the WebUI.
Any ideas?
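One thing I found but haven't dared to try yet: cancelling the stuck migration from the source host with vdsm-client (verb name as I recall it from the VDSM schema, so please verify):
# vdsm-client VM migrateCancel vmID=<vm_uuid>
Can anyone confirm this is safe?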
Thanks
--
Renaud
3 years, 2 months
Jenkins upgrade
by Denis Volkov
Hello
I'm planning to upgrade jenkins.ovirt.org to the latest LTS version (2.319.2).
I have moved Jenkins to maintenance mode, so no new jobs can be started at the moment. Planned upgrade start time: 10AM UTC. Planned downtime: 30 minutes.
--
Denis Volkov
3 years, 2 months
Cinderlib RBD ceph template issues
by Sketch
This is on oVirt 4.4.8, engine on CentOS Stream 8, hosts on CentOS 8; cluster and DC are both set to 4.6.
With a newly configured cinderlib/Ceph RBD setup, I can create new VM images and copy existing VM images, but I can't copy existing template images to RBD. When I try, I get the error below in cinderlib.log, which sounds like the disk already exists there - but it definitely does not. This leaves me unable to create new VMs on RBD, only to migrate existing VM disks.
2021-09-01 04:31:05,881 - cinder.volume.driver - INFO - Driver hasn't implemented _init_vendor_properties()
2021-09-01 04:31:05,882 - cinderlib-client - INFO - Creating volume '0e8b9aca-1eb1-4837-ac9e-cb3d8f4c1676', with size '500' GB [5c5d0a6b]
2021-09-01 04:31:05,943 - cinderlib-client - ERROR - Failure occurred when trying to run command 'create_volume': Entity '<class 'cinder.db.sqlalchemy.models.Volume'>' has no property 'glance_metadata' [5c5d0a6b]
2021-09-01 04:31:05,944 - cinder - CRITICAL - Unhandled error
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 455, in create
self._raise_with_resource()
File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 222, in _raise_with_resource
six.reraise(*exc_info)
File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 448, in create
model_update = self.backend.driver.create_volume(self._ovo)
File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 986, in create_volume
features=client.features)
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call
rv = execute(f, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute
six.reraise(c, e, tb)
File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
rv = meth(*args, **kwargs)
File "rbd.pyx", line 629, in rbd.RBD.create
rbd.ImageExists: [errno 17] RBD image already exists (error creating image)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/base.py", line 399, in _entity_descriptor
return getattr(entity, key)
AttributeError: type object 'Volume' has no attribute 'glance_metadata'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./cinderlib-client.py", line 170, in main
args.command(args)
File "./cinderlib-client.py", line 208, in create_volume
backend.create_volume(int(args.size), id=args.volume_id)
File "/usr/lib/python3.6/site-packages/cinderlib/cinderlib.py", line 175, in create_volume
vol.create()
File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 457, in create
self.save()
File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 628, in save
self.persistence.set_volume(self)
File "/usr/lib/python3.6/site-packages/cinderlib/persistence/dbms.py", line 254, in set_volume
self.db.volume_update(objects.CONTEXT, volume.id, changed)
File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 236, in wrapper
return f(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 184, in wrapper
return f(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 2570, in volume_update
result = query.filter_by(id=volume_id).update(values)
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 3818, in update
update_op.exec_()
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1670, in exec_
self._do_pre_synchronize()
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1743, in _do_pre_synchronize
self._additional_evaluators(evaluator_compiler)
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1912, in _additional_evaluators
values = self._resolved_values_keys_as_propnames
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1831, in _resolved_values_keys_as_propnames
for k, v in self._resolved_values:
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1818, in _resolved_values
desc = _entity_descriptor(self.mapper, k)
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/base.py", line 402, in _entity_descriptor
"Entity '%s' has no property '%s'" % (description, key)
sqlalchemy.exc.InvalidRequestError: Entity '<class 'cinder.db.sqlalchemy.models.Volume'>' has no property 'glance_metadata'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./cinderlib-client.py", line 390, in <module>
sys.exit(main(sys.argv[1:]))
File "./cinderlib-client.py", line 176, in main
sys.stderr.write(traceback.format_exc(e))
File "/usr/lib64/python3.6/traceback.py", line 167, in format_exc
return "".join(format_exception(*sys.exc_info(), limit=limit, chain=chain))
File "/usr/lib64/python3.6/traceback.py", line 121, in format_exception
type(value), value, tb, limit=limit).format(chain=chain))
File "/usr/lib64/python3.6/traceback.py", line 498, in __init__
_seen=_seen)
File "/usr/lib64/python3.6/traceback.py", line 498, in __init__
_seen=_seen)
File "/usr/lib64/python3.6/traceback.py", line 509, in __init__
capture_locals=capture_locals)
File "/usr/lib64/python3.6/traceback.py", line 338, in extract
if limit >= 0:
TypeError: '>=' not supported between instances of 'InvalidRequestError' and 'int'
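Note that the first exception in the chain is rbd.ImageExists; a sanity check is to list the backing pool directly (the pool name is whatever the cinderlib backend was configured with, and cinder names images volume-<id> as far as I know):
# rbd ls -p <cinder_pool> | grep 0e8b9aca-1eb1-4837-ac9e-cb3d8f4c1676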
3 years, 2 months
Ovirt host error [Cannot configure LVM filter]
by Ayansh Rocks
Hello All,
I was adding a few new hosts to the oVirt cluster, but a couple of them gave this error.
oVirt release: 4.4.10
Installing Host iondelsvr14.iontrading.com. Check for LVM filter configuration error: Cannot configure LVM filter on host, please run: vdsm-tool config-lvm-filter.
However, a few of the hosts were able to configure it.
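Running the suggested tool interactively on a failing host should show the filter it wants and why it can't apply it (a sketch):
# vdsm-tool config-lvm-filter        # inspect the proposed filter and any conflicts
# vdsm-tool config-lvm-filter -y     # apply it if the proposal looks sane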
Thanks
3 years, 2 months
gerrit.ovirt.org upgrade
by Denis Volkov
Hello
gerrit.ovirt.org will be upgraded later today; the approximate start time is 4PM UTC.
During the upgrade the service will be shut down and will not be available.
--
Denis Volkov
3 years, 2 months
Incorrect percentages vol dashboard client
by Ilya Fedotov
Hello, ovirt
I have a confusing situation with how the dashboard calculates disk volume usage; I attached a screenshot of it. How can 130% of a volume be used?
oVirt VM Portal
- Version 1.7.2-1
- oVirt API Version 4.4
Thanks for any reply.
with br, Ilya Fedotov
3 years, 2 months
no snapshot but engine complains there is one when trying to remove disk
by Nathanaël Blanchet
Hi all,
A colleague launched a snapshot creation this morning; there was no error message, but he wasn't able to start the VM anymore, with this issue: VM PSI-SYB-DEV is down with error. Exit message: Unable to get volume size for domain a5be6cae-f0c8-452f-b7cd-70d0e5eed710 volume 109fac1e-c2e3-4ba6-9867-5d1c94d3a447.
I tried many things but nothing works: it is impossible to remove this disk, copy it, or clone the VM:
Cannot detach Virtual Disk. The disk is already configured in a snapshot. In order to detach it, remove the disk's snapshots.
There is no snapshot listed in the webui or the API (so no snapshot id), while disk snapshots do exist in the associated storage domain.
So I tried to remove the disk snapshots directly with the API, using the DELETE method:
DELETE /api/storagedomains/{storage_id}/disksnapshots/{image_id}
[Cannot remove Disk Snapshot. VM's Snapshot does not exist.]
In conclusion: the VM's disks can't be removed because of the non-existent associated VM snapshot id.
I tried to remove the VM snapshot disk references directly in the postgres DB, but I didn't find the correct table.
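In case it helps, the tables I have been poking at can be inspected read-only like this (table and column names are from the 4.x schema as best I can tell):
# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select snapshot_id, snapshot_type, status from snapshots where vm_id = '<vm_uuid>';"
# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select image_guid, vm_snapshot_id, active from images where image_group_id = '<disk_id>';"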
What can I do now?
Your precious help is welcome.
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet@abes.fr
3 years, 2 months