Cinderlib RBD ceph template issues
by Sketch
This is on oVirt 4.4.8, engine on CS8, hosts on C8, cluster and DC are
both set to 4.6.
With a newly configured cinderlib/Ceph RBD setup, I can create new VM
images and copy existing VM images, but I can't copy existing template
images to RBD. When I try, I get the error below in cinderlib.log, which
sounds like the disk already exists there, but it definitely does not.
This leaves me unable to create new VMs on RBD; I can only migrate
existing VM disks.
2021-09-01 04:31:05,881 - cinder.volume.driver - INFO - Driver hasn't implemented _init_vendor_properties()
2021-09-01 04:31:05,882 - cinderlib-client - INFO - Creating volume '0e8b9aca-1eb1-4837-ac9e-cb3d8f4c1676', with size '500' GB [5c5d0a6b]
2021-09-01 04:31:05,943 - cinderlib-client - ERROR - Failure occurred when trying to run command 'create_volume': Entity '<class 'cinder.db.sqlalchemy.models.Volume'>' has no property 'glance_metadata' [5c5d0a6b]
2021-09-01 04:31:05,944 - cinder - CRITICAL - Unhandled error
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 455, in create
self._raise_with_resource()
File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 222, in _raise_with_resource
six.reraise(*exc_info)
File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 448, in create
model_update = self.backend.driver.create_volume(self._ovo)
File "/usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py", line 986, in create_volume
features=client.features)
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call
rv = execute(f, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute
six.reraise(c, e, tb)
File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
raise value
File "/usr/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
rv = meth(*args, **kwargs)
File "rbd.pyx", line 629, in rbd.RBD.create
rbd.ImageExists: [errno 17] RBD image already exists (error creating image)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/base.py", line 399, in _entity_descriptor
return getattr(entity, key)
AttributeError: type object 'Volume' has no attribute 'glance_metadata'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./cinderlib-client.py", line 170, in main
args.command(args)
File "./cinderlib-client.py", line 208, in create_volume
backend.create_volume(int(args.size), id=args.volume_id)
File "/usr/lib/python3.6/site-packages/cinderlib/cinderlib.py", line 175, in create_volume
vol.create()
File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 457, in create
self.save()
File "/usr/lib/python3.6/site-packages/cinderlib/objects.py", line 628, in save
self.persistence.set_volume(self)
File "/usr/lib/python3.6/site-packages/cinderlib/persistence/dbms.py", line 254, in set_volume
self.db.volume_update(objects.CONTEXT, volume.id, changed)
File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 236, in wrapper
return f(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 184, in wrapper
return f(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/cinder/db/sqlalchemy/api.py", line 2570, in volume_update
result = query.filter_by(id=volume_id).update(values)
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/query.py", line 3818, in update
update_op.exec_()
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1670, in exec_
self._do_pre_synchronize()
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1743, in _do_pre_synchronize
self._additional_evaluators(evaluator_compiler)
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1912, in _additional_evaluators
values = self._resolved_values_keys_as_propnames
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1831, in _resolved_values_keys_as_propnames
for k, v in self._resolved_values:
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/persistence.py", line 1818, in _resolved_values
desc = _entity_descriptor(self.mapper, k)
File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/base.py", line 402, in _entity_descriptor
"Entity '%s' has no property '%s'" % (description, key)
sqlalchemy.exc.InvalidRequestError: Entity '<class 'cinder.db.sqlalchemy.models.Volume'>' has no property 'glance_metadata'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./cinderlib-client.py", line 390, in <module>
sys.exit(main(sys.argv[1:]))
File "./cinderlib-client.py", line 176, in main
sys.stderr.write(traceback.format_exc(e))
File "/usr/lib64/python3.6/traceback.py", line 167, in format_exc
return "".join(format_exception(*sys.exc_info(), limit=limit, chain=chain))
File "/usr/lib64/python3.6/traceback.py", line 121, in format_exception
type(value), value, tb, limit=limit).format(chain=chain))
File "/usr/lib64/python3.6/traceback.py", line 498, in __init__
_seen=_seen)
File "/usr/lib64/python3.6/traceback.py", line 498, in __init__
_seen=_seen)
File "/usr/lib64/python3.6/traceback.py", line 509, in __init__
capture_locals=capture_locals)
File "/usr/lib64/python3.6/traceback.py", line 338, in extract
if limit >= 0:
TypeError: '>=' not supported between instances of 'InvalidRequestError' and 'int'
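The log claims the RBD image already exists. For anyone who wants to
double-check that on the Ceph side, here is a minimal, untested sketch
using the python-rbd bindings; the pool name 'ovirt-volumes' is an
assumption, so substitute whatever pool the cinderlib storage domain is
configured to use:

import rados
import rbd

volume_id = '0e8b9aca-1eb1-4837-ac9e-cb3d8f4c1676'
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('ovirt-volumes')  # assumed pool name
try:
    # Cinder's RBD driver normally names images 'volume-<id>', so match on the UUID.
    print([name for name in rbd.RBD().list(ioctx) if volume_id in name])
finally:
    ioctx.close()
    cluster.shutdown()

If it prints an empty list, no image with that UUID exists in the pool at
the time of the check.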
Import an exported VM using Ansible
by paolo@airaldi.it
Hello everybody!
I'm trying to automate copying a VM from one Datacenter to another using an Ansible playbook.
I'm able to:
- Create a snapshot of the source VM
- create a clone from the snapshot
- remove the snapshot
- attach an Export Domain
- export the clone to the Export Domain
- remove the clone
- detach the Export Domain from the source Datacenter and attach it to the destination.
Unfortunately I cannot find a module to:
- import the VM from the Export Domain
- delete the VM image from the Export Domain.
Any hint on how to do that?
Thanks in advance. Cheers.
Paolo
PS: if someone is interested I can share the playbook.
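In case it helps while looking for a module: below is a rough, untested
sketch of how the two missing steps could be scripted with the oVirt
Python SDK (ovirtsdk4), for example from an Ansible script/command task.
The storage domain, cluster and VM names ('export', 'data', 'Default',
'myclone') and the connection details are placeholders, not values from
the playbook above.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',   # placeholder
    username='admin@internal',
    password='secret',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)
try:
    sds_service = connection.system_service().storage_domains_service()
    export_sd = sds_service.list(search='name=export')[0]
    export_vms = sds_service.storage_domain_service(export_sd.id).vms_service()

    # Find the exported clone on the Export Domain.
    exported_vm = next(v for v in export_vms.list() if v.name == 'myclone')
    exported_vm_service = export_vms.vm_service(exported_vm.id)

    # Import it into a data domain and cluster of the destination Datacenter.
    exported_vm_service.import_(
        storage_domain=types.StorageDomain(name='data'),
        cluster=types.Cluster(name='Default'),
        vm=types.Vm(name='myclone'),
    )

    # After the import has completed (poll the new VM's status before this),
    # the copy left on the Export Domain can be removed.
    exported_vm_service.remove()
finally:
    connection.close()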
did 4.3.9 reset bug https://bugzilla.redhat.com/show_bug.cgi?id=1590266
by kelley bryan
I am seeing the following error message in the ovirt-hosted-engine-setup-ansible-create_target_vm log:
{2020-05-06 14:15:30,024-0500 ERROR ansible failed {'status': 'FAILED', 'ansible_type': 'task', 'ansible_task': u"Fail if Engine IP is different from engine's he_fqdn resolved IP", 'ansible_result': u'type: <type \'dict\'>\nstr: {\'msg\': u"Engine VM IP address is while the engine\'s he_fqdn ovirt1-engine.kelleykars.org resolves to 192.168.122.2. If you are using DHCP, check your DHCP reservation configuration", \'changed\': False, \'_ansible_no_log\': False}', 'task_duration': 1, 'ansible_host': u'localhost', 'ansible_playbook': u'/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml'}}:Q!
Bug 1590266 says the message should report the engine VM IP address
xxx.xxx.xxx.xxx while the engine's he_fqdn is xxxxxxxxx; in my message the
VM IP address part is empty.
I need to see what it thinks is wrong, as both 'dig <engine fqdn>' and
'dig -x <ip>' return the correct information.
This issue also looks like it may be in play, but I don't see the failed
readiness check from https://access.redhat.com/solutions/4462431 in my log.
Or is it because the VM fails or dies, or something else?
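For reference, the forward and reverse lookups that dig performs can also
be reproduced with a couple of lines of standard-library Python, using the
he_fqdn from the log above:

import socket

fqdn = 'ovirt1-engine.kelleykars.org'
ip = socket.gethostbyname(fqdn)                  # forward lookup, like 'dig <fqdn>'
name, aliases, addrs = socket.gethostbyaddr(ip)  # reverse lookup, like 'dig -x <ip>'
print(ip, name)

This only checks DNS from wherever it is run; it says nothing about which
address the deployment playbook actually obtained for the engine VM.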
After upgrade to vdsm-4.40.90.4-1.el8 - Internal JSON-RPC error - how to fix?
by John Mortensen
Hi,
After we upgraded to vdsm-4.40.90.4-1.el8 on our two-node cluster, two things have happened:
1. The first node that was upgraded now continuously logs this error (see the short illustration after the list):
VDSM <node name> command Get Host Statistics failed: Internal JSON-RPC error: {'reason': "'str' object has no attribute 'decode'"}
2. During the import of two virtual machines from VMware (we have done multiple such imports before the upgrade), the import never seems to finish - it is currently on day 2-3... any clues how to fix this?
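The 'decode' error in item 1 is the classic symptom of code that expects
bytes receiving a str under Python 3, where only bytes objects have a
decode() method. A minimal illustration of the error class (this is not
vdsm code, just made-up data):

# Python 3
stats_bytes = b'cpu.user=1.2'
print(stats_bytes.decode())   # fine: bytes -> str

stats_str = 'cpu.user=1.2'
print(stats_str.decode())     # AttributeError: 'str' object has no attribute 'decode'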
/John
Lots of storage.MailBox.SpmMailMonitor
by Fabrice Bacchella
My vdsm log files are huge:
-rw-r--r-- 1 vdsm kvm 1.8G Nov 22 11:32 vdsm.log
And this is just half an hour of logs:
$ head -1 vdsm.log
2018-11-22 11:01:12,132+0100 ERROR (mailbox-spm) [storage.MailBox.SpmMailMonitor] mailbox 2 checksum failed, not clearing mailbox, clearing new mail (data='...lots of data', expected='\xa4\x06\x08\x00') (mailbox:612)
I just upgraded vdsm:
$ rpm -qi vdsm
Name : vdsm
Version : 4.20.43
Suggested upgrading path from CentOS based 4.4.8 to 4.4.9
by Gianluca Cecchi
I have a lab with an environment based on 4.4.8.6-1, with 3 CentOS Linux
8.4 hosts and a CentOS 8.4 external engine system (that is a VM on vSphere,
so that I can leverage a snapshot methodology for the process...).
I would like to move to 4.4.9 and keep a full plain OS on the hosts for the
moment, without going through oVirt Node, but given the repo problems and
CentOS Linux 8.x approaching EOL, this is what I'm planning to do:
1. stop engine service on engine system
2. convert engine to CentOS Stream
This step needs some confirmation.
Could you provide an official link describing the process?
I'm not able to find it again. Is it just me, or do all the obvious
sources (CentOS website, RHEL website) seem to point only to conversion
from CentOS Linux to RHEL?
Apart from workflows provided on external websites, I was only able to
find a mid-January YouTube video, from when CentOS was based on 8.3, with
these steps:
yum install centos-release-stream
yum swap centos-{linux,stream}-repos
yum repolist
yum distro-sync
reboot
The video link is here:
https://www.youtube.com/watch?v=Ba2ytp_8x7s
No mention at
https://www.redhat.com/en/blog/faq-centos-stream-updates
And on CentOS page I only found this:
https://centos.org/distro-faq/
with Q7 containing only the two instructions:
dnf swap centos-linux-repos centos-stream-repos
dnf distro-sync
Which of the two procedures is the safe one to use?
Is it possible to include some documentation or links on the oVirt site
about migrating from CentOS Linux to CentOS Stream for oVirt upgrade
purposes?
3. After the reboot implied, I think, by step 2, use the usual steps to
update the engine to 4.4.9.
4. update the first of the three hosts from CentOS Linux to CentOS Stream
and to 4.4.9:
4.a follow the same approach as for the engine (once defined) and move it
to Stream while staying on 4.4.8;
4.b upgrade it from the web admin GUI to 4.4.9.
5. Do the same for the second and third hosts.
Any hints, comments, or known limitations in having mixed 4.4.8 and 4.4.9
hosts for a while, and so on?
Thanks,
Gianluca
[FOSDEM][CFP] Virtualization & IaaS Devroom
by Piotr Kliczewski
We are excited to announce that the call for proposals is now open for the
Virtualization & IaaS devroom at the upcoming FOSDEM 2022, to be hosted
virtually on February 5th 2022.
This year will mark FOSDEM’s 22nd anniversary as one of the longest-running
free and open source software developer events, attracting thousands of
developers and users from all over the world. Due to Covid-19, FOSDEM will
be held virtually this year on February 5th & 6th, 2022.
About the Devroom
The Virtualization & IaaS devroom will feature session topics such as open
source hypervisors and virtual machine managers such as Xen Project, KVM,
bhyve, and VirtualBox, and Infrastructure-as-a-Service projects such as
KubeVirt, Apache CloudStack, Foreman, OpenStack, oVirt, QEMU and OpenNebula.
This devroom will host presentations that focus on topics of shared
interest, such as KVM; libvirt; shared storage; virtualized networking;
cloud security; clustering and high availability; interfacing with multiple
hypervisors; hyperconverged deployments; and scaling across hundreds or
thousands of servers.
Presentations in this devroom will be aimed at users or developers working
on these platforms who are looking to collaborate and improve shared
infrastructure or solve common problems. We seek topics that encourage
dialog between projects and continued work post-FOSDEM.
Important Dates
Submission deadline: 20th of December
Acceptance notifications: 25th of December
Final schedule announcement: 31st of December
Recorded presentations upload deadline: 15th of January
Devroom: 6th February 2022
Submit Your Proposal
All submissions must be made via the Pentabarf event planning site[1]. If
you have not used Pentabarf before, you will need to create an account. If
you submitted proposals for FOSDEM in previous years, you can use your
existing account.
After creating the account, select Create Event to start the submission
process. Make sure to select Virtualization and IaaS devroom from the Track
list. Please fill out all the required fields, and provide a meaningful
abstract and description of your proposed session.
Submission Guidelines
We expect more proposals than we can possibly accept, so it is vitally
important that you submit your proposal on or before the deadline. Late
submissions are unlikely to be considered.
All presentation slots are 30 minutes, with 20 minutes planned for
presentations, and 10 minutes for Q&A.
All presentations will need to be pre-recorded and put into our system at
least a couple of weeks before the event.
The presentations should be uploaded by 15th of January and made available
under Creative
Commons licenses. In the Submission notes field, please indicate that you
agree that your presentation will be licensed under the CC-By-SA-4.0 or
CC-By-4.0 license and that you agree to have your presentation recorded.
For example:
"If my presentation is accepted for FOSDEM, I hereby agree to license all
recordings, slides, and other associated materials under the Creative
Commons Attribution Share-Alike 4.0 International License. Sincerely,
<NAME>."
In the Submission notes field, please also confirm that if your talk is
accepted, you will be able to attend the virtual FOSDEM event for the Q&A.
We will not consider proposals from prospective speakers who are unsure
whether they will be able to attend the FOSDEM virtual event.
If you are experiencing problems with Pentabarf, the proposal submission
interface, or have other questions, you can email our devroom mailing
list[2] and we will try to help you.
Code of Conduct
Following the release of the updated code of conduct for FOSDEM, we'd like
to remind all speakers and attendees that all of the presentations and
discussions in our devroom are held under the guidelines set in the CoC and
we expect attendees, speakers, and volunteers to follow the CoC at all
times.
If you submit a proposal and it is accepted, you will be required to
confirm that you accept the FOSDEM CoC. If you have any questions about the
CoC or wish to have one of the devroom organizers review your presentation
slides or any other content for CoC compliance, please email us and we will
do our best to assist you.
Call for Volunteers
We are also looking for volunteers to help run the devroom. We need
assistance with helping speakers to record the presentation as well as
helping with streaming and chat moderation for the devroom. Please contact
devroom mailing list [2] for more information.
Questions?
If you have any questions about this devroom, please send your questions to
our devroom mailing list. You can also subscribe to the list to receive
updates about important dates, session announcements, and to connect with
other attendees.
See you all at FOSDEM!
[1] https://penta.fosdem.org/submission/FOSDEM22
[2] iaas-virt-devroom at lists.fosdem.org
Upgraded to oVirt 4.4.9, still have vdsmd memory leak
by Chris Adams
I have seen vdsmd leak memory for years (I've been running oVirt since
version 3.5), but never been able to nail it down. I've upgraded a
cluster to oVirt 4.4.9 (reloading the hosts with CentOS 8-stream), and I
still see it happen. One host in the cluster, which has been up 8 days,
has vdsmd with 4.3 GB resident memory. On a couple of other hosts, it's
around half a gigabyte.
In the past, it seemed more likely to happen on the hosted engine hosts
and/or the SPM host... but the host with the 4.3 GB vdsmd is not either
of those.
I'm not sure what I do that would make my setup "special" compared to
others; I loaded a pretty minimal install of CentOS 8-stream, with the
only extra thing being I add the core parts of the Dell PowerEdge
OpenManage tools (so I can get remote SNMP hardware monitoring).
When I run "pmap $(pidof -x vdsmd)", the bulk of the RAM use is a single
anonymous block (which I'm guessing is just the python general memory
allocator).
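For what it's worth, one generic way to see past the single anonymous
block that pmap reports is Python's built-in tracemalloc module, which
attributes allocations to source lines. This is only an illustration of
the module itself, not a claim about how (or whether) it can be wired
into a running vdsmd:

import tracemalloc

tracemalloc.start(25)            # keep up to 25 frames per allocation

# ... let the suspected workload run for a while ...

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics('lineno')[:10]:
    print(stat)                  # top allocation sites by cumulative size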
I thought maybe the switch to CentOS 8 and python 3 might clear
something up, but obviously not. Any ideas?
--
Chris Adams <cma(a)cmadams.net>
what is the best practice to delete vdisks in a 100% used gluster storage domain
by dhanaraj.ramesh@yahoo.com
Hi Team
In my three-node hyperconverged (Gluster + oVirt) setup, one of the storage domains went offline in the Datacenter due to 100% storage space utilization, but luckily it is still online in Gluster. How should I properly delete some of the vdisks in that offline storage domain from the Gluster side and bring it back online?
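As a starting point for deciding which vdisks are the big ones, here is a
small sketch that only lists the largest image files on the mounted
Gluster volume. The mount path is an assumption (oVirt usually mounts
Gluster domains under /rhev/data-center/mnt/glusterSD/<server>:_<volume>),
and the sketch deliberately does not delete anything - whether removing an
image behind oVirt's back is safe is exactly the open question here.

import os

mount = '/rhev/data-center/mnt/glusterSD/node1:_vmstore'  # assumed path, adjust

sizes = []
for root, _dirs, files in os.walk(mount):
    for name in files:
        path = os.path.join(root, name)
        try:
            st = os.stat(path)
        except OSError:
            continue                      # skip files that vanish mid-walk
        # st_blocks * 512 = bytes actually allocated (images are often sparse)
        sizes.append((st.st_blocks * 512, path))

for size, path in sorted(sizes, reverse=True)[:20]:
    print(f'{size / 2**30:8.1f} GiB  {path}')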
broker.log file filling up
by Valerio Luccio
Hello all,
my broker.log files are suddenly filling up with lines that start with:
Listener::DEBUG::2021-11-24 09:13:20,795::listener::103::ovirt_hosted_engine_ha.broker.listener.Action.get_stats::(wrapper) Executing RPC handler get_stats with params (<ovirt_hosted_engine_ha.broker.listener.ActionsHandler object at 0x7fd3f0411b70>,)
Listener::DEBUG::2021-11-24 09:13:20,796::listener::145::ovirt_hosted_engine_ha.broker.listener._encode::(_encode) Encoded successfully: b'maintenance=
Followed by a long list of nulls (0\x00)
OS: CentOS 8
Ovirt version: 4.4
Any ideas ?
--
Valerio Luccio
High Performance Computing 10 Astor Place, Room 416D
New York University New York, NY 10003
"In an open world, who needs windows or gates ?"