Certificate replacing through engine-setup failed
by dominic.gwerder@bits.ch
Hi
I'm trying to replace the old certificate of the engine so that I can access the management console again. I've been reading through a lot of articles and threads, and the easiest solution should be to run engine-setup. I always get the following error:
[ INFO ] Creating CA: /etc/pki/ovirt-engine/ca.pem
[ ERROR ] Failed to execute stage 'Misc configuration': Command '/usr/share/ovirt-engine/bin/pki-enroll-pkcs12.sh' failed to execute
How can I fix this?
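A minimal diagnostic sketch that might narrow this down (standard engine paths assumed; adjust if yours differ). The console only reports that pki-enroll-pkcs12.sh failed; the setup log normally contains the actual failing command and its output:

# find the real error behind the pki-enroll-pkcs12.sh failure
grep -iE 'error|pki-enroll' /var/log/ovirt-engine/setup/ovirt-engine-setup-*.log | tail -n 50

# sanity-check the existing CA material engine-setup is trying to work with
openssl x509 -in /etc/pki/ovirt-engine/ca.pem -noout -subject -enddate
ls -l /etc/pki/ovirt-engine/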
Thanks in advance for any help!
Kind regards,
Dominic
Gluster: Ideas for migration
by jonas@rabe.ch
Hello
I have to migrate the Gluster volumes from an old oVirt cluster to a newly built one. I looked into migration strategies, but everything that Red Hat recommends is related to replacing old bricks. In a testing environment I created two clusters and wanted to migrate one volume after the other. Unfortunately that fails because a node cannot be part of two clusters at the same time.
The next option I see is to recreate the volumes on the new cluster, continuously rsync the files from the old cluster to the new one, and at a specified point in time do the cutover: stop the application, run a final rsync, and remount the new volume under the old path (a rough sketch of the sync step follows).
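A rough sketch of that sync step (hosts, volume names and mount points below are placeholders, not our real ones):

# both volumes FUSE-mounted on a helper node (placeholder names)
mount -t glusterfs oldnode1:/data /mnt/old-data
mount -t glusterfs newnode1:/data /mnt/new-data

# repeated incremental syncs while the application keeps running
rsync -aAXH --delete /mnt/old-data/ /mnt/new-data/

# at cutover: stop the application, run one final rsync, remount the new volume under the old path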
Is there any other, nicer way I could accomplish migrating a volume from one Gluster cluster to another?
Engine on EL 9
by David Carvalho
Hello, good morning.
I’m using Oracle Linux and I intended to install a virtualization platform with KVM and Oracle VM. The Oracle documentation only mentions Oracle Linux 8, and there are no oVirt repositories available for OL 9.
I visited the ovirt.org site, and the download page only mentions:
Engine:
* Red Hat Enterprise Linux 8.7 (or similar)
* CentOS Stream 8
I have not yet had a reply on the Oracle forums. Will it be possible to use this with Oracle Linux 9 soon?
I have 3 servers to install and I also intend to use Gluster FS.
Thanks and regards.
Best regards,
David Alexandre M. de Carvalho
═══════════════════
IT Specialist
Department of Informatics
Universidade da Beira Interior
VMs randomly pause due to unknown storage error, unable to resume
by Jon Sattelberger
Hi,
> VM xxx has been paused due to unknown storage error.
> Migration failed due to a failed validation: [Migrating a VM in paused status due to I/O error is not supported.] (VM: xxx, Source: yyy).
Until recently, oVirt 4.5.4 had been running fine on our RHEL 8 hypervisors with primarily Linux guests (and a few appliances). We started to add Windows 2019 VMs to the cluster with the guest agent installed. They seem to run fine at first, but some of the Windows VMs randomly pause due to an unknown storage error. The VM cannot be resumed through the UI or virsh, and the paused VM cannot be migrated to another hypervisor. The GlusterFS storage volumes seem fine. Resetting the VM works, but eventually it becomes paused again. The only thing that came to my mind is that the virtual hard disks are thin provisioned. Is a preallocated disk necessary for Windows VMs? Any helpful hints on where to look next are greatly appreciated.
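In case it helps to narrow this down, a hedged set of checks (generic libvirt/vdsm/Gluster commands only; VM and volume names are placeholders, and log paths and message wording may differ slightly by version):

virsh -r domstate <vm-name> --reason        # why libvirt paused the VM (e.g. I/O error, ENOSPC)
grep -iE 'abnormal vm stop|i/o error' /var/log/vdsm/vdsm.log | tail -n 20
grep -iE 'error|enospc' /var/log/glusterfs/rhev-data-center-mnt-glusterSD-*.log | tail -n 50   # Gluster FUSE mount log; path may differ
gluster volume status <volume> detail       # brick free space and inode usage

One thing worth ruling out with thin-provisioned disks is the underlying storage running out of space during allocation, which also manifests as a paused VM.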
Thank you,
Jon
oVirt support for NVMe-oF devices as storage domain.
by lxtakc@gmail.com
Hello,
We are interested in using oVirt with NVMe-oF devices. Is there an option to connect them as Storage Domain? What should be done to achieve it?
Managed Block Storage (cinderlib) is not an option for us, because it doesn't support backups (incremental backups with e.g. Veeam or others).
What would need to be done on the oVirt side to support NVMe-oF as a Storage Domain (like iSCSI, FCP, NFS)?
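Not an answer on the oVirt side, but for completeness the host-side piece is just nvme-cli; a sketch (transport, address and NQN below are placeholders):

nvme discover -t tcp -a 192.0.2.10 -s 4420                                      # list subsystems offered by the target (placeholder address)
nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2023-01.com.example:subsys1    # placeholder NQN
nvme list                                                                        # connected namespaces show up as regular /dev/nvmeXnY block devices

Whether oVirt can then consume such a device as a full Storage Domain (rather than via Managed Block Storage) is exactly the open question here.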
Thanks.
Re: how to renew expired ovirt node vdsm cert manually ?
by Dominic Gwerder
Hi
I followed all the instructions but I now get the following error on my host:
ovn-controller
ovs|08734|stream_ssl|ERR|Private key must be configured to use SSL
PRIORITY=3
SYSLOG_FACILITY=3
SYSLOG_IDENTIFIER=ovn-controller
_BOOT_ID=52c3fc529b1644dbb789384be75b0cab
_CAP_EFFECTIVE=7c00
_CMDLINE=ovn-controller unix:/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --private-key=/etc/pki/vdsm/keys/vdsmkey.pem --certificate=/etc/pki/vdsm/certs/vdsmcert.pem --ca-cert=/etc/pki/vdsm/certs/cacert.pem --user openvswitch:openvswitch --no-chdir --log-file=/var/log/ovn/ovn-controller.log --pidfile=/run/ovn/ovn-controller.pid --detach
_COMM=ovn-controller
_EXE=/usr/bin/ovn-controller
Do you have a solution for this? Does the user openvswitch need access to the keys?
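A minimal permission check along these lines (paths taken from the _CMDLINE above) might help narrow it down; if the key is readable by root only, ovn-controller running as openvswitch:openvswitch cannot open it:

ls -l /etc/pki/vdsm/keys/vdsmkey.pem /etc/pki/vdsm/certs/vdsmcert.pem /etc/pki/vdsm/certs/cacert.pem
id openvswitch
openssl x509 -noout -subject -enddate -in /etc/pki/vdsm/certs/vdsmcert.pem
# confirm the key and certificate actually belong together
openssl x509 -noout -modulus -in /etc/pki/vdsm/certs/vdsmcert.pem | openssl md5
openssl rsa  -noout -modulus -in /etc/pki/vdsm/keys/vdsmkey.pem   | openssl md5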
Thanks in advance.
Kind regards,
Dominic Gwerder
bits ag
What is the impact of doubling default NumOfPciExpressPorts value?
by ivan.lezhnjov.iv@gmail.com
Hi!
We've been trying to understand the effect of increasing NumOfPciExpressPorts in terms of system resource consumption and the overall performance of the oVirt hosts.
In this specific scenario, we're running an oVirt deployment with about 100 hosts and thousands of VMs.
oVirt allocates 16 PCIe ports by default (or is it QEMU defaults?) and we would like to increase that number to about 30.
However, only a few VMs will actually use those extra PCIe ports, in the sense of having additional devices (like NICs) attached.
So, basically, a lot of VMs will have about twice as many PCIe ports available to them as they do now, without really using them (i.e. no additional devices actually created and associated with the VMs).
How is that going to impact physical servers that are running the VMs? Any potential problems for the VMs themselves?
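For context, this is the knob in question, assuming the stock engine-config key of that name (the default of 16 mentioned above comes from it); as far as we understand, the new value only affects VMs started after the change:

engine-config -g NumOfPciExpressPorts        # show the current value
engine-config -s NumOfPciExpressPorts=30     # raise the default number of PCIe root ports
systemctl restart ovirt-engine               # engine restart needed for the new value to take effect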
Ivan
Unable to start HostedEngine
by Devin A. Bougie
After a failed attempt at migrating our HostedEngine to a new iSCSI storage domain, we're unable to restart the original HostedEngine.
Please see below for some details, and let me know what more information I can provide. "lnxvirt07" was the host used to attempt the migration. Any help would be greatly appreciated.
Many thanks,
Devin
------
[root@lnxvirt01 ~]# tail -n 5 /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2023-11-01 12:29:53,514::state_decorators::51::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Global maintenance detected
MainThread::INFO::2023-11-01 12:29:54,151::ovf_store::117::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:05ef954f-d06d-401c-85ec-5992e2afbe7d, volUUID:d2860f1d-19cf-4084-8a7e-d97880c32431
MainThread::INFO::2023-11-01 12:29:54,530::ovf_store::117::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(scan) Found OVF_STORE: imgUUID:a375a35b-7a87-4df4-8d29-a5ba371fee85, volUUID:ef8b3dae-bcae-4d58-bea8-cf1a34872267
MainThread::ERROR::2023-11-01 12:29:54,813::config_ovf::65::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config.vm::(_get_vm_conf_content_from_ovf_store) Failed extracting VM OVF from the OVF_STORE volume, falling back to initial vm.conf
MainThread::INFO::2023-11-01 12:29:54,843::hosted_engine::531::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitoring_loop) Current state GlobalMaintenance (score: 3400)
[root@lnxvirt01 ~]# hosted-engine --vm-start
Command VM.getStats with args {'vmID': 'e6370d8f-c083-4f28-83d0-a232d693e07a'} failed:
(code=1, message=Virtual machine does not exist: {'vmId': 'e6370d8f-c083-4f28-83d0-a232d693e07a'})
Command VM.create with args {'vmID': 'e6370d8f-c083-4f28-83d0-a232d693e07a', 'vmParams': {'vmId': 'e6370d8f-c083-4f28-83d0-a232d693e07a', 'memSize': '16384', 'display': 'vnc', 'vmName': 'HostedEngine', 'smp': '4', 'maxVCpus': '40', 'cpuType': 'Haswell-noTSX', 'emulatedMachine': 'pc', 'devices': [{'index': '2', 'iface': 'ide', 'address': {'controller': '0', 'target': '0', 'unit': '0', 'bus': '1', 'type': 'drive'}, 'specParams': {}, 'readonly': 'true', 'deviceId': 'b3e2f40a-e28d-493c-af50-c1193fb9dc97', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}, {'index': '0', 'iface': 'virtio', 'format': 'raw', 'poolID': '00000000-0000-0000-0000-000000000000', 'volumeID': '6afa3b19-7a1a-4e5c-a681-eed756d316e9', 'imageID': '94628710-cf73-4589-bd84-e58f741a4d5f', 'specParams': {}, 'readonly': 'false', 'domainID': '555ad71c-1a4e-42b3-af8c-db39d9b9df67', 'optional': 'false', 'deviceId': '6afa3b19-7a1a-4e5c-a681-eed756d316e9', 'address': {'bus': '0x00', 'slot': '0x06', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'shared': 'exclusive', 'propagateErrors': 'off', 'type': 'disk', 'bootOrder': '1'}, {'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': '00:16:3e:3b:3f:14', 'linkActive': 'true', 'network': 'ovirtmgmt', 'specParams': {}, 'deviceId': '002afd06-9649-4ac5-a5e8-1a4945c3c136', 'address': {'bus': '0x00', 'slot': '0x03', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'device': 'console', 'type': 'console'}, {'device': 'vga', 'alias': 'video0', 'type': 'video'}, {'device': 'vnc', 'type': 'graphics'}, {'device': 'virtio', 'specParams': {'source': 'urandom'}, 'model': 'virtio', 'type': 'rng'}]}} failed:
(code=100, message=General Exception: ("'xml'",))
VM failed to launch
[root@lnxvirt01 ~]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=lnxvirt-engine.classe.cornell.edu
vm_disk_id=94628710-cf73-4589-bd84-e58f741a4d5f
vm_disk_vol_id=6afa3b19-7a1a-4e5c-a681-eed756d316e9
vmid=e6370d8f-c083-4f28-83d0-a232d693e07a
storage=192.168.56.50,192.168.56.51,192.168.56.52,192.168.56.53
nfs_version=
mnt_options=
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=8
console=vnc
domainType=iscsi
spUUID=00000000-0000-0000-0000-000000000000
sdUUID=555ad71c-1a4e-42b3-af8c-db39d9b9df67
connectionUUID=e29cf818-5ee5-46e1-85c1-8aeefa33e95d
vdsm_use_ssl=true
gateway=192.168.55.1
bridge=ovirtmgmt
network_test=dns
tcp_t_address=
tcp_t_port=
metadata_volume_UUID=2bf987a2-ab81-454c-9fc7-dc7ec8945fd9
metadata_image_UUID=35429b63-16ca-417a-b87a-d232463bf6a3
lockspace_volume_UUID=b0d09780-2047-433c-812d-10ba0beff788
lockspace_image_UUID=8ccb878d-9938-43c8-908b-e1b416fe991c
conf_volume_UUID=0b40ac60-499e-4ff1-83d0-fc578f1af3dc
conf_image_UUID=551d4fe5-a9f7-4ba1-9951-87418362b434
# The following are used only for iSCSI storage
iqn=iqn.2002-10.com.infortrend:raid.uid58207.001
portal=1
user=
password=
port=3260,3260,3260,3260
[root@lnxvirt01 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host lnxvirt06.classe.cornell.edu (id: 1) status ==--
Host ID : 1
Host timestamp : 3718817
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : lnxvirt06.classe.cornell.edu
Local maintenance : False
stopped : False
crc32 : 233a1425
conf_on_shared_storage : True
local_conf_timestamp : 3718818
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3718817 (Wed Nov 1 12:26:35 2023)
host-id=1
score=3400
vm_conf_refresh_time=3718818 (Wed Nov 1 12:26:37 2023)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host lnxvirt05.classe.cornell.edu (id: 2) status ==--
Host ID : 2
Host timestamp : 3719461
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : lnxvirt05.classe.cornell.edu
Local maintenance : False
stopped : False
crc32 : b3c81abe
conf_on_shared_storage : True
local_conf_timestamp : 3719462
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3719461 (Wed Nov 1 12:26:41 2023)
host-id=2
score=3400
vm_conf_refresh_time=3719462 (Wed Nov 1 12:26:42 2023)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host lnxvirt04.classe.cornell.edu (id: 3) status ==--
Host ID : 3
Host timestamp : 3718684
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : lnxvirt04.classe.cornell.edu
Local maintenance : False
stopped : False
crc32 : 03a57b14
conf_on_shared_storage : True
local_conf_timestamp : 3718686
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3718684 (Wed Nov 1 12:26:41 2023)
host-id=3
score=3400
vm_conf_refresh_time=3718686 (Wed Nov 1 12:26:43 2023)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host lnxvirt03.classe.cornell.edu (id: 4) status ==--
Host ID : 4
Host timestamp : 3719430
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : lnxvirt03.classe.cornell.edu
Local maintenance : False
stopped : False
crc32 : adb1aad2
conf_on_shared_storage : True
local_conf_timestamp : 3719432
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3719430 (Wed Nov 1 12:26:35 2023)
host-id=4
score=3400
vm_conf_refresh_time=3719432 (Wed Nov 1 12:26:36 2023)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host lnxvirt02.classe.cornell.edu (id: 5) status ==--
Host ID : 5
Host timestamp : 3719408
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : lnxvirt02.classe.cornell.edu
Local maintenance : False
stopped : False
crc32 : 1996a067
conf_on_shared_storage : True
local_conf_timestamp : 3719410
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=3719408 (Wed Nov 1 12:26:39 2023)
host-id=5
score=3400
vm_conf_refresh_time=3719410 (Wed Nov 1 12:26:41 2023)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host lnxvirt07.classe.cornell.edu (id: 7) status ==--
Host ID : 7
Host timestamp : 495392
Score : 0
Engine status : unknown stale-data
Hostname : lnxvirt07.classe.cornell.edu
Local maintenance : False
stopped : True
crc32 : 2572e907
conf_on_shared_storage : True
local_conf_timestamp : 495352
Status up-to-date : False
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=495392 (Tue Oct 31 10:20:12 2023)
host-id=7
score=0
vm_conf_refresh_time=495352 (Tue Oct 31 10:19:33 2023)
conf_on_shared_storage=True
maintenance=False
state=AgentStopped
stopped=True
--== Host lnxvirt01.classe.cornell.edu (id: 8) status ==--
Host ID : 8
Host timestamp : 1729103
Score : 3400
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : lnxvirt01.classe.cornell.edu
Local maintenance : False
stopped : False
crc32 : 2e57e99d
conf_on_shared_storage : True
local_conf_timestamp : 1729104
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1729103 (Wed Nov 1 12:26:31 2023)
host-id=8
score=3400
vm_conf_refresh_time=1729104 (Wed Nov 1 12:26:33 2023)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
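A hedged next step, given the agent already falls back to the initial vm.conf above: try starting the engine VM from an explicit vm.conf (hosted-engine accepts --vm-conf for this, as far as I know) and watch the vdsm and agent logs for the full traceback behind the "'xml'" exception. Paths are the defaults referenced in hosted-engine.conf above:

hosted-engine --vm-start --vm-conf=/var/run/ovirt-hosted-engine-ha/vm.conf
tail -f /var/log/vdsm/vdsm.log /var/log/ovirt-hosted-engine-ha/agent.log   # look for the traceback producing General Exception: ("'xml'",)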
Hosted-engine restore failing when migrating to new storage domain
by Devin A. Bougie
Hello,
We have a functioning oVirt 4.5.4 cluster running on fully-updated EL9.2 hosts. We are trying to migrate the self-hosted engine to a new iSCSI storage domain using the existing hosts, following the documented procedure:
- set the cluster into global maintenance mode
- backup the engine using "engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log"
- shut down the engine
- restore the engine using "hosted-engine --deploy --4 --restore-from-file=backup.bck"
This almost works, but fails with the attached log file. Any help or suggestions would be greatly appreciated, including alternate procedures for migrating a self-hosted engine from one domain to another.
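For reference, the sequence corresponding to the steps above, roughly (the new iSCSI target details are prompted for interactively during the deploy; the maintenance and shutdown commands are the standard hosted-engine ones):

hosted-engine --set-maintenance --mode=global                                   # on an HE host
engine-backup --scope=all --mode=backup --file=backup.bck --log=backuplog.log   # on the engine VM
hosted-engine --vm-shutdown                                                     # shut down the old engine VM
hosted-engine --deploy --4 --restore-from-file=backup.bck                       # deploy against the new iSCSI storage domain
hosted-engine --set-maintenance --mode=none                                     # once the restored engine is up and healthy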
Many thanks,
Devin
Call for participation: Virtualization and Cloud infrastructure Room at FOSDEM 2024
by Piotr Kliczewski
We are excited to announce that the call for proposals is now open for the
Virtualization and Cloud infrastructure devroom at the upcoming FOSDEM
2024, to be hosted on February 3rd 2024.
This devroom is a collaborative effort, and is organized by dedicated folks
from projects such as OpenStack, Xen Project, KubeVirt, QEMU, KVM, and
Foreman. We would like to invite all those who are involved in these fields
to submit your proposals by December 8th, 2023.
About the Devroom
The Virtualization & IaaS devroom will feature session topics such as open
source hypervisors or virtual machine managers such as Xen Project, KVM,
bhyve and VirtualBox as well as Infrastructure-as-a-Service projects such
as KubeVirt, Apache CloudStack, OpenStack, QEMU and OpenNebula.
This devroom will host presentations that focus on topics of shared
interest, such as KVM; libvirt; shared storage; virtualized networking;
cloud security; clustering and high availability; interfacing with multiple
hypervisors; hyperconverged deployments; and scaling across hundreds or
thousands of servers.
Presentations in this devroom will be aimed at developers working on these
platforms who are looking to collaborate and improve shared infrastructure
or solve common problems. We seek topics that encourage dialog between
projects and continued work post-FOSDEM.
Important Dates
Submission deadline: 8th December 2023
Acceptance notifications: 10th December 2023
Final schedule announcement: 15th December 2023
Devroom: 3rd February 2024
Submit Your Proposal
All submissions must be made via the Pretalx event planning site[1]. It is
a new submission system so you will need to create an account. If you
submitted proposals for FOSDEM in previous years, you won’t be able to use
your existing account.
During submission please make sure to select Virtualization and Cloud
infrastructure from the Track list. Please fill out all the required
fields, and provide a meaningful abstract and description of your proposed
session.
Submission Guidelines
We expect more proposals than we can possibly accept, so it is vitally
important that you submit your proposal on or before the deadline. Late
submissions are unlikely to be considered.
All presentation slots are 30 minutes, with 20 minutes planned for
presentations, and 10 minutes for Q&A.
All presentations will be recorded and made available under Creative
Commons licenses. In the Submission notes field, please indicate that you
agree that your presentation will be licensed under the CC-By-SA-4.0 or
CC-By-4.0 license and that you agree to have your presentation recorded.
For example:
"If my presentation is accepted for FOSDEM, I hereby agree to license all
recordings, slides, and other associated materials under the Creative
Commons Attribution Share-Alike 4.0 International License.
Sincerely,
<NAME>."
In the Submission notes field, please also confirm that if your talk is
accepted, you will be able to attend FOSDEM and deliver your presentation.
We will not consider proposals from prospective speakers who are unsure
whether they will be able to secure funds for travel and lodging to attend
FOSDEM. (Sadly, we are not able to offer travel funding for prospective
speakers.)
Code of Conduct
Following the release of the updated code of conduct for FOSDEM, we'd like
to remind all speakers and attendees that all of the presentations and
discussions in our devroom are held under the guidelines set in the CoC and
we expect attendees, speakers, and volunteers to follow the CoC at all
times.
If you submit a proposal and it is accepted, you will be required to
confirm that you accept the FOSDEM CoC. If you have any questions about the
CoC or wish to have one of the devroom organizers review your presentation
slides or any other content for CoC compliance, please email us and we will
do our best to assist you.
Questions?
If you have any questions about this devroom, please send your questions to
our devroom mailing list[2]. You can also subscribe to the list to receive
updates about important dates, session announcements, and to connect with
other attendees.
See you all at FOSDEM!
[1] https://pretalx.fosdem.org/fosdem-2024/cfp
[2] virtualization-devroom-manager at fosdem.org