Renewing certificates - how does it affect running VMs?
by markeczzz@gmail.com
Hi!
Simple question - does the procedure for renewing certificates affect running virtual machines?
I have a question regarding renewing certificates that are about to expire. I have a cluster with 5 hosts and a self-hosted engine. I have already updated the host certificates by logging in to each host, migrating virtual machines from that host to another, putting it into maintenance mode, and enrolling certificates via the web interface.
Now I have a question about renewing the Engine certificates (self-signed). I know the procedure is to log in to the host where the hosted engine is running (Host1 in my case), put it into global maintenance mode with "hosted-engine --set-maintenance --mode=global", and then log in to the Engine and run "engine-setup --offline" to renew them, roughly as sketched below.
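For clarity, this is the full sequence I intend to run (the final step for leaving global maintenance afterwards is my assumption):
```
# On Host1, where the hosted engine VM runs: enable global maintenance
hosted-engine --set-maintenance --mode=global

# On the engine VM: renew the self-signed certificates without touching packages
engine-setup --offline

# Back on Host1: leave global maintenance once engine-setup has finished
hosted-engine --set-maintenance --mode=none
```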
My question is: will running these commands affect the virtual machines running on Host1 (where the hosted engine also runs) or the virtual machines on the other hosts? Will they be shut down? Should I first migrate the virtual machines from Host1 to another host and then run the commands?
I have a lot of virtual machines that I can't shut down or restart; is there any way to do this without that?
Regards,
Failure to deploy oVirt node with security profile (PCI-DSS)
by Guillaume Pavese
Hi oVirt community
I tried the el8 and el9 oVirt Node 4.5.4 ISOs, but in both cases the installation failed when selecting the PCI-DSS security profile. Please see the attached screenshots.
According to the 4.5.0 release notes, this is a supported feature:
*BZ 2030226 [RFE] oVirt hypervisors should support running on hosts with the PCI-DSS security profile applied*
*The oVirt Hypervisor is now capable of running on machine with PCI-DSS security profile.*
https://bugzilla.redhat.com/show_bug.cgi?id=2030226
As the RFE says that deployment works, I guess this is a regression
somewhere between 4.5.0 & 4.5.4
Is there a chance that this bug gets fixed? Is it up to the community?
In the meantime, what can we do to deploy hosts with a PCI-DSS profile?
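One workaround I am considering is to install the node without the profile and apply the PCI-DSS remediation afterwards with OpenSCAP, roughly like this (the data-stream path below is the el8 one and just my assumption for oVirt Node):
```
# Scan first to see how far the host is from the PCI-DSS baseline
oscap xccdf eval \
    --profile xccdf_org.ssgproject.content_profile_pci-dss \
    --report /tmp/pci-dss-report.html \
    /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml

# Then apply the automatic remediation and reboot
oscap xccdf eval --remediate \
    --profile xccdf_org.ssgproject.content_profile_pci-dss \
    /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
```
Would that be an acceptable interim approach, or is post-install remediation known to break oVirt Node?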
Best regards,
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
Issue with hyper-converged node and rhel9
by Robert Pouliot
Hi,
I have a 3-node cluster with gluster (two nodes on rhel 8.7 and one on rhel 9.1).
I updated one of my nodes to rhel9, and now I'm unable to start any VM stored on gluster volumes on the rhel9 node, or to migrate a running VM to the rhel9 node.
However, I am able to start a VM stored on an NFS domain on the rhel9 node.
According to gluster (10.3) all 3 nodes are working fine.
I don't want to render my setup useless, which is why I haven't moved the other two nodes to rhel9 yet.
wells = rhel9 node.
In the event log I'm getting the following error:
Failed to sync storage devices from host wells
VDSM wells command GetStorageDeviceListVDS failed: Internal JSON-RPC error
Here is a part of the vdsm.log:
2023-04-03 23:25:01,242-0400 INFO (vm/69f6480f) [vds] prepared volume
path: (clientIF:506)
2023-04-03 23:25:01,243-0400 INFO (vm/69f6480f) [vdsm.api] START
prepareImage(sdUUID='a874a247-d8de-4951-86e8-99aaeda1a510',
spUUID='595abf38-2c6d-11eb-9ba9-3ca82afed888',
imgUUID='69de5e2b-dfb7-4e47-93d0-65ed4f978017',
leafUUID='e66df35e-1254-48ef-bb83-ae8957fe8651', allowIllegal=False)
from=internal, task_id=bf3e89e0-5d4e-47a1-8d86-3933c5004d25 (api:31)
2023-04-03 23:25:01,243-0400 INFO (vm/69f6480f) [vdsm.api] FINISH
prepareImage error=Unknown pool id, pool not connected:
('595abf38-2c6d-11eb-9ba9-3ca82afed888',) from=internal,
task_id=bf3e89e0-5d4e-47a1-8d86-3933c5004d25 (api:35)
2023-04-03 23:25:01,244-0400 INFO (vm/69f6480f) [storage.taskmanager.task]
(Task='bf3e89e0-5d4e-47a1-8d86-3933c5004d25') aborting: Task is aborted:
"value=Unknown pool id, pool not connected:
('595abf38-2c6d-11eb-9ba9-3ca82afed888',) abortedcode=309" (task:1165)
2023-04-03 23:25:01,244-0400 INFO (vm/69f6480f) [storage.dispatcher]
FINISH prepareImage error=Unknown pool id, pool not connected:
('595abf38-2c6d-11eb-9ba9-3ca82afed888',) (dispatcher:64)
2023-04-03 23:25:01,244-0400 ERROR (vm/69f6480f) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') The vm start process failed
(vm:1001)
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 928, in
_startUnderlyingVm
self._run()
File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 2769, in
_run
self._devices = self._make_devices()
File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 2649, in
_make_devices
disk_objs = self._perform_host_local_adjustment()
File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 2687, in
_perform_host_local_adjustment
self._preparePathsForDrives(disk_params)
File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 1109, in
_preparePathsForDrives
drive['path'] = self.cif.prepareVolumePath(
File "/usr/lib/python3.9/site-packages/vdsm/clientIF.py", line 418, in
prepareVolumePath
raise vm.VolumeError(drive)
vdsm.virt.vm.VolumeError: Bad volume specification {'device': 'disk',
'type': 'disk', 'diskType': 'file', 'specParams': {}, 'alias':
'ua-69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'address': {'bus': '0',
'controller': '0', 'target': '0', 'type': 'drive', 'unit': '0'},
'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510', 'guestName':
'/dev/sdb', 'imageID': '69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'poolID':
'595abf38-2c6d-11eb-9ba9-3ca82afed888', 'volumeID':
'e66df35e-1254-48ef-bb83-ae8957fe8651', 'managed': False, 'volumeChain':
[{'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510', 'imageID':
'69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'leaseOffset': 0, 'leasePath':
'/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651.lease',
'path': '/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'volumeID': 'e66df35e-1254-48ef-bb83-ae8957fe8651'}], 'path':
'/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'discard': False, 'format': 'raw', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'scsi', 'name': 'sda', 'bootOrder': '1', 'serial':
'69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'index': 0, 'reqsize': '0',
'truesize': '5331688448', 'apparentsize': '42949672960'}
2023-04-03 23:25:01,244-0400 INFO (vm/69f6480f) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') Changed state to Down: Bad
volume specification {'device': 'disk', 'type': 'disk', 'diskType': 'file',
'specParams': {}, 'alias': 'ua-69de5e2b-dfb7-4e47-93d0-65ed4f978017',
'address': {'bus': '0', 'controller': '0', 'target': '0', 'type': 'drive',
'unit': '0'}, 'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510',
'guestName': '/dev/sdb', 'imageID': '69de5e2b-dfb7-4e47-93d0-65ed4f978017',
'poolID': '595abf38-2c6d-11eb-9ba9-3ca82afed888', 'volumeID':
'e66df35e-1254-48ef-bb83-ae8957fe8651', 'managed': False, 'volumeChain':
[{'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510', 'imageID':
'69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'leaseOffset': 0, 'leasePath':
'/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651.lease',
'path': '/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'volumeID': 'e66df35e-1254-48ef-bb83-ae8957fe8651'}], 'path':
'/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'discard': False, 'format': 'raw', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'scsi', 'name': 'sda', 'bootOrder': '1', 'serial':
'69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'index': 0, 'reqsize': '0',
'truesize': '5331688448', 'apparentsize': '42949672960'} (code=1) (vm:1743)
2023-04-03 23:25:01,245-0400 INFO (vm/69f6480f) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') Stopping connection
(guestagent:421)
2023-04-03 23:25:01,252-0400 INFO (jsonrpc/7) [api.virt] START
destroy(gracefulAttempts=1) from=::ffff:192.168.2.24,56838,
vmId=69f6480f-5ee0-4210-84b9-58b4dbb4e423 (api:31)
2023-04-03 23:25:01,252-0400 INFO (jsonrpc/7) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') Release VM resources (vm:5324)
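For what it's worth, this is roughly how I verified that gluster itself looks healthy on wells (the volume name is taken from the mount path in the log above):
```
# Gluster side: peers and the volume the VM images live on
gluster peer status
gluster volume status GlusterVolLow
gluster volume heal GlusterVolLow info summary

# Check that the volume is mounted where vdsm expects it
mount | grep glusterSD
ls /rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/

# Service state on the wells node
systemctl status glusterd vdsmd
```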
ovirt 4.6 compatibility with el8
by Nathanaël Blanchet
Hello community!
I noticed that rhel9 no longer supports ISP2532-based 8Gb Fibre Channel HBAs.
Given that all of my ovirt hosts are ISP2532-based, I'd like to know whether ovirt 4.6 will still be compatible with el8 or will only support el9.
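For context, this is how I check that a host has one of the affected HBAs (matching on the QLogic ISP2532 PCI device ID):
```
# List Fibre Channel HBAs; ISP2532-based cards show the QLogic 2532 device ID
lspci -nn | grep -i "fibre channel"

# Check whether the installed qla2xxx driver still claims that device ID
modinfo qla2xxx | grep -i 2532
```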
--
Nathanaël Blanchet
Systems and Network Administrator
Service Informatique et REseau (SIRE)
Information Systems Department
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
oVirt hosted-engine deployment times out while "Wait for the host to be up"
by brwsergmslst@gmail.com
Hi all,
I am currently trying to deploy the hosted engine, without success.
Unfortunately I cannot see what I am missing here. It would be nice if you could take a look and help me out. Below is an excerpt of the generated log file.
```
2023-03-22 20:34:10,417+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Get active list of active firewalld zones]
2023-03-22 20:34:12,222+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:13,527+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Configure libvirt firewalld zone]
2023-03-22 20:34:20,246+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:21,550+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Reload firewall-cmd]
2023-03-22 20:34:23,957+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:25,462+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Add host]
2023-03-22 20:34:27,469+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 changed: [localhost]
2023-03-22 20:34:28,672+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Include after_add_host tasks files]
2023-03-22 20:34:30,678+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 TASK [ovirt.ovirt.hosted_engine_setup : Let the user connect to the bootstrap engine VM to manually fix host configuration]
2023-03-22 20:34:31,882+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 skipping: [localhost]
2023-03-22 20:34:32,987+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
2023-03-22 20:34:33,890+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 skipping: [localhost]
2023-03-22 20:34:34,893+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
2023-03-22 20:34:36,096+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:37,100+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Always revoke the SSO token]
2023-03-22 20:34:38,705+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:40,111+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
2023-03-22 20:34:41,015+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:42,020+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
2023-03-22 20:34:44,028+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2023-03-22 20:34:45,032+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Wait for the host to be up]
2023-03-22 20:56:33,746+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': False, 'ovirt_hosts': [{'href': '/ovirt-engine/api/hosts/17ad8088-c9e9-433f-90b9-ce8023a625e6', 'comment': '', '
id': '17ad8088-c9e9-433f-90b9-ce8023a625e6', 'name': 'vhost-tmp01.example.com', 'address': 'vhost-tmp01.example.com', 'affinity_labels': [], 'auto_numa_status': 'unknown', 'certificate': {'organization': 'example.com', 'subject': 'O=example.com,CN=vhost-tmp01.example.com'}, 'cluster': {'href': '/ovirt-engine/api/clusters/4effacb7-e5dd-4e52-86c9-90ebd2aafa0d', 'id': '4effacb7-e5dd-4e52-86c9-90ebd2aafa0d'}, 'cpu
': {'speed': 0.0, 'topology': {}}, 'cpu_units': [], 'device_passthrough': {'enabled': False}, 'devices': [], 'external_network_provider_configurations': [], 'external_status': 'ok', 'hardware_information': {'supported_rng_sources': []}, 'h
ooks': [], 'katello_errata': [], 'kdump_status': 'unknown', 'ksm': {'enabled': False}, 'max_scheduling_memory': 0, 'memory': 0, 'network_attachments': [], 'nics': [], 'numa_nodes': [], 'numa_supported': False, 'os': {'custom_kernel_cmdline
': ''}, 'ovn_configured': False, 'permissions': [], 'port': 54321, 'power_management': {'automatic_pm_enabled': True, 'enabled': False, 'kdump_detection': True, 'pm_proxies': []}, 'protocol': 'stomp', 'reinstallation_required': False, 'se_linux': {}, 'spm': {'priority': 5, 'status': 'none'}, 'ssh': {'fingerprint': 'SHA256:IDhzMTO49OoNIHmLGnOIGwTn8LQB/lYrJdUnak144Q4', 'port': 22, 'public_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNWbUlwbIiSliqc6OCFwG4w6/OaJb63JijJ1okaj6Y3gqNPO2XZWDTfwraIqm0S0SGlVk/g0oYcYIQ/0hU5Q+bE='}, 'statistics': [], 'status': 'install_failed', 'storage_connection_extensions': [], 'summary': {'total': 0}, 'tags': [], 'transparent_huge_pages': {'enabled': False}, 'type': 'rhel', 'unmanaged_networks': [], 'update_available': False, 'vgpu_placement': 'consolidated'}], 'invocation': {'module_args': {'pattern': 'name=vhost-tmp01.example.com', 'fetch_nested': False, 'nested_attributes': [], 'follow': [], 'all_content': False, 'cluster
_version': None}}, '_ansible_no_log': None, 'attempts': 120}
2023-03-22 20:56:33,847+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 ignored: [localhost]: FAILED! => {"attempts": 120, "changed": false, "ovirt_hosts": [{"address": "vhost-tmp01.example.com", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "example.com", "subject": "O=example.com,CN=vhost-tmp01.example.com"}, "cluster": {"href": "/ovirt-engine/api/clusters/4effacb7-e5dd-4e52-86c9-90ebd2aafa0d", "id": "4effacb7-e5dd-4e52-86c9-90ebd2aafa0d"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "cpu_units": [], "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/17ad8088-c9e9-433f-90b9-ce8023a625e6", "id": "17ad8088-c9e9-433f-90b9-ce8023a625e6", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_
scheduling_memory": 0, "memory": 0, "name": "vhost-tmp01.example.com", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "ovn_configured": false, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "reinstallation_required": false, "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:IDhzMTO49OoNIHmLGnOIGwTn8LQB/lYrJdUnak144Q4", "port": 22, "public_key": "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNWbUlwbIiSliqc6OCFwG4w6/OaJb63JijJ1okaj6Y3gqNPO2XZWDTfwraIqm0S0SGlVk/g0oYcYIQ/0hU5Q+bE="}, "statistics": [], "status": "install_failed", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "rhel", "unmanaged_networks": [], "update_available": false, "vgpu_placement": "
consolidated"}]}
2023-03-22 20:56:34,750+0100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
2023-03-22 20:56:35,655+0100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'msg': 'Host is not up, please check logs, perhaps also on the engine machine', '_ansible_no_log': None, 'changed': False}
2023-03-22 20:56:35,755+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not up, please check logs, perhaps also on the engine machine"}
```
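For completeness, these are the places I have been checking so far without spotting anything obvious (assuming these are the right logs to look at):
```
# On the host: vdsm state around the time of the failure
systemctl status vdsmd supervdsmd
journalctl -u vdsmd --since "2023-03-22 20:30"
tail -n 200 /var/log/vdsm/vdsm.log

# On the bootstrap engine VM: host-deploy and engine logs
ls -lt /var/log/ovirt-engine/host-deploy/
tail -n 200 /var/log/ovirt-engine/engine.log
```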
Thank you in advance.
Best regards