virt-viewer - working alternative
by lars.stolpe@bvg.de
Hi all,
I tried to use virt-viewer 11 (Windows) with oVirt 4.4. I also tried virt-viewer 10, 9, 8, and 7.
The older versions can't handle ISOs residing in data domains, and the latest version crashes when I try the media button.
Almost all links on the website are dead; there is no user guide and no prerequisites list...
Since virt-viewer is not working properly and seems to be abandoned, I'm looking for a working alternative.
Is there a working way to get a console of the virtual machines running in oVirt 4.4?
1 year, 11 months
label settings
by ziyi Liu
I have many users, and each user manages different devices.
I created a label for each device and assigned that label to the corresponding user. The user has to manually activate the label every time they log in. How can the label be activated by default?
1 year, 11 months
4.4.9 -> 4.4.10 Cannot start or migrate any VM (hotpluggable cpus requested exceeds the maximum cpus supported by KVM)
by Jillian Morgan
After upgrading the engine from 4.4.9 to 4.4.10, and then upgrading one
host, any attempt to migrate a VM to that host or start a VM on that host
results in the following error:
Number of hotpluggable cpus requested (16) exceeds the maximum cpus
supported by KVM (8)
While the version of qemu is the same across hosts
(qemu-kvm-6.0.0-33.el8s.x86_64), I traced the difference to the upgraded
kernel on the new host. I have always run elrepo's kernel-ml on these hosts
to support bcache, which RHEL's kernel doesn't support. The working hosts
still run kernel-ml-5.15.12; the upgraded host ran kernel-ml-5.17.0.
In case anyone else runs kernel-ml, have you run into this issue?
Does anyone know why KVM's KVM_CAP_MAX_VCPUS value is lowered on the new
kernel?
Does anyone know how to query KVM capabilities from userspace without
writing a program that calls the kvm ioctls directly?
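For reference, the programmatic version of that check would look roughly like the sketch below, using the KVM_CHECK_EXTENSION ioctl on /dev/kvm. The request number and capability constants are copied from linux/kvm.h and should be double-checked against your kernel headers:

#!/usr/bin/env python3
# Minimal sketch: query KVM vCPU limits from userspace via KVM_CHECK_EXTENSION.
# Constants below are taken from linux/kvm.h; verify against your headers.
import fcntl
import os

KVM_CHECK_EXTENSION = 0xAE03   # _IO(KVMIO=0xAE, 0x03)
KVM_CAP_NR_VCPUS = 9           # recommended max vCPUs per VM
KVM_CAP_MAX_VCPUS = 66         # absolute max vCPUs per VM

fd = os.open("/dev/kvm", os.O_RDWR)
try:
    for name, cap in (("KVM_CAP_NR_VCPUS", KVM_CAP_NR_VCPUS),
                      ("KVM_CAP_MAX_VCPUS", KVM_CAP_MAX_VCPUS)):
        # KVM_CHECK_EXTENSION returns the capability's value (0 if unsupported).
        print(name, "=", fcntl.ioctl(fd, KVM_CHECK_EXTENSION, cap))
finally:
    os.close(fd)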
Related to this, it seems that ovirt and/or libvirtd always runs qemu-kvm
with an -smp argument of "maxcpus=16". This causes qemu's built-in check to
fail on the new kernel, which only supports a maximum of 8 vCPUs.
Why does ovirt always request maxcpus=16?
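For context, my understanding is that the maxcpus value qemu is started with comes from the maximum in the domain's <vcpu> element, which oVirt presumably derives from its CPU hot-plug limits. A rough way to see what was actually requested for a given VM (a sketch assuming the libvirt python bindings are installed on the host and "myvm" is the name libvirt uses for the domain):

#!/usr/bin/env python3
# Rough sketch: show a domain's vCPU maximum (maps to qemu's -smp maxcpus=...)
# and the currently plugged vCPU count. "myvm" is a placeholder name.
import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("myvm")
root = ET.fromstring(dom.XMLDesc())
vcpu = root.find("vcpu")
print("maxvcpus:", vcpu.text, "current:", vcpu.get("current"))
conn.close()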
And yes, before you say it, I know you're going to say that running
kernel-ml isn't supported.
--
Jillian Morgan (she/her) 🏳️⚧️
Systems & Networking Specialist
Primordial Software Group & I.T. Consultancy
https://www.primordial.ca
1 year, 11 months
unable to connect to the graphic server with VNC graphics protocol
by Devin A. Bougie
Hello,
After upgrading our cluster to oVirt 4.5 on EL9 hosts, we had to switch from QXL and SPICE to VGA and VNC, as EL9 dropped support for SPICE. However, we are now unable to view the console from Windows, Linux, or macOS.
For example, after downloading and opening the console.vv file with Remote Viewer (virt-viewer) on Linux, we see:
Unable to connect to the graphic server.
Any help connecting to a VM using a VGA/VNC console from Windows, Mac, or Linux would be greatly appreciated.
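For reference, one quick check that might narrow this down is to look at what the downloaded console.vv actually points at and whether that host/port is reachable from the client at all. A rough diagnostic sketch (it assumes the usual INI-style [virt-viewer] section with host= and port= keys; adjust the file path):

#!/usr/bin/env python3
# Rough diagnostic: read host/port from a downloaded console.vv and test
# whether the VNC port is reachable. File path and key names are assumptions.
import configparser
import socket

cfg = configparser.ConfigParser()
cfg.read("console.vv")                  # path to the downloaded file
section = cfg["virt-viewer"]

host = section.get("host")
port = int(section.get("port", "0"))
print("type:", section.get("type"), "host:", host, "port:", port)

# A timeout or refusal here points at networking/firewalling or the host-side
# VNC listener rather than the viewer itself.
with socket.create_connection((host, port), timeout=5):
    print("TCP connection to %s:%s succeeded" % (host, port))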
Many thanks,
Devin
1 year, 11 months
Ovirt CSI driver and volume snapshot
by Nathanaël Blanchet
Hello,
Given that the oVirt CSI driver currently doesn't support volume snapshots, and consequently OpenShift Virtualization doesn't support live snapshots, is there a chance that OpenShift Virtualization will support volume snapshots with an oVirt CSI storage backend in the future?
1 year, 11 months
Future of ovirt - end-of-life RHV
by Margit Meyer
What about the future of ovirt when RHV reaches end of life?
Is there a chance it can still be used after that?
Red Hat OpenShift will not be an option for us....
Margit
1 year, 11 months
Renewing certificates - How it affects running vms?
by markeczzz@gmail.com
Hi!
Simple question: does the procedure to renew certificates affect running virtual machines?
I have a question regarding renewing certificates that are about to expire. I have a cluster with 5 hosts and a self-hosted engine. I have already updated the host certificates by logging in to each host, migrating its virtual machines to another host, putting it into maintenance mode, and enrolling certificates via the web interface.
Now I have a question regarding renewing the Engine certificates (self-signed). I know the procedure is to log in to the host where the hosted engine is running (Host1 in my case), put it into global maintenance mode with the command "hosted-engine --set-maintenance --mode=global", and then log in to the Engine and run "engine-setup --offline" to renew them.
My question is: will these two commands affect the running virtual machines on Host1 (where the hosted engine is also running) or the virtual machines on the other hosts? Will they be shut down? Should I first migrate the virtual machines from Host1 to another host and then run the commands?
I have a lot of virtual machines that I can't shut down or restart; is there any way to do this without that?
Regards,
1 year, 11 months
Failure to deploy oVirt node with security profile (PCIDSS)
by Guillaume Pavese
Hi oVirt community
I tried with both the el8 and el9 oVirt Node 4.5.4 ISOs,
but in both cases the installation failed when selecting the PCI-DSS
security profile. Please see the screenshots attached.
According to the 4.5.0 release notes this is a supported feature:
BZ 2030226 [RFE] oVirt hypervisors should support running on hosts with
the PCI-DSS security profile applied:
"The oVirt Hypervisor is now capable of running on machine with PCI-DSS
security profile."
https://bugzilla.redhat.com/show_bug.cgi?id=2030226
As the RFE says that deployment works, I guess this is a regression
somewhere between 4.5.0 and 4.5.4.
Is there a chance that this bug gets fixed? Is it up to the community?
In the meantime, what can we do to deploy hosts with a PCI-DSS profile?
Best regards,
Guillaume Pavese
Systems and Network Engineer
Interactiv-Group
1 year, 11 months
Issue with hyper-converged node and rhel9
by Robert Pouliot
Hi,
I have a 3 node cluster with gluster (2 rhel 8.7 and one rhel 9.1).
I updated one of my nodes to rhel9, and now I'm unable to start any VM
whose disks are on gluster volumes on the rhel9 node, or to migrate a
running VM to the rhel9 node.
However, I am able to start a VM stored on NFS on the rhel9 node.
According to gluster (10.3), all 3 nodes are working fine.
I don't want to render my setup useless; that's why I didn't move the 2
other nodes to rhel9 yet.
wells = rhel9 node.
In the event log I'm getting the following error:
Failed to sync storage devices from host wells
VDSM wells command GetStorageDeviceListVDS failed: Internal JSON-RPC error
Here is a part of the vdsm.log:
2023-04-03 23:25:01,242-0400 INFO (vm/69f6480f) [vds] prepared volume
path: (clientIF:506)
2023-04-03 23:25:01,243-0400 INFO (vm/69f6480f) [vdsm.api] START
prepareImage(sdUUID='a874a247-d8de-4951-86e8-99aaeda1a510',
spUUID='595abf38-2c6d-11eb-9ba9-3ca82afed888',
imgUUID='69de5e2b-dfb7-4e47-93d0-65ed4f978017',
leafUUID='e66df35e-1254-48ef-bb83-ae8957fe8651', allowIllegal=False)
from=internal, task_id=bf3e89e0-5d4e-47a1-8d86-3933c5004d25 (api:31)
2023-04-03 23:25:01,243-0400 INFO (vm/69f6480f) [vdsm.api] FINISH
prepareImage error=Unknown pool id, pool not connected:
('595abf38-2c6d-11eb-9ba9-3ca82afed888',) from=internal,
task_id=bf3e89e0-5d4e-47a1-8d86-3933c5004d25 (api:35)
2023-04-03 23:25:01,244-0400 INFO (vm/69f6480f) [storage.taskmanager.task]
(Task='bf3e89e0-5d4e-47a1-8d86-3933c5004d25') aborting: Task is aborted:
"value=Unknown pool id, pool not connected:
('595abf38-2c6d-11eb-9ba9-3ca82afed888',) abortedcode=309" (task:1165)
2023-04-03 23:25:01,244-0400 INFO (vm/69f6480f) [storage.dispatcher]
FINISH prepareImage error=Unknown pool id, pool not connected:
('595abf38-2c6d-11eb-9ba9-3ca82afed888',) (dispatcher:64)
2023-04-03 23:25:01,244-0400 ERROR (vm/69f6480f) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') The vm start process failed
(vm:1001)
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 928, in
_startUnderlyingVm
self._run()
File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 2769, in
_run
self._devices = self._make_devices()
File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 2649, in
_make_devices
disk_objs = self._perform_host_local_adjustment()
File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 2687, in
_perform_host_local_adjustment
self._preparePathsForDrives(disk_params)
File "/usr/lib/python3.9/site-packages/vdsm/virt/vm.py", line 1109, in
_preparePathsForDrives
drive['path'] = self.cif.prepareVolumePath(
File "/usr/lib/python3.9/site-packages/vdsm/clientIF.py", line 418, in
prepareVolumePath
raise vm.VolumeError(drive)
vdsm.virt.vm.VolumeError: Bad volume specification {'device': 'disk',
'type': 'disk', 'diskType': 'file', 'specParams': {}, 'alias':
'ua-69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'address': {'bus': '0',
'controller': '0', 'target': '0', 'type': 'drive', 'unit': '0'},
'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510', 'guestName':
'/dev/sdb', 'imageID': '69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'poolID':
'595abf38-2c6d-11eb-9ba9-3ca82afed888', 'volumeID':
'e66df35e-1254-48ef-bb83-ae8957fe8651', 'managed': False, 'volumeChain':
[{'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510', 'imageID':
'69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'leaseOffset': 0, 'leasePath':
'/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651.lease',
'path': '/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'volumeID': 'e66df35e-1254-48ef-bb83-ae8957fe8651'}], 'path':
'/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'discard': False, 'format': 'raw', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'scsi', 'name': 'sda', 'bootOrder': '1', 'serial':
'69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'index': 0, 'reqsize': '0',
'truesize': '5331688448', 'apparentsize': '42949672960'}
2023-04-03 23:25:01,244-0400 INFO (vm/69f6480f) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') Changed state to Down: Bad
volume specification {'device': 'disk', 'type': 'disk', 'diskType': 'file',
'specParams': {}, 'alias': 'ua-69de5e2b-dfb7-4e47-93d0-65ed4f978017',
'address': {'bus': '0', 'controller': '0', 'target': '0', 'type': 'drive',
'unit': '0'}, 'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510',
'guestName': '/dev/sdb', 'imageID': '69de5e2b-dfb7-4e47-93d0-65ed4f978017',
'poolID': '595abf38-2c6d-11eb-9ba9-3ca82afed888', 'volumeID':
'e66df35e-1254-48ef-bb83-ae8957fe8651', 'managed': False, 'volumeChain':
[{'domainID': 'a874a247-d8de-4951-86e8-99aaeda1a510', 'imageID':
'69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'leaseOffset': 0, 'leasePath':
'/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651.lease',
'path': '/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'volumeID': 'e66df35e-1254-48ef-bb83-ae8957fe8651'}], 'path':
'/rhev/data-center/mnt/glusterSD/192.168.8.11:_GlusterVolLow/a874a247-d8de-4951-86e8-99aaeda1a510/images/69de5e2b-dfb7-4e47-93d0-65ed4f978017/e66df35e-1254-48ef-bb83-ae8957fe8651',
'discard': False, 'format': 'raw', 'propagateErrors': 'off', 'cache':
'none', 'iface': 'scsi', 'name': 'sda', 'bootOrder': '1', 'serial':
'69de5e2b-dfb7-4e47-93d0-65ed4f978017', 'index': 0, 'reqsize': '0',
'truesize': '5331688448', 'apparentsize': '42949672960'} (code=1) (vm:1743)
2023-04-03 23:25:01,245-0400 INFO (vm/69f6480f) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') Stopping connection
(guestagent:421)
2023-04-03 23:25:01,252-0400 INFO (jsonrpc/7) [api.virt] START
destroy(gracefulAttempts=1) from=::ffff:192.168.2.24,56838,
vmId=69f6480f-5ee0-4210-84b9-58b4dbb4e423 (api:31)
2023-04-03 23:25:01,252-0400 INFO (jsonrpc/7) [virt.vm]
(vmId='69f6480f-5ee0-4210-84b9-58b4dbb4e423') Release VM resources (vm:5324)
1 year, 11 months
ovirt 4.6 compatibility with el8
by Nathanaël Blanchet
Hello community!
I noticed that rhel9 no longer supports ISP2532-based 8Gb
Fibre Channel HBAs.
Given that all of my ovirt hosts are ISP2532-based, I'd like to know
whether ovirt 4.6 will still be compatible with el8 or will only support el9.
--
Nathanaël Blanchet
Systems and Network Administrator
IT and Network Service (SIRE)
Information Systems Department
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
1 year, 11 months