boot from cdrom & error code 0005
by edp@maddalena.it
Hi.
I have created a new storage domain (data domain, storage type NFS) to use for uploading ISO images.
I then uploaded a new ISO and attached it to a new VM.
But when I try to boot the VM I get this error:
booting from dvd/cd...
boot failed: could not read from cdrom (code 0005)
no bootable device
The ISO file was uploaded successfully to the data storage domain, and so the VM lets me attach the ISO in its boot settings.
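In case it helps with diagnosis: I understand a truncated or unfinalized upload can produce exactly this kind of boot failure, so the stored image can be compared against the source on any host that has the domain mounted (the angle-bracket path components below are placeholders for my setup):

  # Compare the checksum of the source ISO with the copy stored in the domain.
  sha256sum /tmp/my-image.iso
  sha256sum /rhev/data-center/mnt/<nfs-server>:_<export>/<domain-uuid>/images/<disk-uuid>/<image-uuid>
  # The stored image should report format 'raw' and the full ISO size.
  qemu-img info /rhev/data-center/mnt/<nfs-server>:_<export>/<domain-uuid>/images/<disk-uuid>/<image-uuid>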
Can you help me?
Thank you
6 months, 3 weeks
VM Migration Failed
by KSNull Zero
Running oVirt 4.4.5
VM cannot migrate between hosts.
vdsm.log contains the following error:
libvirt.libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tls://ovhost01.local/system: authentication failed: Failed to verify peer's certificate
Certificates on the hosts were renewed some time ago. How can this issue be fixed?
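If it helps, this is what I can check on both hosts. As far as I know, vdsm points libvirtd at its own PKI rather than libvirt's defaults, and libvirtd.conf shows the authoritative paths; if a cert turns out expired, re-enrolling the host certificate from the Admin Portal (host in Maintenance, then Installation > Enroll Certificate) is, I believe, the usual fix:

  # Which TLS files does libvirtd actually use for qemu+tls?
  grep -E '^(ca_file|cert_file|key_file)' /etc/libvirt/libvirtd.conf
  # Check expiry and issuer of the cert paths printed above, e.g.:
  openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -enddate -issuer
  # Confirm the server cert still chains to the CA the peer verifies against.
  openssl verify -CAfile /etc/pki/vdsm/certs/cacert.pem /etc/pki/vdsm/certs/vdsmcert.pem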
Thank you.
8 months, 1 week
How to re-enroll (or renew) host certificates for a single-host hosted-engine deployment?
by Derek Atkins
Hi,
I've got a single-host hosted-engine deployment that I originally
installed with 4.0 and have upgraded over the years to 4.3.10. Some of my
users and I have upgraded remote-viewer, and now I get an error when I try
to view the console of my VMs:
(remote-viewer:8252): Spice-WARNING **: 11:30:41.806:
../subprojects/spice-common/common/ssl_verify.c:477:openssl_verify: Error
in server certificate verification: CA signature digest algorithm too weak
(num=68:depth0:/O=<My Org Name>/CN=<Host's Name>)
I am 99.99% sure this is because the old certs use SHA1.
I reran engine-setup on the engine and it asked me if I wanted to renew
the PKI, and I answered yes. This replaced many[1] of the certificates in
/etc/pki/ovirt-engine/certs on the engine, but it did not update the
Host's certificate.
All the documentation I've seen says that to refresh this certificate I
need to put the host into maintenance mode and then re-enroll. However, I
cannot do that: because this is a single-host system, I cannot put the
host into maintenance mode -- there is no place to migrate the VMs (let
alone the Engine VM).
So... Is there a command-line way to re-enroll manually and update the
host certs? Or some other way to get all the leftover certs renewed?
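For reference, a quick way to spot which certs are still SHA-1-signed, assuming the stock PKI layout on the engine and the usual vdsm cert path on the host:

  # On the engine: print the signature algorithm of every cert in the PKI dir.
  for c in /etc/pki/ovirt-engine/certs/*.cer; do
      printf '%s: ' "$c"
      openssl x509 -in "$c" -noout -text | awk '/Signature Algorithm/{print $3; exit}'
  done
  # On the host, the vdsm cert is the one remote-viewer ends up validating.
  openssl x509 -in /etc/pki/vdsm/certs/vdsmcert.pem -noout -text | grep 'Signature Algorithm'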
Thanks,
-derek
[1] Not only did it not update the Host's cert, it did not update any of
the vmconsole-proxy certs, nor the certs in /etc/pki/ovirt-vmconsole/, and
obviously nothing in /etc/pki/ on the host itself.
--
Derek Atkins 617-623-3745
derek(a)ihtfp.com www.ihtfp.com
Computer and Internet Security Consultant
8 months, 2 weeks
4.5.4 with Ceph only storage
by Maurice Burrows
Hey ... Long story short ... I have an existing Red Hat Virt / Gluster hyperconverged solution that I am moving away from.
I have an existing Ceph cluster that I primarily use for OpenStack and a small requirement for S3 via RGW.
I'm planning to build a new oVirt 4.5.4 cluster on RHEL9 using Ceph for all storage requirements. I've read many online articles on oVirt and Ceph, and they all seem to use the Ceph iSCSI gateway, which is now in maintenance, so I'm not really keen to commit to iSCSI.
So my question is: is there any reason I cannot use CephFS for both the hosted-engine and data storage domains?
I'm currently running Ceph Pacific FWIW.
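For what it's worth, my understanding is that CephFS mounts like any POSIX filesystem and can be consumed by oVirt as a POSIX-compliant FS storage domain (VFS type 'ceph'). My planned host-side smoke test, with placeholder monitor and user names:

  # If this manual mount works, the same path, VFS type and options
  # should go into the POSIX-compliant FS storage domain dialog.
  mount -t ceph mon1.example.com:6789:/ovirt /mnt/cephfs-test \
        -o name=ovirt,secretfile=/etc/ceph/ovirt.secret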
Cheers
8 months, 3 weeks
New host cannot connect to master domain
by ovirt@kirschke.de
Hi,
I just installed a new host, which cannot connect to the master domain. The domain is an iSCSI device on a NAS, and oVirt is generally fine with it; other hosts have no problem. I'm sure I'm just overlooking something.
What I see in the log is:
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagepoolmemorybackend] new storage pool master version 302 and domains map {'6f018fbd-de93-4c56-880d-8ede2aad2674': 'Active', '2c870e06-6c70-45ec-b665-ce29408c8a8e': 'Active', 'a15496dc-c241-4658-af9d-0dfe11783916': 'Active', '41012bfb-b802-4092-b699-7f5284d95c8e': 'Active'} (spbackends:417)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagepool] updating pool 5836aaac-0030-0064-024d-0000000002e4 backend from type NoneType instance 0x7f10d667fb70 to type StoragePoolMemoryBackend instance 0x7f10702d7408 (sp:149)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagepool] Connect host #2 to the storage pool 5836aaac-0030-0064-024d-0000000002e4 with master domain: a15496dc-c241-4658-af9d-0dfe11783916 (ver = 302) (sp:699)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Invalidating storage domain cache (sdc:57)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Clearing storage domain cache (sdc:182)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Refreshing storage domain cache (resize=True) (sdc:63)
2024-04-16 11:10:06,320-0400 INFO (jsonrpc/3) [storage.iscsi] Scanning iSCSI devices (iscsi:445)
2024-04-16 11:10:06,456-0400 INFO (jsonrpc/3) [storage.iscsi] Scanning iSCSI devices: 0.14 seconds (utils:373)
2024-04-16 11:10:06,456-0400 INFO (jsonrpc/3) [storage.hba] Scanning FC devices (hba:42)
2024-04-16 11:10:06,481-0400 INFO (jsonrpc/3) [storage.hba] Scanning FC devices: 0.03 seconds (utils:373)
2024-04-16 11:10:06,481-0400 INFO (jsonrpc/3) [storage.multipath] Waiting until multipathd is ready (multipath:95)
2024-04-16 11:10:08,498-0400 INFO (jsonrpc/3) [storage.multipath] Waited 2.02 seconds for multipathd (tries=2, ready=2) (multipath:122)
2024-04-16 11:10:08,498-0400 INFO (jsonrpc/3) [storage.multipath] Resizing multipath devices (multipath:223)
2024-04-16 11:10:08,499-0400 INFO (jsonrpc/3) [storage.multipath] Resizing multipath devices: 0.00 seconds (utils:373)
2024-04-16 11:10:08,499-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Refreshing storage domain cache: 2.18 seconds (utils:373)
2024-04-16 11:10:08,499-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Looking up domain a15496dc-c241-4658-af9d-0dfe11783916 (sdc:154)
2024-04-16 11:10:08,536-0400 INFO (jsonrpc/3) [storage.storagedomaincache] Looking up domain a15496dc-c241-4658-af9d-0dfe11783916: 0.04 seconds (utils:373)
2024-04-16 11:10:08,537-0400 INFO (jsonrpc/3) [vdsm.api] FINISH connectStoragePool error=Cannot find master domain: 'spUUID=5836aaac-0030-0064-024d-0000000002e4, msdUUID=a15496dc-c241-4658-af9d-0dfe11783916' from=::ffff:10.2.0.4,44914, flow_id=44d9e674, task_id=32f0ea2a-044b-4a81-a12e-a269e59a802b (api:35)
2024-04-16 11:10:08,537-0400 ERROR (jsonrpc/3) [storage.taskmanager.task] (Task='32f0ea2a-044b-4a81-a12e-a269e59a802b') Unexpected error (task:860)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sp.py", line 1550, in setMasterDomain
    domain = sdCache.produce(msdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 98, in produce
    domain.getRealDomain()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 34, in getRealDomain
    return self._cache._realProduce(self._sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 122, in _realProduce
    domain = self._findDomain(sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 139, in _findDomain
    return findMethod(sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/sdc.py", line 169, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
vdsm.storage.exception.StorageDomainDoesNotExist: Storage domain does not exist: ('a15496dc-c241-4658-af9d-0dfe11783916',)
During handling of the above exception, another exception occurred:
....
The domain a15496dc-c241-4658-af9d-0dfe11783916 definitely exists, and works on other systems.
The new host can access the iSCSI targets, and I am able to log into them.
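My understanding is that, for block domains, the domain UUID doubles as the LVM VG name, and that a missing LVM filter on a fresh host can hide the VG from vdsm, so these are the checks I can run on the new host:

  iscsiadm -m session      # sessions to the NAS are up (as described above)
  multipath -ll            # the LUNs should appear as multipath devices
  vgs | grep a15496dc      # the master domain's VG should show up here
  # If the VG is missing, regenerating the LVM filter is worth a try:
  vdsm-tool config-lvm-filter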
Does anyone know which blinders I have to remove so that I can see the actual problem?
Thanks and best regards
Steffen
8 months, 3 weeks
I can't access console with noVNC or VNC client (console.vv)
by z84614242@163.com
I installed the oVirt 4.5 engine on CentOS Stream 9 and added an oVirt node (oVirt Node 4.5 ISO) to this engine. I am going to run my VM on this node. I followed the instructions to create the data center, the cluster, and the storage domain, and to upload the image; everything was fine. But after I created a VM with the Ubuntu image attached, I found that I can't access the console. When I use noVNC, it says "Something went wrong, connection is closed"; when I connect over VNC with virt-viewer, it says "Failed to complete handshake Error in the pull function". I tried changing the console type to Bochs and it behaves the same. When I changed to QXL mode, the VM wouldn't start any more; the log says "unsupported configuration: domain configuration does not support video model 'qxl'".
So now I can't access my VM at all. I deployed the engine following the official instructions and kept most options at their defaults, so why do I still have this issue? And why does noVNC just say "Something went wrong" instead of telling me what is actually wrong?
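In case it points somewhere useful: both symptoms look to me like a TLS handshake problem between the console clients and the host rather than anything about the video model, so these are the checks I intend to run (the cert path is assumed from the default oVirt node layout and may differ on other installs):

  # On the engine: noVNC is backed by the websocket proxy -- is it running?
  systemctl status ovirt-websocket-proxy
  # On the host: an expired or missing VNC TLS cert would fail both clients.
  openssl x509 -in /etc/pki/vdsm/libvirt-vnc/server-cert.pem -noout -enddate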
10 months
Cannot create bond
by dcaamano@apexutilities.ca
Hello,
I recently deployed two KVM hosts and one KVM engine. Things seem to be working well so far; however, I cannot create bonds on my KVM hosts. I have 4 interfaces: two enabled (cable connected) and two disabled (cable unplugged, not needed at the moment). The ovirtmgmt network is running on one of the active interfaces.
I have done the following:
Create LACP bond with both connected interfaces.
Create Load-balancing XOR bond with both connected interfaces.
Create LACP bond with both unplugged interfaces.
Create Load-balancing XOR bond with both unplugged interfaces.
Create LACP bond with one connected and one unplugged interface.
Create Load-balancing XOR bond with one connected and one unplugged interface.
Remove server from KVM engine, create bond at the host level using cockpit, try to add the host back into KVM cluster.
None of the options I have tried successfully creates the bond in the KVM cluster.
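If it helps, these are the host-side logs I can collect after each failed attempt (I also understand that LACP mode only negotiates if the switch ports are configured as a LAG on the other end):

  # The concrete reason a bond request failed usually lands in supervdsm's log:
  tail -n 200 /var/log/vdsm/supervdsm.log
  journalctl -u vdsmd --since "-15 min"
  # If a bond does get created, the kernel's view shows whether LACP came up:
  cat /proc/net/bonding/bond0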
I will appreciate your assistance in getting this to work.
Environment details
OS on all servers: Oracle Linux 8.9 with kernel 5.15.0-205.149.5.1.el8uek.x86_64
KVM engine: running on a virtual machine in vCenter
KVM hosts: physical servers
Software version: 4.5.4-1.0.31.el8
Thank you
10 months, 1 week
ARM64 Host Failed to Add to Cluster
by limajasond@yahoo.com
Hi,
I configured a Rocky Linux 9 ARM64 Raspberry Pi 4B as an oVirt host. When adding it to a new cluster under an established data center, it fails while configuring OVN. I did make sure ovirt-provider-ovn and ovirt-provider-ovn-driver were installed.
I found the error after digging into the Ansible YAML on the hosted engine and the related Python code that configures OVN. Ultimately the error is below:
[root@ovirtnode3 ~]# vdsm-tool ovn-config 192.168.1.50 ovirtnode3.test.com
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py", line 117, in get_network
    return networks[net_name]
KeyError: 'ovirtnode3.test.com'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/vdsm-tool", line 195, in main
    return tool_command[cmd]["command"](*args)
  File "/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py", line 63, in ovn_config
    ip_address = get_ip_addr(get_network(network_caps(), net_name))
  File "/usr/lib/python3.9/site-packages/vdsm/tool/ovn_config.py", line 119, in get_network
    raise NetworkNotFoundError(net_name)
vdsm.tool.ovn_config.NetworkNotFoundError: ovirtnode3.test.com
Running vdsm-tool list-nets displays nothing. Something is off in the script, but I admit I am not sure what.
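Reading the traceback, my understanding is that ovn-config treats its second argument as a tunneling IP or a vdsm network name, and here it is looking up a network literally named 'ovirtnode3.test.com'. With list-nets empty, vdsm appears to have no networks at all, which fits the host install never completing. Once the management network exists, I would expect the call to look more like this (network name assumed; verify with list-nets first):

  vdsm-tool list-nets                          # should list ovirtmgmt on a healthy host
  vdsm-tool ovn-config 192.168.1.50 ovirtmgmt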
I know oVirt says AArch64 is experimental, but this is unusable if vdsm-tool can't even gather/list the networks or configure OVN.
10 months, 1 week
Re: Ovirt Hyperconverged Storage
by Thomas Hoberg
And I might have misread where your problems actually are...
Because oVirt was born on SAN but tries to be storage agnostic, it creates its own overlay abstraction, a block layer that is then managed within oVirt even when you use NFS or GlusterFS underneath.
"The ISO domain" has actually been deprecated and ISO images can be put into any domain type (e.g. also data).
But they still have to be uploaded to that domain via the management engine GUI; you can't just copy the ISO images somewhere into the files and directories oVirt creates and expect them to be visible to the GUI or the VMs.
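Uploads can also be scripted. With the ovirt-img client shipped with recent ovirt-imageio, the call is roughly the following -- flags quoted from memory, so check ovirt-img upload-disk --help, and the URL, credentials and domain name are placeholders:

  ovirt-img upload-disk \
      --engine-url https://engine.example.com \
      --username admin@internal --password-file /root/engine-pw \
      --storage-domain mydata \
      Fedora-Server-39.iso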
10 months, 2 weeks
Re: Ovirt Hyperconverged Storage
by Thomas Hoberg
Hi Tim,
HA, HCI and failover either require or at least benefit from consistent storage.
The original NFS reduces the risk of inconsistency to single files; Gluster puts the onus of consistency mostly on the clients, and I guess Ceph is similar.
iSCSI has been described as a bit of the worst of everything in storage, and I can appreciate that view in an HA scenario because it doesn't help with consistency.
Of course, its block layer abstraction isn't really that different from SAN or NFS 4.x object storage.
I last experimented with iSCSI 20 years ago, mostly because it seemed so great for booting even less cooperative diskless hosts than Sun workstations over the network.
But if I had a reliable TrueNAS and wanted to run oVirt, I'd just go with NFS.
AFAIK oVirt was born on SAN, but with the SAN outside of oVirt's purview. So if your iSCSI setup behaves like a SAN, oVirt should be easy to get going, but I've never tried it myself.
And the lack of tried-and-tested tutorials or videos from 20 different sources might be the reason oVirt didn't quite push out everybody else.
10 months, 2 weeks