4.4.9 -> 4.4.10 Cannot start or migrate any VM (hotpluggable cpus requested exceeds the maximum cpus supported by KVM)
by Jillian Morgan
After upgrading the engine from 4.4.9 to 4.4.10, and then upgrading one
host, any attempt to migrate a VM to that host or start a VM on that host
results in the following error:
Number of hotpluggable cpus requested (16) exceeds the maximum cpus
supported by KVM (8)
While the version of qemu is the same across hosts
(qemu-kvm-6.0.0-33.el8s.x86_64), I traced the difference to the upgraded
kernel on the new host. I have always run elrepo's kernel-ml on these hosts
to support bcache, which RHEL's kernel doesn't support. The working hosts
still run kernel-ml-5.15.12. The upgraded host runs kernel-ml-5.17.0.
In case anyone else runs kernel-ml, have you run into this issue?
Does anyone know why KVM's KVM_CAP_MAX_VCPUS value is lowered on the new
kernel?
Does anyone know how to query the KVM capabilities from userspace without
writing a program that calls the KVM ioctl()s?
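(Not an answer to the kernel question, but for querying the limit from
userspace: libvirt is already on the hosts and asks KVM for these
capabilities itself, so the following should show the effective vCPU
ceiling without writing any C. A rough sketch; on oVirt hosts you may need
the read-only connection, and the output format varies between libvirt
versions.)

virsh -r maxvcpus --type kvm                                  # hypervisor-wide vCPU maximum
virsh -r domcapabilities --virttype kvm | grep -i 'vcpu max'  # per-guest maximum

Comparing the output on a 5.15 host and on the 5.17 host should confirm
whether the KVM limit really dropped.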
Related to this, it seems that oVirt and/or libvirtd always runs qemu-kvm
with an -smp argument of "maxcpus=16". This causes qemu's built-in check to
fail on the new kernel, which supports a max_vcpus of 8.
Why does ovirt always request maxcpus=16?
And yes, before you say it, I know you're going to say that running
kernel-ml isn't supported.
--
Jillian Morgan (she/her) 🏳️⚧️
Systems & Networking Specialist
Primordial Software Group & I.T. Consultancy
https://www.primordial.ca
4.5.2 Create Additional Gluster Logical Volumes fails
by simon@justconnect.ie
Hi,
In 4.4 adding additional gluster volumes was a simple ansible task (or via cockpit).
With 4.5.2 I tried to add new volumes but the logic has changed/broken. Here's the error I am getting:
TASK [gluster.infra/roles/backend_setup : Create volume groups] ********************************************************************************************************************************
failed: [bdtovirthcidmz02-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.010442", "end": "2022-11-10 13:11:16.717772", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:11:16.707330", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
The same vgcreate failure, with identical stderr, is reported for
bdtovirthcidmz03-strg.mydomain.com and bdtovirthcidmz01-strg.mydomain.com.
The vg was created as part of the initial ansible build with logical volumes being added when required.
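(Two things worth checking on the failing hosts, going only by the stderr
above; these are standard lvm2 commands, nothing oVirt-specific. The
"filter ... not part of any section" warning suggests a filter line has
ended up outside the devices { } block of /etc/lvm/lvm.conf, and
"/dev/gluster_vg_sda: already exists" suggests the VG name from the first
deployment is still in use.)

lvmconfig --validate              # complains if lvm.conf has a setting in the wrong section
grep -n filter /etc/lvm/lvm.conf  # see where the filter line actually sits
vgs gluster_vg_sda                # is the VG from the original build still present?
pvs /dev/sda                      # is the disk already a PV in that VG?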
Any assistance would be greatly appreciated.
Kind regards
Simon
Host Reboot Timeout of 10 Minutes
by Peter H
I'm working in a group that maintains a large oVirt setup based on 4.4.1,
which works very well. We are afraid of upgrading and prefer setting up a
new installation and gradually enlisting the hosts one by one into the new
installation.
We have tried 4.4.10 and 4.5.1 - 4.5.4 based on CentOS Stream 8, Rocky 8,
and AlmaLinux 9.1, with various problems. The worst was that the rpm db
ended up in a catch-22 state.
Using AlmaLinux 9.1 and the current oVirt 4.5.4 seems promising, as no rpm
problems are present after installation. We have only one nuisance left,
which we have seen in all installation attempts we have made since 4.4.10.
When rebooting a host, it takes 10 minutes before it's activated again. In
4.4.1 the hosts are activated a few seconds after they have booted up.
I have found the following in the engine log:
2023-01-24 23:01:57,564+01 INFO
[org.ovirt.engine.core.bll.SshHostRebootCommand]
(EE-ManagedThreadFactory-engine-Thread-1513) [2bb08d20] Waiting 600
seconds, for server to finish reboot process.
Our ansible playbooks for deployment time out. We could increase the
timeout, but how come this 10-minute delay has been introduced?
Does a config file exist where this timeout can be set to a lower value?
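(I haven't verified this against the 4.5 source, but the 600-second wait
logged by SshHostRebootCommand looks like an engine-config value; assuming
the key is ServerRebootTimeout, something along these lines on the engine
machine would show and lower it.)

engine-config -g ServerRebootTimeout      # show the current value (assumed key name)
engine-config -s ServerRebootTimeout=300  # lower it to 5 minutes
systemctl restart ovirt-engine            # engine-config changes need an engine restart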
BR
Peter H.
Out-of-sync networks can only be detached
by Sakhi Hadebe
Hi,
I have a 3-node oVirt cluster. I have configured 2 logical networks:
ovirtmgmt and public. The public logical network is attached on only 2
nodes and fails to attach on the 3rd node with the error below:
Invalid operation, out-of-sync network 'public' can only be detached.
Please help; I have been stuck on this for almost the whole day now. How do
I fix this error?
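(The usual fix is to re-sync rather than detach: on the 3rd node, Hosts >
Network Interfaces > Setup Host Networks, tick "Sync network" on 'public'
(or use the "Sync All Networks" button), which pushes the cluster
definition back onto the host. If you'd rather script it, a rough REST
sketch, assuming the syncallnetworks action exists in your API version; the
engine URL, host ID and credentials below are placeholders.)

curl -k -u 'admin@internal:PASSWORD' \
     -H 'Content-Type: application/xml' -d '<action/>' \
     https://engine.example.com/ovirt-engine/api/hosts/HOST_ID/syncallnetworks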
--
Regards,
Sakhi Hadebe
{REQUEST} oVirt (including RHHI) as HCI in the future
by 樽井周久(KTCSP)
To whom it may concern
I have been using oVirt in an HCI environment for over 5 years.
The reason why I chose oVirt (the most attractive point of oVirt for me)
was that it is easy to build an HCI environment with GUI deployment.
Now I have found out on your forum site that GUI deployment has been
deprecated (from oVirt 4.5), and that all the content related to HCI has
been deleted (dropped) from your official installation guide:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CBDUBBKLTCW4MMWCXTRXNWDYPLP5CBUP/
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_command_line/index.html
It seems that it will not be possible to build an HCI environment with oVirt in the future.
So I'm very worried about whether I can continue to use oVirt as before.
I would appreciate it if you could tell me about your prospects
regarding oVirt as HCI.
For your reference:
On a 4.5.3 oVirt host, Gluster deployment was done with the GUI, but the
hosted-engine had to be deployed with the CLI.
On a 4.5.4 oVirt host, Gluster deployment failed with the GUI, so the
subsequent hosted-engine deployment could not proceed
('str object' has no attribute 'vgname').
Best regards,
Kanehisa Tarui
Tokyo, Japan
Re: Self-hosted engine 4.5.0 deployment fails
by John
I would run the deploy again, wait until the engine is up, and then, from
the server you are deploying the engine on, run:
# virsh list
Obtain the virtual machine ID from the above command; in the example below
we assume the ID is 1.
# virsh console 1
Log in as root and wait for all packages to finish updating:
# tail -f /var/log/dnf.rpm.log
then:
# dnf downgrade postgresql-jdbc
Once that's done, the deployment of the engine should get past the
[ INFO ] TASK [ovirt.ovirt.engine_setup : Check if Engine health page is
up]
step.
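(The same steps collected in one place, for anyone hitting this later; the
VM ID 1 and the HostedEngineLocal name are just the usual defaults and may
differ on your deployment.)

virsh list                      # on the deployment host: note the ID of the local engine VM (HostedEngineLocal)
virsh console 1                 # attach to that VM's console and log in as root
tail -f /var/log/dnf.rpm.log    # inside the engine VM: wait for package updates to finish
dnf downgrade postgresql-jdbc   # then downgrade the driver and let the deploy continue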
On 03/05/2022 17:03, Mohamed Roushdy wrote:
>
> Hello,
>
> I’m deploying on Ovirt nodes v4.5.0.1, but it fails to deploy the
> self-hosted engine with the following errors in the web installer:
>
> [ INFO ] skipping: [localhost]
> [ INFO ] TASK [ovirt.ovirt.engine_setup : Check if Engine health page
> is up]
> [ ERROR ] fatal: [localhost -> 192.168.222.197]: FAILED! =>
> {"attempts": 30, "changed": false, "connection": "close",
> "content_encoding": "identity", "content_length": "86",
> "content_type": "text/html; charset=UTF-8", "date": "Tue, 03 May 2022
> 15:57:20 GMT", "elapsed": 0, "msg": "Status code was 500 and not
> [200]: HTTP Error 500: Internal Server Error", "redirected": false,
> "server": "Apache/2.4.37 (centos) OpenSSL/1.1.1k
> mod_auth_gssapi/1.6.1", "status": 500, "url":
> http://localhost/ovirt-engine/services/health}
> [ INFO ] TASK [ovirt.ovirt.engine_setup : Clean temporary files]
> [ INFO ] changed: [localhost -> 192.168.222.197]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
> [ INFO ] changed: [localhost -> 192.168.222.197]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination
> directory path]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination
> directory]
> [ INFO ] changed: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
> [ INFO ] ok: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local
> appliance image]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to
> flush dirty buffers]
> [ INFO ] ok: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Change ownership of
> copied engine logs]
> [ INFO ] changed: [localhost -> localhost]
> [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about
> a failure]
> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg":
> "There was a failure deploying the engine on the local engine VM. The
> system may not be provisioned according to the playbook results:
> please check the logs for the issue, fix accordingly or re-deploy from
> scratch.\n"}
>
> Any idea if this is a bug or something? I tried to deploy eight times
> with different configurations.
>
> Thank you,
>
>
multiple disk snapshot_memory stay in locked state
by Nathanaël Blanchet
Hello,
After many (failed) snapshot attempts, I find many disk snapshot_memory
entries in a locked state in the Disks menu.
The unlock utility doesn't return anything... How can I erase those disks,
knowing that there is no listed snapshot for this VM?
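(If the unlock utility you mean is unlock_entity.sh, it may help to first
list what the engine database still considers locked; a rough sketch,
assuming the default dbutils path on the engine machine.)

/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk -q      # list locked disks
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t snapshot -q  # list locked snapshots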