engine-config -s UserSessionTimeOutInterval=X problem
by marek
oVirt 4.5.4, standalone engine, CentOS 8 Stream
[root@ovirt ~]# engine-config -g UserSessionTimeOutInterval
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
UserSessionTimeOutInterval: 30 version: general
[root@ovirt ~]# engine-config -s UserSessionTimeOutInterval=60
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Cannot set value 60 to key UserSessionTimeOutInterval.
Any ideas where the problem is?
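A variant I have not tried yet: engine-config also accepts an explicit config version via --cver, and -g reports this key as version 'general', so something like the following might behave differently (an untested sketch):
[root@ovirt ~]# engine-config -s UserSessionTimeOutInterval=60 --cver=general
(and either way the engine needs a restart of ovirt-engine for the new value to take effect)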
Marek
1 year, 7 months
engine-setup fails during Engine schema refresh when upgrading 4.3.2 -> 4.3.3
by Edward Berger
I was trying to upgrade a hyperconverged oVirt hosted engine, and the engine-setup command failed with the errors and warnings below.
...
[ INFO ] Creating/refreshing Engine database schema
[ ERROR ] schema.sh: FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql
[ ERROR ] Failed to execute stage 'Misc configuration': Engine schema
refresh failed
...
[ INFO ] Yum Verify: 16/16: ovirt-engine-tools.noarch 0:4.3.3.5-1.el7 - e
[WARNING] Rollback of DWH database postponed to Stage "Clean up"
[ INFO ] Rolling back database schema
...
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed
Attaching engine-setup logfile.
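In case it helps with diagnosis: the statement that failed can be read straight from the script named in the error, and then run by hand against the engine database to get the full PostgreSQL error message (a sketch; 'engine' is the default local database name, adjust if yours differs):
# cat /usr/share/ovirt-engine/dbscripts/upgrade/04_03_0830_add_foreign_key_to_image_transfers.sql
# su - postgres -c "psql -d engine"
A foreign-key addition usually fails because existing rows violate the new constraint, so orphaned image_transfers rows would be my guess, but that is only a guess until the statement is run manually.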
1 year, 9 months
Unable to change the admin password on oVirt 4.5.2.5
by Ayansh Rocks
Hi All,
Any idea how to change the password of the admin user on oVirt 4.5.2.5?
The following is not working:
[root@ovirt]# ovirt-aaa-jdbc-tool user password-reset admin
Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false
Password:
Reenter password:
updating user admin...
user updated successfully
[root@delhi-test-ovirtm-02 ~]#
The above reports success, but the password is not actually changed.
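Two things that may matter here (untested sketches, using subcommands and flags the tool documents): the account state can be inspected and unlocked, and the reset can be given an explicit validity date in case the new password is treated as already expired:
[root@ovirt]# ovirt-aaa-jdbc-tool user show admin
[root@ovirt]# ovirt-aaa-jdbc-tool user unlock admin
[root@ovirt]# ovirt-aaa-jdbc-tool user password-reset admin --password-valid-to="2025-12-31 00:00:00Z"
Note also that on 4.5.x with the Keycloak-based SSO enabled, the web UI login is admin@ovirt (managed in Keycloak), while ovirt-aaa-jdbc-tool only manages admin@internal, so resetting one does not change the other.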
Thanks
1 year, 11 months
/tmp/lvm.log keeps growing in Host
by kanehisa@ktcsp.net
I don't understand why lvm.log is placed in the /tmp directory without rotation.
I noticed this fact when I got the following event notification every 2 hours.
EventID :24
Message :Critical, Low disk space. Host ovirt01 has less than 500 MB of free space left on: /tmp. Low disk space might cause an issue upgrading this host.
As a workaround, I added a log rotation setting to /tmp/lvm.log, but is this the correct way?
I should probably have understood the Python code below before asking the question,
but please forgive me; I am not very knowledgeable about Python.
# cat /usr/lib/python3.6/site-packages/blivet/devicelibs/lvm.py | grep lvm.log
config_string += "log {level=7 file=/tmp/lvm.log syslog=0}"
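For reference, my workaround rotation rule looks roughly like this (the file name under /etc/logrotate.d/ is my own choice; copytruncate matters because the writer keeps the file open):
# cat /etc/logrotate.d/lvm-tmp
/tmp/lvm.log {
    size 100M
    rotate 2
    copytruncate
    missingok
    notifempty
    compress
}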
Thanks in advance!!
Further information is below
[root@ovirt01 ~]# cat /etc/os-release
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8.7.2206.0"
VARIANT="oVirt Node 4.5.4"
VARIANT_ID="ovirt-node"
PRETTY_NAME="oVirt Node 4.5.4"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.ovirt.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
PLATFORM_ID="platform:el8"
[root@ovirt01 ~]# uname -a
Linux ovirt01 4.18.0-408.el8.x86_64 #1 SMP Mon Jul 18 17:42:52 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@ovirt01 ~]# df -h | grep -E " /tmp|Filesystem"
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/onn_ovirt01-tmp 1014M 515M 499M 51% /tmp
[root@ovirt01 ~]# stat /tmp/lvm.log
File: /tmp/lvm.log
Size: 463915707 Blocks: 906088 IO Block: 4096 regular file
Device: fd0eh/64782d Inode: 137 Links: 1
Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:lvm_tmp_t:s0
Access: 2023-02-03 10:30:06.605936740 +0900
Modify: 2023-02-03 09:52:19.712301285 +0900
Change: 2023-02-03 09:52:19.712301285 +0900
Birth: 2023-01-16 01:06:02.768495837 +0900
2 years
4.4.9 -> 4.4.10 Cannot start or migrate any VM (hotpluggable cpus requested exceeds the maximum cpus supported by KVM)
by Jillian Morgan
After upgrading the engine from 4.4.9 to 4.4.10, and then upgrading one
host, any attempt to migrate a VM to that host or start a VM on that host
results in the following error:
Number of hotpluggable cpus requested (16) exceeds the maximum cpus
supported by KVM (8)
While the version of qemu is the same across hosts (qemu-kvm-6.0.0-33.el8s.x86_64), I traced the difference to the upgraded
kernel on the new host. I have always run elrepo's kernel-ml on these hosts
to support bcache which RHEL's kernel doesn't support. The working hosts
still run kernel-ml-5.15.12. The upgraded host ran kernel-ml-5.17.0.
In case anyone else runs kernel-ml, have you run into this issue?
Does anyone know why KVM's KVM_CAP_MAX_VCPUS value is lowered on the new
kernel?
Does anyone know how to query the KVM capabilities from userspace without
writing a program that calls the KVM ioctl()s?
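A possible option, since libvirt is present on an oVirt host anyway (-r opens a read-only connection, which avoids vdsm's SASL auth):
# virsh -r domcapabilities | grep -i vcpu
  <vcpu max='8'/>
(the max='8' here is what I would expect the affected host to print; the working hosts should show a larger value.)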
Related to this, it seems that oVirt and/or libvirtd always runs qemu-kvm
with an -smp argument of "maxcpus=16". This causes qemu's built-in check to
fail on the new kernel, which supports a max_vcpus of 8.
Why does oVirt always request maxcpus=16?
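I have no idea whether that is where the 16 comes from, but the engine does have CPU-related config keys that can at least be inspected (a sketch; I have not verified any link between these keys and the -smp maxcpus value):
# engine-config --list | grep -i cpu
# engine-config -g MaxNumOfVmCpus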
And yes, before you say it, I know you're going to say that running
kernel-ml isn't supported.
--
Jillian Morgan (she/her) 🏳️⚧️
Systems & Networking Specialist
Primordial Software Group & I.T. Consultancy
https://www.primordial.ca
2 years
4.5.2 Create Additional Gluster Logical Volumes fails
by simon@justconnect.ie
Hi,
In 4.4 adding additional gluster volumes was a simple ansible task (or via cockpit).
With 4.5.2 I tried to add new volumes, but the logic seems to have changed or broken. Here's the error I am getting:
TASK [gluster.infra/roles/backend_setup : Create volume groups] ********************************************************************************************************************************
failed: [bdtovirthcidmz02-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.010442", "end": "2022-11-10 13:11:16.717772", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:11:16.707330", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
failed: [bdtovirthcidmz03-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.010231", "end": "2022-11-10 13:12:35.607565", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:12:35.597334", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
failed: [bdtovirthcidmz01-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.011282", "end": "2022-11-10 13:13:24.336233", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:13:24.324951", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
The vg was created as part of the initial ansible build with logical volumes being added when required.
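For what it's worth, the first stderr line is what LVM prints when a 'filter' setting sits outside any section (it normally belongs inside the devices { } block), and the second says the VG already exists. Whether the bad filter comes from /etc/lvm/lvm.conf or from a --config string injected by the role, I cannot tell from the output; quick checks on one of the hosts (stock paths):
# grep -n 'filter' /etc/lvm/lvm.conf
# vgdisplay gluster_vg_sda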
Any assistance would be greatly appreciated.
Kind regards
Simon
2 years
oVirt On Rocky 8.x - Upgrade To Rocky 9.1
by Matthew J Black
Hi All,
Sorry if this was mentioned previously (I obviously missed it if it was), but can we upgrade an oVirt (latest version) Host/Cluster and/or the oVirt Engine VM from Rocky Linux (RHEL) v8.6/8.7 to v9.1 yet? If so, what is the procedure and where can I find it? That is, is there anything "special" that needs to be done because of oVirt, or can we just do a "simple" v8.x -> v9.1 upgrade?
Thanks in advance
Cheers
Dulux-Oz
2 years
oVirt 4.4 hosted engine deploy fails - repository issues
by lars.stolpe@bvg.de
Hi,
I want to upgrade oVirt 4.3 to oVirt 4.4, so I have to reinstall one node on EL8 and deploy the engine with a restore.
I get this error message during deploy:
[ INFO ] TASK [ovirt.ovirt.engine_setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> 192.168.2.143]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'ovirt-4.4-centos-ceph-pacific': Cannot prepare internal mirrorlist: Curl error (56): Failure when receiving data from the peer for http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=storage-c... [Recv failure: Connection reset by peer]", "rc": 1, "results": []}
Since I use our Satellite server, this URL is not included in the repositories I provided. A repository named 'ovirt-4.4-centos-ceph-pacific' is definitely provided and available.
How do I get the deploy to use the correct repositories?
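If there is no supported switch for this, would it be acceptable to log in to the temporary engine VM (192.168.2.143 in the output above) while the deploy is still running and disable the broken repo by hand? Something like (a sketch; needs dnf-plugins-core):
# dnf config-manager --set-disabled ovirt-4.4-centos-ceph-pacific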
I hope someone can help me out,
best regards
2 years, 1 month