/tmp/lvm.log keeps growing in Host
by kanehisa@ktcsp.net
I don't understand why lvm.log is placed in the /tmp directory without any rotation.
I noticed this when I started getting the following event notification every 2 hours:
EventID :24
Message :Critical, Low disk space. Host ovirt01 has less than 500 MB of free space left on: /tmp. Low disk space might cause an issue upgrading this host.
As a workaround, I added a log rotation setting for /tmp/lvm.log, but is this the correct way to handle it?
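For reference, the rotation I added is just a small logrotate drop-in along these lines (the file name and the daily/rotate values are simply what I picked, not anything official):
# cat /etc/logrotate.d/lvm-tmp
/tmp/lvm.log {
    daily
    rotate 3
    compress
    missingok
    notifempty
    copytruncate
}
I used copytruncate because I don't know whether anything keeps the file open while writing.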
I should have understood the contents of the Python module below before asking,
but please forgive me; I am not very knowledgeable about Python.
# cat /usr/lib/python3.6/site-packages/blivet/devicelibs/lvm.py | grep lvm.log
config_string += "log {level=7 file=/tmp/lvm.log syslog=0}"
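If I understand that file correctly, this string ends up being passed to the LVM commands that blivet runs as a --config argument, i.e. something equivalent to:
# vgs --config 'log {level=7 file=/tmp/lvm.log syslog=0}'
so every LVM call that goes through blivet appends debug-level output to /tmp/lvm.log, and nothing ever rotates it. (The vgs command above is only my illustration of the mechanism, not a line from the code.)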
Thanks in advance!!
Further information is below:
[root@ovirt01 ~]# cat /etc/os-release
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8.7.2206.0"
VARIANT="oVirt Node 4.5.4"
VARIANT_ID="ovirt-node"
PRETTY_NAME="oVirt Node 4.5.4"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://www.ovirt.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
PLATFORM_ID="platform:el8"
[root@ovirt01 ~]# uname -a
Linux ovirt01 4.18.0-408.el8.x86_64 #1 SMP Mon Jul 18 17:42:52 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@ovirt01 ~]# df -h | grep -E " /tmp|Filesystem"
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/onn_ovirt01-tmp 1014M 515M 499M 51% /tmp
[root@ovirt01 ~]# stat /tmp/lvm.log
File: /tmp/lvm.log
Size: 463915707 Blocks: 906088 IO Block: 4096 regular file
Device: fd0eh/64782d Inode: 137 Links: 1
Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
Context: system_u:object_r:lvm_tmp_t:s0
Access: 2023-02-03 10:30:06.605936740 +0900
Modify: 2023-02-03 09:52:19.712301285 +0900
Change: 2023-02-03 09:52:19.712301285 +0900
Birth: 2023-01-16 01:06:02.768495837 +0900
4.4.9 -> 4.4.10 Cannot start or migrate any VM (hotpluggable cpus requested exceeds the maximum cpus supported by KVM)
by Jillian Morgan
After upgrading the engine from 4.4.9 to 4.4.10, and then upgrading one
host, any attempt to migrate a VM to that host or start a VM on that host
results in the following error:
Number of hotpluggable cpus requested (16) exceeds the maximum cpus
supported by KVM (8)
While the version of qemu is the same across hosts
(qemu-kvm-6.0.0-33.el8s.x86_64), I traced the difference to the upgraded
kernel on the new host. I have always run elrepo's kernel-ml on these hosts
to support bcache, which RHEL's kernel doesn't support. The working hosts
still run kernel-ml-5.15.12; the upgraded host runs kernel-ml-5.17.0.
In case anyone else runs kernel-ml, have you run into this issue?
Does anyone know why KVM's KVM_CAP_MAX_VCPUS value is lowered on the new
kernel?
Does anyone know how to query the KVM capabilities from userspace without
writing a program leveraging kvm_ioctl()'s?
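For what it's worth, the capability can be read with a few lines of Python rather than a C program. This is only a rough sketch; the ioctl number and capability IDs are copied from linux/kvm.h, so please double-check them against your kernel headers:

import fcntl, os

KVM_CHECK_EXTENSION = 0xAE03  # _IO(KVMIO, 0x03)
KVM_CAP_NR_VCPUS = 9          # recommended max vCPUs
KVM_CAP_MAX_VCPUS = 66        # absolute max vCPUs

fd = os.open("/dev/kvm", os.O_RDWR)
try:
    # KVM_CHECK_EXTENSION returns the capability's value as the ioctl result
    print("recommended vCPUs:", fcntl.ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS))
    print("maximum vCPUs:", fcntl.ioctl(fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS))
finally:
    os.close(fd)

On the 5.17.0 host I would expect the second number to come back as 8, matching the error above.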
Related to this, it seems that oVirt and/or libvirtd always runs qemu-kvm
with an -smp argument of "maxcpus=16". This causes qemu's built-in check to
fail on the new kernel, which only supports a maximum of 8 vCPUs.
Why does oVirt always request maxcpus=16?
And yes, before you say it, I know you're going to say that running
kernel-ml isn't supported.
--
Jillian Morgan (she/her) 🏳️⚧️
Systems & Networking Specialist
Primordial Software Group & I.T. Consultancy
https://www.primordial.ca
4.5.2 Create Additional Gluster Logical Volumes fails
by simon@justconnect.ie
Hi,
In 4.4, adding additional Gluster volumes was a simple Ansible task (or could be done via Cockpit).
With 4.5.2 I tried to add new volumes, but the logic seems to have changed or broken. Here's the error I am getting:
TASK [gluster.infra/roles/backend_setup : Create volume groups] ********************************************************************************************************************************
failed: [bdtovirthcidmz02-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.010442", "end": "2022-11-10 13:11:16.717772", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:11:16.707330", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
failed: [bdtovirthcidmz03-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.010231", "end": "2022-11-10 13:12:35.607565", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:12:35.597334", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
failed: [bdtovirthcidmz01-strg.mydomain.com] (item={'key': 'gluster_vg_sda', 'value': [{'vgname': 'gluster_vg_sda', 'pvname': '/dev/sda'}]}) => {"ansible_loop_var": "item", "changed": true, "cmd": ["vgcreate", "--dataalignment", "2048K", "-s", "2048K", "gluster_vg_sda", "/dev/sda"], "delta": "0:00:00.011282", "end": "2022-11-10 13:13:24.336233", "item": {"key": "gluster_vg_sda", "value": [{"pvname": "/dev/sda", "vgname": "gluster_vg_sda"}]}, "msg": "non-zero return code", "rc": 3, "start": "2022-11-10 13:13:24.324951", "stderr": " Configuration setting \"filter\" invalid. It's not part of any section.\n /dev/gluster_vg_sda: already exists in filesystem\n Run `vgcreate --help' for more information.", "stderr_lines": [" Configuration setting \"filter\" invalid. It's not part of any section.", " /dev/gluster_vg_sda: already exists in filesystem", " Run `vgcreate --help' for more information."], "stdout": "", "stdout_lines": []}
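If I'm reading the stderr correctly there are two separate complaints. First, LVM is rejecting a "filter" setting that sits outside of any configuration section; in lvm.conf syntax a filter has to live inside the devices section, roughly like:
devices {
    filter = ["a|^/dev/sda$|", "r|.*|"]
}
(the pattern above is only an illustration, not what my config contains). Second, vgcreate reports that /dev/gluster_vg_sda already exists in the filesystem.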
The VG was created as part of the initial Ansible build, with logical volumes added when required.
Any assistance would be greatly appreciated.
Kind regards
Simon
oVirt On Rocky 8.x - Upgrade To Rocky 9.1
by Matthew J Black
Hi All,
Sorry if this was mentioned previously (I obviously missed it if it was), but can we upgrade an oVirt (latest version) Host/Cluster and/or the oVirt Engine VM from Rocky Linux (RHEL) v8.6/8.7 to v9.1 yet? If so, what is the procedure, or where can I find it? In other words, is there anything "special" that needs to be done because of oVirt, or can we just do a "simple" v8.x -> v9.1 upgrade?
Thanks in advance
Cheers
Dulux-Oz
oVirt 4.4 hosted engine deploy fails - repository issues
by lars.stolpe@bvg.de
Hi,
I want to upgrade oVirt 4.3 to oVirt 4.4, so I have to reinstall one node on EL8 and deploy the engine with a restore.
I get this error message during deploy:
[ INFO ] TASK [ovirt.ovirt.engine_setup : Install oVirt Engine package]
[ ERROR ] fatal: [localhost -> 192.168.2.143]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'ovirt-4.4-centos-ceph-pacific': Cannot prepare internal mirrorlist: Curl error (56): Failure when receiving data from the peer for http://mirrorlist.centos.org/?release=8-stream&arch=x86_64&repo=storage-c... [Recv failure: Connection reset by peer]", "rc": 1, "results": []}
Since I use our Satellite server, this URL is not included in the repositories I provided. A repository named 'ovirt-4.4-centos-ceph-pacific' is definitely provided and available.
How do I get the deploy to use the correct repositories?
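For what it's worth, running something like
# grep -rl mirrorlist.centos.org /etc/yum.repos.d/
inside the engine VM (192.168.2.143) should at least show which repo file still points at the CentOS mirrorlist instead of our Satellite content; I just haven't figured out where the deploy takes its repository configuration from.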
I hope someone can help me out,
best regards
Very long reboot times of "RH" hosts with oVirt installed
by Peter H
I was just wondering if anyone else is experiencing the following issue.
On both real physical machines and on VMs where I have installed a RH
derivative like CentOS, CentOS Stream, Rocky Linux or Alma Linux, the
reboot times are initially normal.
After installing oVirt and all its dependencies, the reboot
(shutdown) time increases to anywhere between 3 and 5 minutes. During
shutdown a blinking cursor is visible in the upper left corner if
sitting at the console.
E.g. for