About the VM memory limit
by Tommy Sway
I would like to ask whether there is any limit on the memory size of virtual
machines, or a performance curve of some kind.
As long as the physical machine has free memory, is it simply the case that
the more virtual machines, the better?
In our usage scenario, there are many virtual machines with databases, and
their memory varies greatly.
For some virtual machines, 4 GB of memory is enough, while others need 64 GB.
I want to know how a virtual machine makes the best use of memory: since a
virtual machine is just a QEMU process on the physical machine, I worry that
it cannot use memory as effectively as a physical machine does.
We want to understand this so that we can develop guidelines for optimal
memory sizing of our virtual machines.
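For reference, my understanding is that QEMU backs guest RAM lazily, so a VM
only consumes host memory the guest has actually touched (plus QEMU's own
overhead); the practical limits are the host's physical RAM, swap, and the
cluster's memory overcommit policy. A rough way to compare the configured size
with what each VM actually occupies on a host is a short psutil sketch like
the one below (the "qemu-kvm" process name and the "-m" parsing are
assumptions about how the hosts launch their VMs):

import psutil

for proc in psutil.process_iter(attrs=["name", "cmdline", "memory_info"]):
    if proc.info["name"] != "qemu-kvm":  # assumed QEMU process name on the host
        continue
    cmdline = proc.info["cmdline"] or []
    # The value after "-m" is the configured guest memory; its format varies
    # (plain MiB, or a "size=...,maxmem=..." spec when memory hotplug is used).
    guest_mem = cmdline[cmdline.index("-m") + 1] if "-m" in cmdline else "?"
    rss_mib = proc.info["memory_info"].rss / (1024 * 1024)
    print(f"pid={proc.pid}: configured -m {guest_mem}, resident {rss_mib:.0f} MiB")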
Thank you!
2 years, 6 months
about the power management of the hosts
by Tommy Sway
Hi everybody,
I would like to ask: after power management has been configured on a host,
under what circumstances does it actually come into play?
That is, when does the engine send a request to the IPMI module to restart
the power supply?
Is it necessary to configure power management in a production environment?
Are there any risks?
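For context, my understanding is that the engine asks another host in the
cluster (the fence proxy) to act only when a host stops responding and cannot
be soft-fenced over SSH, or when an administrator manually triggers a power
management action; the request itself is essentially an IPMI call to the
problem host's BMC. A rough sketch of that underlying call (the address and
credentials are placeholders):

import subprocess

def ipmi_power_status(bmc_addr: str, user: str, password: str) -> str:
    """Query the chassis power state over IPMI, as a fence 'status' check does."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_addr,
         "-U", user, "-P", password, "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "Chassis Power is on"

print(ipmi_power_status("10.0.0.5", "admin", "secret"))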
Thanks!
2 years, 6 months
Sparse VMs from Templates - Storage issues
by Shantur Rathore
Hi all,
I have a setup as detailed below:
- iSCSI Storage Domain
- Template with Thin QCOW2 disk
- Multiple VMs from Template with Thin disk
oVirt Node 4.4.4
When a VM boots up it downloads some data, which leads to an increase in its
volume size.
I see that every few seconds the VM gets paused with
"VM X has been paused due to no Storage space error."
and then, after a few seconds,
"VM X has recovered from paused back to up"
Sometimes, after many of these pause/recovery cycles, the VM dies with
"VM X is down with error. Exit message: Lost connection with qemu process."
and I have to restart the VMs.
My questions:
1. How can I work around the VM dying like this?
2. Is there a way to use sparse disks without the VM being paused again and
again?
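For background, my understanding is that on block (iSCSI) storage a thin qcow2
volume starts small and VDSM extends the underlying logical volume in chunks
as the guest writes; the pauses happen when writes outrun the extension, and
the VM is resumed automatically once the extension completes (if I recall
correctly, volume_utilization_percent and volume_utilization_chunk_mb in
vdsm.conf control when and by how much the volume is extended). A rough way to
watch how far the allocated size lags behind the virtual size, using the
Python SDK (the engine URL and credentials are placeholders):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",  # placeholder engine
    username="admin@internal",
    password="secret",
    insecure=True,  # use ca_file=... in a real setup
)
disks_service = connection.system_service().disks_service()
for disk in disks_service.list():
    if disk.sparse:  # thin-provisioned disks only
        print(f"{disk.alias}: allocated {disk.actual_size} / "
              f"virtual {disk.provisioned_size} bytes")
connection.close()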
Thanks in advance.
Shantur
2 years, 6 months
about the VM disk type
by Tommy Sway
When I create the VM's image disk, I am not asked to select between the
following disk formats.
What is the default value?
Thanks.
QCOW2 Formatted Virtual Machine Storage
QCOW2 is a storage format for virtual disks. QCOW stands for QEMU
copy-on-write. The QCOW2 format decouples the physical storage layer from
the virtual layer by adding a mapping between logical and physical blocks.
Each logical block is mapped to its physical offset, which enables storage
over-commitment and virtual machine snapshots, where each QCOW volume only
represents changes made to an underlying virtual disk.
The initial mapping points all logical blocks to the offsets in the backing
file or volume. When a virtual machine writes data to a QCOW2 volume after a
snapshot, the relevant block is read from the backing volume, modified with
the new information and written into a new snapshot QCOW2 volume. Then the
map is updated to point to the new place.
Raw
The raw storage format has a performance advantage over QCOW2 in that no
formatting is applied to virtual disks stored in the raw format. Virtual
machine data operations on virtual disks stored in raw format require no
additional work from hosts. When a virtual machine writes data to a given
offset in its virtual disk, the I/O is written to the same offset on the
backing file or logical volume.
Raw format requires that the entire space of the defined image be
preallocated unless using externally managed thin provisioned LUNs from a
storage array.
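For what it's worth, the format a volume actually received can be checked
directly on a host with qemu-img; a small sketch (the path is a placeholder,
and on block storage domains the volume is an LV device node rather than a
plain file):

import json
import subprocess

def image_format(path: str) -> str:
    """Return the on-disk format ("qcow2" or "raw") reported by qemu-img."""
    out = subprocess.run(
        ["qemu-img", "info", "--output=json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)["format"]

print(image_format("/path/to/disk-volume"))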
2 years, 6 months
why can't I set the power management proxy server?
by Tommy Sway
I can use ipmitool to send commands from any KVM host to the other hosts in
the same cluster.
However, an internal error is reported when configuring the power management
proxy on the engine.
How can I locate the fault?
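In case it helps the investigation, the engine-side logs
(/var/log/ovirt-engine/engine.log, and /var/log/vdsm/vdsm.log on the host
chosen as fence proxy) usually contain the real error behind the generic
"internal error". Triggering the same path through the API can also narrow it
down; a rough sketch with the Python SDK (the URL, credentials and host name
are placeholders):

import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",  # placeholder engine
    username="admin@internal",
    password="secret",
    insecure=True,
)
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search="name=myhost")[0]  # placeholder host name
# Ask the engine to run a power management "status" check; this exercises the
# same fence-proxy selection the UI uses and should either return a
# types.PowerManagement object or raise with the underlying error.
result = hosts_service.host_service(host.id).fence(fence_type="status")
print(result)
connection.close()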
2 years, 6 months
about the OVF_STORE and the xleases volume
by Tommy Sway
I wonder whether the xleases volume mentioned here refers to the OVF_STORE?
* A new xleases volume to support VM leases - this feature adds the
ability to acquire a lease per virtual machine on shared storage without
attaching the lease to a virtual machine disk.
A VM lease offers two important capabilities:
* Avoiding split-brain.
* Starting a VM on another host if the original host becomes
non-responsive, which improves the availability of HA VMs.
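As far as I understand, the xleases volume is not the OVF_STORE: the OVF_STORE
disks hold VM and template OVF metadata, while xleases holds the per-VM
sanlock leases described above. A lease is enabled per VM against a storage
domain; a rough sketch with the Python SDK (the connection details and names
are placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",  # placeholder engine
    username="admin@internal",
    password="secret",
    insecure=True,
)
system = connection.system_service()
vm = system.vms_service().list(search="name=myvm")[0]  # placeholder VM name
sd = system.storage_domains_service().list(
    search="name=mydata")[0]  # placeholder storage domain name
# Attach a VM lease held on that storage domain's xleases volume.
system.vms_service().vm_service(vm.id).update(
    types.Vm(lease=types.StorageDomainLease(
        storage_domain=types.StorageDomain(id=sd.id))))
connection.close()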
2 years, 6 months
[ANN] oVirt 4.4.8 Async update #1
by Sandro Bonazzola
oVirt 4.4.8 Async update #1
On August 26th 2021 the oVirt project released an async update to the
following packages:
- ovirt-ansible-collection 1.6.2
- ovirt-engine 4.4.8.5
- ovirt-release44 4.4.8.1
- oVirt Node 4.4.8.1
- oVirt Appliance 4.4-20210826
Fixing the following bugs:
- Bug 1947709 <https://bugzilla.redhat.com/show_bug.cgi?id=1947709> -
  [IPv6] HostedEngineLocal is an isolated libvirt network, breaking upgrades
  from 4.3
- Bug 1966873 <https://bugzilla.redhat.com/show_bug.cgi?id=1966873> -
  [RFE] Create Ansible role for remove stale LUNs example
  remove_mpath_device.yml
- Bug 1997663 <https://bugzilla.redhat.com/show_bug.cgi?id=1997663> - Keep
  cinderlib dependencies optional for 4.4.8
- Bug 1996816 <https://bugzilla.redhat.com/show_bug.cgi?id=1996816> -
  Cluster upgrade fails with: 'OAuthException invalid_grant: The provided
  authorization grant for the auth code has expired.'
oVirt Node Changes:
- Consume above oVirt updates
- GlusterFS 8.6: https://docs.gluster.org/en/latest/release-notes/8.6/
- Fixes for:
  - CVE-2021-22923 <https://access.redhat.com/security/cve/CVE-2021-22923>
    curl: Metalink download sends credentials
  - CVE-2021-22922 <https://access.redhat.com/security/cve/CVE-2021-22922>
    curl: Content not matching hash in Metalink is not being discarded
Full diff list:
--- ovirt-node-ng-image-4.4.8.manifest-rpm 2021-08-19 07:57:44.081590739 +0200
+++ ovirt-node-ng-image-4.4.8.1.manifest-rpm 2021-08-27 08:11:54.863736688 +0200
@@ -2,7 +2,7 @@
-ModemManager-glib-1.10.8-3.el8.x86_64
-NetworkManager-1.32.6-1.el8.x86_64
-NetworkManager-config-server-1.32.6-1.el8.noarch
-NetworkManager-libnm-1.32.6-1.el8.x86_64
-NetworkManager-ovs-1.32.6-1.el8.x86_64
-NetworkManager-team-1.32.6-1.el8.x86_64
-NetworkManager-tui-1.32.6-1.el8.x86_64
+ModemManager-glib-1.10.8-4.el8.x86_64
+NetworkManager-1.32.8-1.el8.x86_64
+NetworkManager-config-server-1.32.8-1.el8.noarch
+NetworkManager-libnm-1.32.8-1.el8.x86_64
+NetworkManager-ovs-1.32.8-1.el8.x86_64
+NetworkManager-team-1.32.8-1.el8.x86_64
+NetworkManager-tui-1.32.8-1.el8.x86_64
@@ -94 +94 @@
-curl-7.61.1-18.el8.x86_64
+curl-7.61.1-18.el8_4.1.x86_64
@@ -106,4 +106,4 @@
-device-mapper-1.02.177-5.el8.x86_64
-device-mapper-event-1.02.177-5.el8.x86_64
-device-mapper-event-libs-1.02.177-5.el8.x86_64
-device-mapper-libs-1.02.177-5.el8.x86_64
+device-mapper-1.02.177-6.el8.x86_64
+device-mapper-event-1.02.177-6.el8.x86_64
+device-mapper-event-libs-1.02.177-6.el8.x86_64
+device-mapper-libs-1.02.177-6.el8.x86_64
@@ -140,36 +140,36 @@
-fence-agents-all-4.2.1-74.el8.x86_64
-fence-agents-amt-ws-4.2.1-74.el8.noarch
-fence-agents-apc-4.2.1-74.el8.noarch
-fence-agents-apc-snmp-4.2.1-74.el8.noarch
-fence-agents-bladecenter-4.2.1-74.el8.noarch
-fence-agents-brocade-4.2.1-74.el8.noarch
-fence-agents-cisco-mds-4.2.1-74.el8.noarch
-fence-agents-cisco-ucs-4.2.1-74.el8.noarch
-fence-agents-common-4.2.1-74.el8.noarch
-fence-agents-compute-4.2.1-74.el8.noarch
-fence-agents-drac5-4.2.1-74.el8.noarch
-fence-agents-eaton-snmp-4.2.1-74.el8.noarch
-fence-agents-emerson-4.2.1-74.el8.noarch
-fence-agents-eps-4.2.1-74.el8.noarch
-fence-agents-heuristics-ping-4.2.1-74.el8.noarch
-fence-agents-hpblade-4.2.1-74.el8.noarch
-fence-agents-ibmblade-4.2.1-74.el8.noarch
-fence-agents-ifmib-4.2.1-74.el8.noarch
-fence-agents-ilo-moonshot-4.2.1-74.el8.noarch
-fence-agents-ilo-mp-4.2.1-74.el8.noarch
-fence-agents-ilo-ssh-4.2.1-74.el8.noarch
-fence-agents-ilo2-4.2.1-74.el8.noarch
-fence-agents-intelmodular-4.2.1-74.el8.noarch
-fence-agents-ipdu-4.2.1-74.el8.noarch
-fence-agents-ipmilan-4.2.1-74.el8.noarch
-fence-agents-kdump-4.2.1-74.el8.x86_64
-fence-agents-mpath-4.2.1-74.el8.noarch
-fence-agents-redfish-4.2.1-74.el8.x86_64
-fence-agents-rhevm-4.2.1-74.el8.noarch
-fence-agents-rsa-4.2.1-74.el8.noarch
-fence-agents-rsb-4.2.1-74.el8.noarch
-fence-agents-sbd-4.2.1-74.el8.noarch
-fence-agents-scsi-4.2.1-74.el8.noarch
-fence-agents-vmware-rest-4.2.1-74.el8.noarch
-fence-agents-vmware-soap-4.2.1-74.el8.noarch
-fence-agents-wti-4.2.1-74.el8.noarch
+fence-agents-all-4.2.1-75.el8.x86_64
+fence-agents-amt-ws-4.2.1-75.el8.noarch
+fence-agents-apc-4.2.1-75.el8.noarch
+fence-agents-apc-snmp-4.2.1-75.el8.noarch
+fence-agents-bladecenter-4.2.1-75.el8.noarch
+fence-agents-brocade-4.2.1-75.el8.noarch
+fence-agents-cisco-mds-4.2.1-75.el8.noarch
+fence-agents-cisco-ucs-4.2.1-75.el8.noarch
+fence-agents-common-4.2.1-75.el8.noarch
+fence-agents-compute-4.2.1-75.el8.noarch
+fence-agents-drac5-4.2.1-75.el8.noarch
+fence-agents-eaton-snmp-4.2.1-75.el8.noarch
+fence-agents-emerson-4.2.1-75.el8.noarch
+fence-agents-eps-4.2.1-75.el8.noarch
+fence-agents-heuristics-ping-4.2.1-75.el8.noarch
+fence-agents-hpblade-4.2.1-75.el8.noarch
+fence-agents-ibmblade-4.2.1-75.el8.noarch
+fence-agents-ifmib-4.2.1-75.el8.noarch
+fence-agents-ilo-moonshot-4.2.1-75.el8.noarch
+fence-agents-ilo-mp-4.2.1-75.el8.noarch
+fence-agents-ilo-ssh-4.2.1-75.el8.noarch
+fence-agents-ilo2-4.2.1-75.el8.noarch
+fence-agents-intelmodular-4.2.1-75.el8.noarch
+fence-agents-ipdu-4.2.1-75.el8.noarch
+fence-agents-ipmilan-4.2.1-75.el8.noarch
+fence-agents-kdump-4.2.1-75.el8.x86_64
+fence-agents-mpath-4.2.1-75.el8.noarch
+fence-agents-redfish-4.2.1-75.el8.x86_64
+fence-agents-rhevm-4.2.1-75.el8.noarch
+fence-agents-rsa-4.2.1-75.el8.noarch
+fence-agents-rsb-4.2.1-75.el8.noarch
+fence-agents-sbd-4.2.1-75.el8.noarch
+fence-agents-scsi-4.2.1-75.el8.noarch
+fence-agents-vmware-rest-4.2.1-75.el8.noarch
+fence-agents-vmware-soap-4.2.1-75.el8.noarch
+fence-agents-wti-4.2.1-75.el8.noarch
@@ -215,7 +215,7 @@
-glusterfs-8.5-2.el8.x86_64
-glusterfs-cli-8.5-2.el8.x86_64
-glusterfs-client-xlators-8.5-2.el8.x86_64
-glusterfs-events-8.5-2.el8.x86_64
-glusterfs-fuse-8.5-2.el8.x86_64
-glusterfs-geo-replication-8.5-2.el8.x86_64
-glusterfs-server-8.5-2.el8.x86_64
+glusterfs-8.6-1.el8.x86_64
+glusterfs-cli-8.6-1.el8.x86_64
+glusterfs-client-xlators-8.6-1.el8.x86_64
+glusterfs-events-8.6-1.el8.x86_64
+glusterfs-fuse-8.6-1.el8.x86_64
+glusterfs-geo-replication-8.6-1.el8.x86_64
+glusterfs-server-8.6-1.el8.x86_64
@@ -301,5 +301,5 @@
-kernel-4.18.0-326.el8.x86_64
-kernel-core-4.18.0-326.el8.x86_64
-kernel-modules-4.18.0-326.el8.x86_64
-kernel-tools-4.18.0-326.el8.x86_64
-kernel-tools-libs-4.18.0-326.el8.x86_64
+kernel-4.18.0-331.el8.x86_64
+kernel-core-4.18.0-331.el8.x86_64
+kernel-modules-4.18.0-331.el8.x86_64
+kernel-tools-4.18.0-331.el8.x86_64
+kernel-tools-libs-4.18.0-331.el8.x86_64
@@ -310 +310 @@
-kmod-kvdo-6.2.5.65-79.el8.x86_64
+kmod-kvdo-6.2.5.72-79.el8.x86_64
@@ -363 +363 @@
-libcurl-7.61.1-18.el8.x86_64
+libcurl-7.61.1-18.el8_4.1.x86_64
@@ -381,6 +381,6 @@
-libgfapi0-8.5-2.el8.x86_64
-libgfchangelog0-8.5-2.el8.x86_64
-libgfrpc0-8.5-2.el8.x86_64
-libgfxdr0-8.5-2.el8.x86_64
-libglusterd0-8.5-2.el8.x86_64
-libglusterfs0-8.5-2.el8.x86_64
+libgfapi0-8.6-1.el8.x86_64
+libgfchangelog0-8.6-1.el8.x86_64
+libgfrpc0-8.6-1.el8.x86_64
+libgfxdr0-8.6-1.el8.x86_64
+libglusterd0-8.6-1.el8.x86_64
+libglusterfs0-8.6-1.el8.x86_64
@@ -416 +415,0 @@
-libmetalink-0.1.3-7.el8.x86_64
@@ -558,2 +557,2 @@
-lvm2-2.03.12-5.el8.x86_64
-lvm2-libs-2.03.12-5.el8.x86_64
+lvm2-2.03.12-6.el8.x86_64
+lvm2-libs-2.03.12-6.el8.x86_64
@@ -641 +640 @@
-ovirt-ansible-collection-1.6.0-1.el8.noarch
+ovirt-ansible-collection-1.6.2-1.el8.noarch
@@ -649 +648 @@
-ovirt-node-ng-image-update-placeholder-4.4.8-1.el8.noarch
+ovirt-node-ng-image-update-placeholder-4.4.8.1-1.el8.noarch
@@ -657,2 +656,2 @@
-ovirt-release-host-node-4.4.8-1.el8.noarch
-ovirt-release44-4.4.8-1.el8.noarch
+ovirt-release-host-node-4.4.8.1-1.el8.noarch
+ovirt-release44-4.4.8.1-1.el8.noarch
@@ -665,3 +664,3 @@
-pacemaker-cluster-libs-2.1.0-5.el8.x86_64
-pacemaker-libs-2.1.0-5.el8.x86_64
-pacemaker-schemas-2.1.0-5.el8.noarch
+pacemaker-cluster-libs-2.1.0-6.el8.x86_64
+pacemaker-libs-2.1.0-6.el8.x86_64
+pacemaker-schemas-2.1.0-6.el8.noarch
@@ -773 +772 @@
-python3-gluster-8.5-2.el8.x86_64
+python3-gluster-8.6-1.el8.x86_64
@@ -835 +834 @@
-python3-perf-4.18.0-326.el8.x86_64
+python3-perf-4.18.0-331.el8.x86_64
@@ -935,2 +934,2 @@
-selinux-policy-3.14.3-75.el8.noarch
-selinux-policy-targeted-3.14.3-75.el8.noarch
+selinux-policy-3.14.3-76.el8.noarch
+selinux-policy-targeted-3.14.3-76.el8.noarch
@@ -941 +940 @@
-shadow-utils-4.6-13.el8.x86_64
+shadow-utils-4.6-14.el8.x86_64
@@ -948 +947 @@
-sos-4.1-4.el8.noarch
+sos-4.1-5.el8.noarch
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
2 years, 7 months
Frequent events seen like VM "vmname" is not responding
by manoj.sharma99765@gmail.com
I am using oVirt Open Virtualization Manager, Software Version 4.4.6.8-1.el8, and we have a single node running oVirt Node 4.4.6.
We are facing slowness while using the VMs, and we also see connection resets between the VMs and the target servers (Nexus).
We are getting the following entries in the oVirt Manager events:
VM "vmname" is not responding
We are getting this for all the running VMs.
We checked the node's resources and utilization and everything seems normal. Could you please guide us on how to debug this further?
2 years, 7 months
Cannot activate a Storage Domain after an oVirt crash
by nicolas@devels.es
Hi,
We're running oVirt 4.3.8 and we recently had an oVirt crash after moving
too many disks between storage domains.
Specifically, one of the Storage Domains reports status "Unknown", and its
"Total/Free/Guaranteed free space" values show "[N/A]".
After trying to activate it in the Data Center we see messages like
these from all of the hosts:
VDSM hostX command GetVGInfoVDS failed: Volume Group does not exist:
(u'vg_uuid: Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp',)
I tried putting the Storage Domain in maintenance and it fails with
messages like:
Storage Domain iaasb13 (Data Center KVMRojo) was deactivated by
system because it's not visible by any of the hosts.
Failed to update OVF disks 8661acd1-d1c4-44a0-a4d4-ddee834844e9, OVF
data isn't updated on those OVF stores (Data Center KVMRojo, Storage
Domain iaasb13).
Failed to update VMs/Templates OVF data for Storage Domain iaasb13
in Data Center KVMRojo.
I'm sure the storage domain backend is up and running and that the LUN is
being exported.
Any hints on how I can debug this problem and restore the Storage Domain?
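One low-level check is whether the hosts' LVM can see the volume group from
the error message at all; a rough sketch run on a host (note that host-side
LVM filters can hide shared devices, so a miss here is not conclusive):

import subprocess

vg_uuid = "Usi3y8-S4eq-EXtl-FA58-MA3K-b4vE-4d9SCp"  # UUID from the VDSM error
result = subprocess.run(
    ["vgs", "--select", f"vg_uuid={vg_uuid}",
     "-o", "vg_name,vg_uuid,pv_count", "--noheadings"],
    capture_output=True, text=True,
)
print(result.stdout.strip() or "VG not visible on this host")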
Thanks.
2 years, 7 months