Q: Error ID 119 - ioctl(KVM_CREATE_VM) failed: Cannot allocate memory
by Andrei Verovski
Hi!
Does this error message mean the node has run out of RAM?
That would be strange, since there are about 20 GB of free RAM and the VM requires
only 8 GB.
I noticed this happened while Vinchin was in the process of removing a snapshot
from another VM.
Thanks.
VM Jumis_VM-Active is down with error. Exit message: internal error:
process exited while connecting to monitor: ioctl(KVM_CREATE_VM) failed:
12 Cannot allocate memory
2022-06-22T19:50:46.501704Z qemu-kvm: failed to initialize KVM: Cannot
allocate memory.
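For reference, these are the generic checks I plan to run on the host right after the next failure; they are standard Linux commands, nothing oVirt-specific, and may or may not reveal the cause:
free -h                                               # overall free vs. available memory
cat /proc/sys/vm/overcommit_memory                    # kernel overcommit policy (0/1/2)
grep -i -e hugepages -e commit /proc/meminfo          # huge pages and commit limits
dmesg | grep -i -e oom -e "allocate memory" | tail    # kernel-side allocation failures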
2 years, 5 months
Move Master Storage Domain
by john@penrod.net
Running oVirt 4.3.10.
Three storage domains. #1 is the current master. No disks/data remain on it. All virtual machines have been moved.
I plan to put it in maintenance mode and let it force an election. I don't care which of the remaining two becomes master.
Will this impact virtual machines that are currently running? If possible, I need to keep them running.
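For reference, this is roughly how I would check which domain is currently flagged as master through the REST API before and after the switch; the engine FQDN, credentials and data-center ID below are placeholders, and the exact XML fields may differ on 4.3:
curl -s -k -u 'admin@internal:password' \
  'https://engine.example.com/ovirt-engine/api/datacenters/<dc-id>/storagedomains' \
  | grep -E '<name>|<master>'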
Thoughts?
2 years, 5 months
Cannot export VM to another data domain. "o.original_template is undefined"
by Gilboa Davara
Hello,
4.4.10 Gluster-based cluster w/ 3 nodes.
I'm backing up all VMs before upgrading the setup to 4.5 (hopefully to be done
right before 4.5.1, which carries the Gluster fix, is released).
When trying to export a VM to another data domain, several VMs show the
following error: "Export VM Failed. o.original_template is undefined".
The engine log looks clean (I see nothing about the failed export).
Any idea what's broken?
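If it helps, these are the greps I can run on the engine host; the log paths are the standard oVirt locations, and I am assuming the export command shows up in engine.log under a name containing "ExportVm":
# the "o.original_template is undefined" text comes from the web UI,
# so ui.log is probably more relevant than engine.log
grep -i 'original_template' /var/log/ovirt-engine/ui.log | tail
grep -i 'ExportVm' /var/log/ovirt-engine/engine.log | tail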
- Gilboa
2 years, 5 months
Gluster Volume cannot be activated Ovirt 4.5 Centos 8 Stream
by m.rohweder@itm-h.de
Hi,
I converted my oVirt 4.5 to oVirt 4.5 hyperconverged
(activated the Gluster service on the cluster and reinstalled all hosts).
I can create bricks on the hosts and I'm able to create a volume, all with the oVirt GUI.
But if I want to activate the volume, I get the error message that no host with running Gluster is found in my cluster.
All hosts show a message that oVirt thinks glusterd is not running, but it is running on all hosts.
What can I do to use Gluster with the local storage on each host?
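These are the checks I can run on each host if that helps; they are plain Gluster/systemd commands, plus a vdsm-client call whose gluster-related output I am only assuming:
systemctl status glusterd        # confirm the daemon is really active
gluster peer status              # all peers should show "Connected"
gluster volume status            # bricks and their PIDs
vdsm-client Host getCapabilities | grep -i gluster   # what vdsm reports to the engine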
Greetings, Michael
2 years, 5 months
Ovirt 4.4 - starting just-installed host from ovirt console fails
by David Johnson
Good afternoon all,
I recently had to rebuild my cluster due to a self-inflicted error.
I have finally managed to get the ovirt host software installed and
communicating on all hosts.
The first host installed and started cleanly. However, after installation
the second host is failing to start. Prior to my cluster crash, this host
was running well in the cluster.
During the downtime, we applied microcode and BIOS updates as part of the
recovery process.
I have reviewed this chain:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/N3PPT34GBRLP...
and reached a dead end.
Based on what I see (in the long stream of logs and info following), it
looks like I should change the cluster CPU type from Cascadelake to Haswell
to restore normal operation.
The long involved stuff:
The Engine reports:
[image: image.png]
Host CPU type is not compatible with Cluster Properties.
[image: image.png]
The host CPU does not match the Cluster CPU Type and is running in a
degraded mode. It is missing the following CPU flags:
model_Cascadelake-Server-noTSX. Please update the host CPU microcode or
change the Cluster CPU Type.
The Cluster definition is:
[image: image.png]
*lscpu returns:*
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
BIOS Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
Stepping: 2
CPU MHz: 3300.000
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 4988.45
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl
vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic
movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm
cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp
tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2
smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat
pln pts md_clear flush_l1d
*cpuid returns:*
CPU 0:
vendor_id = "GenuineIntel"
version information (1/eax):
processor type = primary processor (0)
family = 0x6 (6)
model = 0xf (15)
stepping id = 0x2 (2)
extended family = 0x0 (0)
extended model = 0x3 (3)
(family synth) = 0x6 (6)
(model synth) = 0x3f (63)
(simple synth) = Intel (unknown type) (Haswell C1/M1/R2) {Haswell},
22nm
*virsh domcapabilities returns:*
<domainCapabilities>
<path>/usr/libexec/qemu-kvm</path>
<domain>kvm</domain>
<machine>pc-i440fx-rhel7.6.0</machine>
<arch>x86_64</arch>
<vcpu max='240'/>
<iothreads supported='yes'/>
<os supported='yes'>
<enum name='firmware'/>
<loader supported='yes'>
<value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
<enum name='type'>
<value>rom</value>
<value>pflash</value>
</enum>
<enum name='readonly'>
<value>yes</value>
<value>no</value>
</enum>
<enum name='secure'>
<value>no</value>
</enum>
</loader>
</os>
<cpu>
<mode name='host-passthrough' supported='yes'>
<enum name='hostPassthroughMigratable'>
<value>on</value>
<value>off</value>
</enum>
</mode>
<mode name='maximum' supported='yes'>
<enum name='maximumMigratable'>
<value>on</value>
<value>off</value>
</enum>
</mode>
<mode name='host-model' supported='yes'>
<model fallback='forbid'>Haswell-noTSX-IBRS</model>
<vendor>Intel</vendor>
<feature policy='require' name='vme'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='vmx'/>
<feature policy='require' name='pdcm'/>
<feature policy='require' name='f16c'/>
<feature policy='require' name='rdrand'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='umip'/>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='arch-capabilities'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='xsaveopt'/>
<feature policy='require' name='pdpe1gb'/>
<feature policy='require' name='abm'/>
<feature policy='require' name='invtsc'/>
<feature policy='require' name='ibpb'/>
<feature policy='require' name='ibrs'/>
<feature policy='require' name='amd-stibp'/>
<feature policy='require' name='amd-ssbd'/>
<feature policy='require' name='skip-l1dfl-vmentry'/>
<feature policy='require' name='pschange-mc-no'/>
</mode>
<mode name='custom' supported='yes'>
<model usable='yes'>qemu64</model>
<model usable='yes'>qemu32</model>
<model usable='no'>phenom</model>
<model usable='yes'>pentium3</model>
<model usable='yes'>pentium2</model>
<model usable='yes'>pentium</model>
<model usable='yes'>n270</model>
<model usable='yes'>kvm64</model>
<model usable='yes'>kvm32</model>
<model usable='yes'>coreduo</model>
<model usable='yes'>core2duo</model>
<model usable='no'>athlon</model>
<model usable='yes'>Westmere-IBRS</model>
<model usable='yes'>Westmere</model>
<model usable='no'>Snowridge</model>
<model usable='no'>Skylake-Server-noTSX-IBRS</model>
<model usable='no'>Skylake-Server-IBRS</model>
<model usable='no'>Skylake-Server</model>
<model usable='no'>Skylake-Client-noTSX-IBRS</model>
<model usable='no'>Skylake-Client-IBRS</model>
<model usable='no'>Skylake-Client</model>
<model usable='yes'>SandyBridge-IBRS</model>
<model usable='yes'>SandyBridge</model>
<model usable='yes'>Penryn</model>
<model usable='no'>Opteron_G5</model>
<model usable='no'>Opteron_G4</model>
<model usable='no'>Opteron_G3</model>
<model usable='yes'>Opteron_G2</model>
<model usable='yes'>Opteron_G1</model>
<model usable='yes'>Nehalem-IBRS</model>
<model usable='yes'>Nehalem</model>
<model usable='yes'>IvyBridge-IBRS</model>
<model usable='yes'>IvyBridge</model>
<model usable='no'>Icelake-Server-noTSX</model>
<model usable='no'>Icelake-Server</model>
<model usable='no' deprecated='yes'>Icelake-Client-noTSX</model>
<model usable='no' deprecated='yes'>Icelake-Client</model>
<model usable='yes'>Haswell-noTSX-IBRS</model>
<model usable='yes'>Haswell-noTSX</model>
<model usable='no'>Haswell-IBRS</model>
<model usable='no'>Haswell</model>
<model usable='no'>EPYC-Rome</model>
<model usable='no'>EPYC-Milan</model>
<model usable='no'>EPYC-IBPB</model>
<model usable='no'>EPYC</model>
<model usable='no'>Dhyana</model>
<model usable='no'>Cooperlake</model>
<model usable='yes'>Conroe</model>
<model usable='no'>Cascadelake-Server-noTSX</model>
<model usable='no'>Cascadelake-Server</model>
<model usable='no'>Broadwell-noTSX-IBRS</model>
<model usable='no'>Broadwell-noTSX</model>
<model usable='no'>Broadwell-IBRS</model>
<model usable='no'>Broadwell</model>
<model usable='yes'>486</model>
</mode>
</cpu>
<memoryBacking supported='yes'>
<enum name='sourceType'>
<value>file</value>
<value>anonymous</value>
<value>memfd</value>
</enum>
</memoryBacking>
<devices>
<disk supported='yes'>
<enum name='diskDevice'>
<value>disk</value>
<value>cdrom</value>
<value>floppy</value>
<value>lun</value>
</enum>
<enum name='bus'>
<value>ide</value>
<value>fdc</value>
<value>scsi</value>
<value>virtio</value>
<value>usb</value>
<value>sata</value>
</enum>
<enum name='model'>
<value>virtio</value>
<value>virtio-transitional</value>
<value>virtio-non-transitional</value>
</enum>
</disk>
<graphics supported='yes'>
<enum name='type'>
<value>vnc</value>
<value>spice</value>
<value>egl-headless</value>
</enum>
</graphics>
<video supported='yes'>
<enum name='modelType'>
<value>vga</value>
<value>cirrus</value>
<value>qxl</value>
<value>virtio</value>
<value>none</value>
<value>bochs</value>
<value>ramfb</value>
</enum>
</video>
<hostdev supported='yes'>
<enum name='mode'>
<value>subsystem</value>
</enum>
<enum name='startupPolicy'>
<value>default</value>
<value>mandatory</value>
<value>requisite</value>
<value>optional</value>
</enum>
<enum name='subsysType'>
<value>usb</value>
<value>pci</value>
<value>scsi</value>
</enum>
<enum name='capsType'/>
<enum name='pciBackend'/>
</hostdev>
<rng supported='yes'>
<enum name='model'>
<value>virtio</value>
<value>virtio-transitional</value>
<value>virtio-non-transitional</value>
</enum>
<enum name='backendModel'>
<value>random</value>
<value>egd</value>
<value>builtin</value>
</enum>
</rng>
<filesystem supported='yes'>
<enum name='driverType'>
<value>path</value>
<value>handle</value>
<value>virtiofs</value>
</enum>
</filesystem>
<tpm supported='yes'>
<enum name='model'>
<value>tpm-tis</value>
<value>tpm-crb</value>
</enum>
<enum name='backendModel'>
<value>passthrough</value>
<value>emulator</value>
</enum>
</tpm>
</devices>
<features>
<gic supported='no'/>
<vmcoreinfo supported='yes'/>
<genid supported='yes'/>
<backingStoreInput supported='yes'/>
<backup supported='yes'/>
<sev supported='no'/>
</features>
</domainCapabilities>
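If it helps, these can be re-run on the host to confirm what it actually advertises to the engine; the cpuModel/cpuFlags key names are assumed from the vdsm capabilities JSON:
vdsm-client Host getCapabilities | grep -i -e cpuModel -e cpuFlags   # what vdsm reports
virsh -r domcapabilities | grep -A1 'host-model'                     # libvirt's closest named model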
Please advise.
2 years, 5 months
Preferred RHEL Based Distro For oVirt
by Clint Boggio
Good Day All;
I am inquiring about which RHEL-based distros are currently preferred and which ones are currently supported. I know the oVirt project is a RH entity, so RHEL and CentOS Stream are the base offering. Would it be, or is it, feasible for Rocky 8.x or Alma 8.x to be the base OS for an oVirt deployment, seeing as they are both RHEL clones?
How confident is the user community in the stability of CentOS Stream for production use, compared to Alma or Rocky?
2 years, 5 months
failed to mount hosted engine gluster storage - how to debug?
by diego.ercolani@ssis.sm
Hello, I have an issue that is probably related to my particular setup, but I think some checks are missing.
Here is the story.
I have a cluster of two nodes on 4.4.10.3 with an upgraded kernel, as the CPU (Ryzen 5) suffers from an incompatibility issue with the kernel provided by the 4.4.10.x series.
On each node there are three glusterfs "partitions" in replica mode: one for the hosted_engine, the other two for user data.
The third node was an old i3 workstation used only to provide the arbiter partition to the glusterfs cluster.
I installed a new server (Ryzen processor) with 4.5.0, successfully installed glusterfs 10.1, and, after removing the bricks provided by the old i3, added the arbiter bricks on glusterfs 10.1, while the replica bricks remain on 8.6.
I successfully imported the new node into the oVirt engine (after updating the engine to 4.5).
The problem is that ovirt-ha-broker doesn't start, complaining that it is not possible to connect to the storage (I suppose the hosted_engine storage), so I did some digging that I'm going to show here:
####
1. The node seems to be correctly configured:
[root@ovirt-node3 devices]# vdsm-tool validate-config
SUCCESS: ssl configured to true. No conflicts
[root@ovirt-node3 devices]# vdsm-tool configure
Checking configuration status...
libvirt is already configured for vdsm
SUCCESS: ssl configured to true. No conflicts
sanlock is configured for vdsm
Managed volume database is already configured
lvm is configured for vdsm
Current revision of multipath.conf detected, preserving
Running configure...
Done configuring modules to VDSM.
[root@ovirt-node3 devices]# vdsm-tool validate-config
SUCCESS: ssl configured to true. No conflicts
####
2. The node refuses to mount via hosted-engine (same error appears in broker.log):
[root@ovirt-node3 devices]# hosted-engine --connect-storage
Traceback (most recent call last):
File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_setup/connect_storage_server.py", line 30, in <module>
timeout=ohostedcons.Const.STORAGE_SERVER_TIMEOUT,
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/client/client.py", line 312, in connect_storage_server
sserver.connect_storage_server(timeout=timeout)
File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_server.py", line 451, in connect_storage_server
'Connection to storage server failed'
RuntimeError: Connection to storage server failed
#####
3. Manual mounting of glusterfs works correctly:
[root@ovirt-node3 devices]# grep storage /etc/ovirt-hosted-engine/hosted-engine.conf
storage=ovirt-node2.ovirt:/gveng
# The following are used only for iSCSI storage
[root@ovirt-node3 devices]#
[root@ovirt-node3 devices]# mount -t glusterfs ovirt-node2.ovirt:/gveng /mnt/tmp/
[root@ovirt-node3 devices]# ls -l /mnt/tmp
total 0
drwxr-xr-x. 6 vdsm kvm 64 Dec 15 19:04 7b8f1cc9-e3de-401f-b97f-8c281ca30482
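Other things I can check if needed; the log paths and the op-version query are what I believe ship with 4.5, so please correct me if they have moved:
tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log        # HA broker log
tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log         # HA agent log
grep -i connectStorageServer /var/log/vdsm/vdsm.log | tail   # vdsm side of the failed call
gluster volume get all cluster.op-version                    # op-version, since clients mix 8.6 and 10.1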
What else should I check? Thank you, and sorry for the long message.
Diego
2 years, 5 months
moVirt delisted from Google Play
by Filip Krepinsky
Hi all,
Unfortunately, we decided to delist moVirt from Google Play. As you might
have noticed, the app has not been maintained for some time and also the
main repository https://github.com/oVirt/moVirt has been archived. It is
not possible for us to maintain the store presence because of new requirements for
Google APIs (and thus for our libraries, which are obsolete at this point), app
interoperability, and, in general, the expectations of our users.
For users who still wish to use moVirt, you can keep your current
application installed or download an APK (
https://github.com/oVirt/moVirt/releases/tag/v2.1) and install moVirt
manually. Just keep in mind that the app might not behave properly (mostly
on newer versions of Android).
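For example, sideloading the downloaded APK with adb looks roughly like this (the filename is just an example):
# enable USB debugging on the device first, then:
adb install moVirt-2.1.apk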
I hope moVirt has been helpful with managing your envs :)
Best regards,
Filip
2 years, 5 months