import vm disk image on ISCSI Data Domain
by sultanu@inbox.ru
Hello all,
Maybe somebody has experience with this.
I have a cluster of 3 hosts plus iSCSI storage as a data domain. How can I upload a VM image from my old oVirt (bare metal with local storage) to my new cluster from the CLI, NOT from the web UI?
Downloading it to my local computer and uploading it through the web UI takes a long time.
Thank you for your time.
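One CLI route worth trying (a sketch only; the engine URL, storage domain name, credentials and paths below are placeholders, and the exact options should be confirmed with ovirt-img upload-disk --help on your version):

# install the imageio client on a machine that holds the image and can reach the engine
dnf install ovirt-imageio-client

# the engine CA certificate can be fetched from
# https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
ovirt-img upload-disk \
    --engine-url https://engine.example.com \
    --username admin@internal \
    --password-file /root/engine-pass \
    --cafile ca.pem \
    --storage-domain my_iscsi_domain \
    --disk-format qcow2 \
    /path/to/old-vm-disk.qcow2

If ovirt-img is not available, the Python SDK ships an upload_disk.py example (typically under /usr/share/doc/python3-ovirt-engine-sdk4/examples/) that uploads through the same API.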
2 years, 6 months
Lost space in /var/log
by Andrei Verovski
Hi !
I have a strange situation with low disk space on /var/log, yet I can't
figure out what has consumed so much space.
See the du and find output below.
Thanks in advance for any suggestion(s).
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-var 15G 1.3G 13G 9% /var
/dev/mapper/centos-var_log 7.8G 7.1G 293M 97% /var/log
/dev/mapper/centos-var_log_audit 2.0G 41M 1.8G 3% /var/log/audit
#
# du -ah /var/log | sort -n -r | head -n 20
664K /var/log/vdsm
660K /var/log/vdsm/vdsm.log
624K /var/log/gdm
580K /var/log/anaconda/storage.log
480K /var/log/anaconda/packaging.log
380K /var/log/gdm/:0.log.4
316K /var/log/anaconda/syslog
276K /var/log/tuned
252K /var/log/libvirt/qemu/NextCloud-Collabora-LVM.log-20210806
220K /var/log/Xorg.0.log
168K /var/log/gdm/:0.log
156K /var/log/yum.log-20191117
156K /var/log/secure-20180726
132K /var/log/libvirt/qemu/NextCloud-Collabora-Active.log
128K /var/log/anaconda/anaconda.log
120K /var/log/hp-snmp-agents
116K /var/log/hp-snmp-agents/cma.log
112K /var/log/vinchin
104K /var/log/vinchin/kvm_backup_service
104K /var/log/tuned/tuned.log.2
#
# find . -printf '%s %p\n'| sort -nr | head -10
16311686 ./rhsm/rhsm.log
4070423 ./cron-20180726
3667146 ./anaconda/journal.log
3409071 ./secure
3066660 ./rhsm/rhsm.log-20180726
2912670 ./audit/audit.log
1418007 ./sanlock.log
1189580 ./vdsm/vdsm.log
592718 ./anaconda/storage.log
487567 ./anaconda/packaging.log
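A side note with an illustrative sketch: du -ah piped into sort -n sorts human-readable sizes as plain numbers, so the listing above is ordered misleadingly. Sorting with -h, and comparing du's total against df, makes space held by deleted-but-open files easier to spot:

# per-directory totals on this filesystem only, sorted by human-readable size
du -xh --max-depth=1 /var/log | sort -h

# if du's total is far below what df reports, deleted-but-open files are the usual suspect
du -sh /var/log
df -h /var/log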
2 years, 6 months
Re: Lost space in /var/log
by Michael Thomas
This can also happen with a misconfigured logrotate config. If a
process is writing to a large log file, and logrotate comes along and
removes it, then the process still has an open file handle to the large
file even though you can't see it. The space won't be freed until the
process closes the file handle (e.g. by restarting the process or rebooting).
The following command should give you a list of these ghost files that
are still open but have been removed from the directory tree:
lsof | grep '(deleted)'
Stackexchange has some additional useful tips:
https://unix.stackexchange.com/questions/68523/find-and-remove-large-file...
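One way to act on that list without rebooting (an illustrative sketch; the service name, PID and FD are placeholders taken from the lsof output):

# narrow the listing to open-but-unlinked files (link count below 1); note the PID and FD columns
lsof +L1

# either restart the service that holds the file open ...
systemctl restart SERVICE_NAME

# ... or truncate the still-open file in place through /proc
: > /proc/PID/fd/FD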
--Mike
On 6/22/22 15:49, Matthew.Stier(a)fujitsu.com wrote:
> Deleted some files to "clean up" /var/log, but the space was not recovered?
>
> Space for a deleted file is only recovered when all references to it are removed. This includes the directory entry, and any open file handles to the file.
>
> This is the basis of a popular trick for a self-deleting scratch file: create a file, open a file handle to it, and then remove the file. The open file handle can then be written to, and read from, as long as you like, but when the file handle is closed, explicitly or implicitly, the space is automatically recovered.
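An illustrative shell sketch of that scratch-file trick (the path is arbitrary):

# open fd 3 read-write on a new file, then unlink it immediately
exec 3<> /tmp/scratchfile
rm /tmp/scratchfile

# the data still lives on disk, reachable only through fd 3
echo "temporary data" >&3

# closing the descriptor releases the space
exec 3>&-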
2 years, 6 months
Q: Error ID 119 - ioctl(KVM_CREATE_VM) failed: Cannot allocate memory
by Andrei Verovski
Hi !
Does this error message mean the node ran out of RAM?
Quite strange, since there are about 20 GB of free RAM, and the VM requires
only 8.
I noticed this happened while Vinchin was in the process of removing a snapshot
from another VM.
Thanks.
VM Jumis_VM-Active is down with error. Exit message: internal error:
process exited while connecting to monitor: ioctl(KVM_CREATE_VM) failed:
12 Cannot allocate memory
2022-06-22T19:50:46.501704Z qemu-kvm: failed to initialize KVM: Cannot
allocate memory.
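For reference, ENOMEM from KVM_CREATE_VM can occur even with plenty of free RAM, for example when memory is reserved for hugepages, when overcommit accounting refuses the allocation, or when kernel memory is fragmented. A few diagnostic commands one might run on the host (an illustrative sketch, not a diagnosis):

# overall memory and anything reserved for hugepages
free -h
grep -i huge /proc/meminfo

# overcommit policy and current commit accounting
cat /proc/sys/vm/overcommit_memory /proc/sys/vm/overcommit_ratio
grep -E 'CommitLimit|Committed_AS' /proc/meminfo

# availability of higher-order pages (a rough view of fragmentation)
cat /proc/buddyinfo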
2 years, 6 months
Move Master Storage Domain
by john@penrod.net
Running oVirt 4.3.10.
Three storage domains. #1 is the current master. No disks/data remain on it. All virtual machines have been moved.
I plan to put it in maintenance mode and let that force an election. I don't care which of the remaining two becomes master.
Will this impact virtual machines that are currently running? If possible, I need to keep them running.
Thoughts?
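After the election, one way to confirm which domain ended up as master is the REST API's attached-storage-domains listing (a sketch; the engine URL, data center ID and credentials are placeholders):

# each attached storage domain reports a master flag
curl -s -k -u admin@internal:PASSWORD \
    https://engine.example.com/ovirt-engine/api/datacenters/DC_ID/storagedomains \
    | grep -E '<name>|<master>'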
2 years, 6 months
Cannot export VM to another data domain. "o.original_template is undefined"
by Gilboa Davara
Hello,
A 4.4.10 Gluster-based cluster with 3 nodes.
I'm backing up all VMs before upgrading the setup to 4.5 (hopefully done
right before 4.5.1 with the Gluster fix is released).
When trying to export a VM to another data domain, several VMs show the
following error: "Export VM Failed. o.original_template is undefined".
The engine log looks clean (I see nothing there about the failed export).
Any idea what's broken?
- Gilboa
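"o.original_template is undefined" looks like a frontend (JavaScript) error rather than an engine-side failure, so one place worth checking (an added suggestion; the paths are the usual engine defaults) is the Admin Portal's UI log on the engine host:

# UI/GWT exceptions are logged separately from engine.log
grep -i original_template /var/log/ovirt-engine/ui.log

# and the engine-side view of the export attempt, if any
grep -iE 'export|original_template' /var/log/ovirt-engine/engine.log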
2 years, 6 months
Gluster Volume cannot be activated Ovirt 4.5 Centos 8 Stream
by m.rohweder@itm-h.de
Hi,
I converted my oVirt 4.5 setup to oVirt 4.5 hyperconverged
(enabled the Gluster service on the cluster and reinstalled all hosts).
I can create bricks on the hosts and I'm able to create a volume, all from the oVirt GUI.
But when I want to activate the volume, I get the error message that no host with running Gluster was found in my cluster.
All hosts show a message that oVirt thinks glusterd is not running, but it is running on all hosts.
What can I do to use Gluster with the local storage on each host?
Greetings, Michael
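A few things one might check on the hosts (an illustrative sketch; the engine learns about glusterd through VDSM rather than by probing it directly, so the VDSM gluster plugin matters too):

# glusterd itself, on each host
systemctl status glusterd
gluster peer status
gluster volume status

# VDSM and its gluster support, which the engine relies on
rpm -q vdsm-gluster
systemctl status vdsmd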
2 years, 6 months
Ovirt 4.4 - starting just-installed host from ovirt console fails
by David Johnson
Good afternoon all,
I recently had to rebuild my cluster due to a self-inflicted error.
I have finally managed to get the ovirt host software installed and
communicating on all hosts.
The first host installed and started cleanly. However, after installation
the second host is failing to start. Prior to my cluster crash, this host
was running well in the cluster.
During the downtime, we applied microcode and BIOS updates as part of the
recovery process.
I have reviewed this chain:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/N3PPT34GBRLP...
and reached a dead end.
Based on what I see (in the long stream of logs and info following), it
looks like I should change the cluster CPU type from Cascadelake to Haswell
to restore normal operation.
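If that turns out to be the fix, the cluster CPU type can be changed in the Admin Portal (Compute > Clusters > Edit > CPU Type) or via the REST API. A sketch follows; the engine URL, credentials, cluster ID and the exact CPU type string are placeholders, and the valid type strings can be listed from the clusterlevels API:

# list the CPU type names known for a given cluster level (4.6 here as an example)
curl -s -k -u admin@internal:PASSWORD \
    https://engine.example.com/ovirt-engine/api/clusterlevels/4.6

# update the cluster CPU type (cluster ID and type string are placeholders)
curl -s -k -u admin@internal:PASSWORD \
    -X PUT -H 'Content-Type: application/xml' \
    -d '<cluster><cpu><type>Intel Haswell-noTSX Family</type></cpu></cluster>' \
    https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID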
The long involved stuff:
The Engine reports (screenshots omitted):
Host CPU type is not compatible with Cluster Properties.
The host CPU does not match the Cluster CPU Type and is running in a
degraded mode. It is missing the following CPU flags:
model_Cascadelake-Server-noTSX. Please update the host CPU microcode or
change the Cluster CPU Type.
The cluster definition was provided as a screenshot (also omitted).
*lscpu returns:*
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
BIOS Vendor ID: Intel(R) Corporation
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
BIOS Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
Stepping: 2
CPU MHz: 3300.000
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 4988.45
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl
vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic
movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm
cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp
tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2
smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat
pln pts md_clear flush_l1d
*cpuid returns:*
CPU 0:
vendor_id = "GenuineIntel"
version information (1/eax):
processor type = primary processor (0)
family = 0x6 (6)
model = 0xf (15)
stepping id = 0x2 (2)
extended family = 0x0 (0)
extended model = 0x3 (3)
(family synth) = 0x6 (6)
(model synth) = 0x3f (63)
(simple synth) = Intel (unknown type) (Haswell C1/M1/R2) {Haswell},
22nm
*virsh domcapabilities returns:*
<domainCapabilities>
<path>/usr/libexec/qemu-kvm</path>
<domain>kvm</domain>
<machine>pc-i440fx-rhel7.6.0</machine>
<arch>x86_64</arch>
<vcpu max='240'/>
<iothreads supported='yes'/>
<os supported='yes'>
<enum name='firmware'/>
<loader supported='yes'>
<value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
<enum name='type'>
<value>rom</value>
<value>pflash</value>
</enum>
<enum name='readonly'>
<value>yes</value>
<value>no</value>
</enum>
<enum name='secure'>
<value>no</value>
</enum>
</loader>
</os>
<cpu>
<mode name='host-passthrough' supported='yes'>
<enum name='hostPassthroughMigratable'>
<value>on</value>
<value>off</value>
</enum>
</mode>
<mode name='maximum' supported='yes'>
<enum name='maximumMigratable'>
<value>on</value>
<value>off</value>
</enum>
</mode>
<mode name='host-model' supported='yes'>
<model fallback='forbid'>Haswell-noTSX-IBRS</model>
<vendor>Intel</vendor>
<feature policy='require' name='vme'/>
<feature policy='require' name='ss'/>
<feature policy='require' name='vmx'/>
<feature policy='require' name='pdcm'/>
<feature policy='require' name='f16c'/>
<feature policy='require' name='rdrand'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='umip'/>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='arch-capabilities'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='xsaveopt'/>
<feature policy='require' name='pdpe1gb'/>
<feature policy='require' name='abm'/>
<feature policy='require' name='invtsc'/>
<feature policy='require' name='ibpb'/>
<feature policy='require' name='ibrs'/>
<feature policy='require' name='amd-stibp'/>
<feature policy='require' name='amd-ssbd'/>
<feature policy='require' name='skip-l1dfl-vmentry'/>
<feature policy='require' name='pschange-mc-no'/>
</mode>
<mode name='custom' supported='yes'>
<model usable='yes'>qemu64</model>
<model usable='yes'>qemu32</model>
<model usable='no'>phenom</model>
<model usable='yes'>pentium3</model>
<model usable='yes'>pentium2</model>
<model usable='yes'>pentium</model>
<model usable='yes'>n270</model>
<model usable='yes'>kvm64</model>
<model usable='yes'>kvm32</model>
<model usable='yes'>coreduo</model>
<model usable='yes'>core2duo</model>
<model usable='no'>athlon</model>
<model usable='yes'>Westmere-IBRS</model>
<model usable='yes'>Westmere</model>
<model usable='no'>Snowridge</model>
<model usable='no'>Skylake-Server-noTSX-IBRS</model>
<model usable='no'>Skylake-Server-IBRS</model>
<model usable='no'>Skylake-Server</model>
<model usable='no'>Skylake-Client-noTSX-IBRS</model>
<model usable='no'>Skylake-Client-IBRS</model>
<model usable='no'>Skylake-Client</model>
<model usable='yes'>SandyBridge-IBRS</model>
<model usable='yes'>SandyBridge</model>
<model usable='yes'>Penryn</model>
<model usable='no'>Opteron_G5</model>
<model usable='no'>Opteron_G4</model>
<model usable='no'>Opteron_G3</model>
<model usable='yes'>Opteron_G2</model>
<model usable='yes'>Opteron_G1</model>
<model usable='yes'>Nehalem-IBRS</model>
<model usable='yes'>Nehalem</model>
<model usable='yes'>IvyBridge-IBRS</model>
<model usable='yes'>IvyBridge</model>
<model usable='no'>Icelake-Server-noTSX</model>
<model usable='no'>Icelake-Server</model>
<model usable='no' deprecated='yes'>Icelake-Client-noTSX</model>
<model usable='no' deprecated='yes'>Icelake-Client</model>
<model usable='yes'>Haswell-noTSX-IBRS</model>
<model usable='yes'>Haswell-noTSX</model>
<model usable='no'>Haswell-IBRS</model>
<model usable='no'>Haswell</model>
<model usable='no'>EPYC-Rome</model>
<model usable='no'>EPYC-Milan</model>
<model usable='no'>EPYC-IBPB</model>
<model usable='no'>EPYC</model>
<model usable='no'>Dhyana</model>
<model usable='no'>Cooperlake</model>
<model usable='yes'>Conroe</model>
<model usable='no'>Cascadelake-Server-noTSX</model>
<model usable='no'>Cascadelake-Server</model>
<model usable='no'>Broadwell-noTSX-IBRS</model>
<model usable='no'>Broadwell-noTSX</model>
<model usable='no'>Broadwell-IBRS</model>
<model usable='no'>Broadwell</model>
<model usable='yes'>486</model>
</mode>
</cpu>
<memoryBacking supported='yes'>
<enum name='sourceType'>
<value>file</value>
<value>anonymous</value>
<value>memfd</value>
</enum>
</memoryBacking>
<devices>
<disk supported='yes'>
<enum name='diskDevice'>
<value>disk</value>
<value>cdrom</value>
<value>floppy</value>
<value>lun</value>
</enum>
<enum name='bus'>
<value>ide</value>
<value>fdc</value>
<value>scsi</value>
<value>virtio</value>
<value>usb</value>
<value>sata</value>
</enum>
<enum name='model'>
<value>virtio</value>
<value>virtio-transitional</value>
<value>virtio-non-transitional</value>
</enum>
</disk>
<graphics supported='yes'>
<enum name='type'>
<value>vnc</value>
<value>spice</value>
<value>egl-headless</value>
</enum>
</graphics>
<video supported='yes'>
<enum name='modelType'>
<value>vga</value>
<value>cirrus</value>
<value>qxl</value>
<value>virtio</value>
<value>none</value>
<value>bochs</value>
<value>ramfb</value>
</enum>
</video>
<hostdev supported='yes'>
<enum name='mode'>
<value>subsystem</value>
</enum>
<enum name='startupPolicy'>
<value>default</value>
<value>mandatory</value>
<value>requisite</value>
<value>optional</value>
</enum>
<enum name='subsysType'>
<value>usb</value>
<value>pci</value>
<value>scsi</value>
</enum>
<enum name='capsType'/>
<enum name='pciBackend'/>
</hostdev>
<rng supported='yes'>
<enum name='model'>
<value>virtio</value>
<value>virtio-transitional</value>
<value>virtio-non-transitional</value>
</enum>
<enum name='backendModel'>
<value>random</value>
<value>egd</value>
<value>builtin</value>
</enum>
</rng>
<filesystem supported='yes'>
<enum name='driverType'>
<value>path</value>
<value>handle</value>
<value>virtiofs</value>
</enum>
</filesystem>
<tpm supported='yes'>
<enum name='model'>
<value>tpm-tis</value>
<value>tpm-crb</value>
</enum>
<enum name='backendModel'>
<value>passthrough</value>
<value>emulator</value>
</enum>
</tpm>
</devices>
<features>
<gic supported='no'/>
<vmcoreinfo supported='yes'/>
<genid supported='yes'/>
<backingStoreInput supported='yes'/>
<backup supported='yes'/>
<sev supported='no'/>
</features>
</domainCapabilities>
Please advise.
2 years, 6 months
Preferred RHEL Based Distro For oVirt
by Clint Boggio
Good day all,
I am inquiring about which RHEL-based distros are currently preferred and which ones are currently supported. I know the oVirt project is a Red Hat entity, so RHEL and CentOS Stream are the base offering. Would it be, or is it, feasible for Rocky 8.x or Alma 8.x to be the base OS for an oVirt deployment, seeing as they are both RHEL clones?
How confident is the user community in the stability of CentOS Stream for production use, as compared to Alma or Rocky?
2 years, 6 months