oVirt 4.2 Limits
by Markus Frei
Is there any reference stating what oVirt 4.2 is really capable of?
- Maximum number of vCPU per VM
- Maximum amount of RAM per VM
- Maximum number of Disks per VM
- Maximum amount of Disk Capacity per VM
- Maximum number of Virtual Networks
- Maximum number of Virtual Networks per VM
...
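Not a complete answer, but some of these caps are exposed as engine-config options on the engine host; a quick way to check them (key names believed correct for 4.2, please verify against the output of engine-config --list):

engine-config -g MaxNumOfVmCpus            # maximum vCPUs per VM
engine-config -g VM64BitMaxMemorySizeInMB  # maximum RAM per 64-bit VM, in MB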
ovirt4 api create snapshot
by David David
Hi,
I have a VM with 3 disks and I want to take a snapshot that includes only two of them.
How do I specify multiple disks for the snapshot in the code below?
snap = snaps_service.add(
    snapshot=types.Snapshot(
        description=snap_description,
        persist_memorystate=False,
        disk_attachments=[
            types.DiskAttachment(
                disk=types.Disk(
                    id=disk_id
                )
            )
        ]
    ),
)
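Since disk_attachments takes a list, passing one types.DiskAttachment per disk you want included should do it; a minimal sketch, with placeholder disk IDs:

import ovirtsdk4.types as types

# IDs of the two disks to include in the snapshot (placeholders).
disk_ids = ['disk-id-1', 'disk-id-2']

snap = snaps_service.add(
    snapshot=types.Snapshot(
        description=snap_description,
        persist_memorystate=False,
        # One DiskAttachment per disk; disks not listed here are
        # left out of the snapshot.
        disk_attachments=[
            types.DiskAttachment(disk=types.Disk(id=disk_id))
            for disk_id in disk_ids
        ],
    ),
)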
vm operation after host fence
by Kapetanakis Giannis
Hi,
Last night we had an incident with a failed host. The engine issued a fence, but did not restart the VMs running on that node on other operational hosts. I'd like to know if this is normal, or whether I can tune it somehow.
Here are some relevant logs from engine:
2018-09-05 03:00:51,496+03 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (EE-ManagedThreadFactory-engine-Thread-827644) [] Host 'v3' is not responding. It will stay in Connecting state for a grace period of 63 seconds and after that an attempt to fence the host will be issued.
2018-09-05 03:01:11,945+03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher] (EE-ManagedThreadFactory-engineScheduled-Thread-57) [] Failed to fetch vms info for host 'v3' - skipping VMs monitoring.
2018-09-05 03:01:48,028+03 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-827679) [] EVENT_ID: VM_SET_TO_UNKNOWN_STATUS(142), VM vm7 was set to the Unknown status.
2018-09-05 03:02:10,033+03 INFO [org.ovirt.engine.core.bll.pm.StopVdsCommand] (EE-ManagedThreadFactory-engine-Thread-827680) [30369e01] Power-Management: STOP of host 'v3' initiated.
2018-09-05 03:02:55,935+03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-827680) [3adcac38] EVENT_ID: VM_WAS_SET_DOWN_DUE_TO_HOST_REBOOT_OR_MANUAL_FENCE(143), Vm vm7 was shut down due to v3 host reboot or manual fence
2018-09-05 03:02:56,018+03 INFO [org.ovirt.engine.core.bll.pm.StopVdsCommand] (EE-ManagedThreadFactory-engine-Thread-827680) [ea0f582] Power-Management: STOP host 'v3' succeeded.
2018-09-05 03:08:20,818+03 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-91) [326878] EVENT_ID: VDS_DETECTED(13), Status of host v3 was set to Up.
2018-09-05 03:08:23,391+03 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-88) [] VM '3b1262ef-7fff-40af-b85e-9fd01a4f422b'(vm7) was unexpectedly detected as 'Down' on VDS '4970369d-21c2-467d-9247-c73ca2d71b3e'(v3) (expected on 'null')
As you can see, the engine fences node v3.
vm7, as well as the other VMs running on that node, did not restart.
Any tips?
The engine is ovirt-engine-4.2.5.3-1.el7.noarch and the host is vdsm-4.20.35-1.el7.x86_64.
best regards,
Giannis
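One thing that may be worth checking: as far as I know, the engine only restarts VMs from a fenced host automatically when they are marked Highly Available. A minimal sketch for inspecting and setting that flag with the Python SDK (connection details and the VM name are placeholders):

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details for the engine API.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=vm7')[0]
print('HA enabled:', vm.high_availability and vm.high_availability.enabled)

# Mark the VM Highly Available so the engine restarts it after a fence.
vms_service.vm_service(vm.id).update(
    types.Vm(
        high_availability=types.HighAvailability(enabled=True, priority=1),
    )
)

connection.close()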
oVirt: Training, Webinars, Forums etc!!
by femi adegoke
Are there any plans for trainings/webinars/workshops/documentation, etc.?
Anything to help us learn, process more info, and get more comfortable with oVirt?
OVN and impact of missing connectivity with OVN Provider/Central
by Gianluca Cecchi
Hello,
I have VM1 and VM2 with their vnics on OVN.
They are running on the same host.
Suppose this host (and so its OVN controller) loses connectivity with the
OVN provider (which in my case runs on the oVirt engine, an external
server).
Is it correct/expected that VM1 loses connectivity with VM2 until this is fixed?
In other words, is the OVN provider a sort of single point of failure
(e.g. if I restart the engine in my case)?
Thanks,
Gianluca
Engine Setup Error
by Sakhi Hadebe
Hi,
We are deploying the hosted engine on oVirt-Node-4.2.3.1 using the command
"hosted-engine --deploy".
After we provide the answers, it runs the Ansible script and hits an error when
creating the GlusterFS storage domain. A screenshot of the error is attached.
Please help.
problem with multipath.conf
by g.vasilopoulos@uoc.gr
After upgrading to vdsm 4.20.39-1.el7, SAS multipath devices stopped being detected.
I did a diff of the two files (see below).
Is the current behaviour the correct one, or is the previous one? I think SAS multipath devices should also be detected, no?
[root@g1-car0136 etc]# diff multipath.conf multipath.conf.201809031555
1c1
< # VDSM REVISION 1.6
---
> # VDSM REVISION 1.5
101,109d100
< }
<
< # Whitelist FCP and iSCSI devices.
< blacklist {
< protocol ".*"
< }
<
< blacklist_exceptions {
< protocol "(scsi:fcp|scsi:iscsi)"
ERROR: Malformed metadata for host 3
by info@linuxfabrik.ch
Hi,
we have a hyperconverged oVirt cluster with three machines. All three run GlusterFS (replica 3), and on two of them we run the VMs (because they have better hardware).
Now we wanted to add a new oVirt host. Due to some network-related errors, adding this host was unsuccessful, so we removed it from the GUI and also via "hosted-engine --clean-metadata --host-id=3 --force-clean" (as it was still listed by "hosted-engine --vm-status" after removal from the GUI).
In the GUI and in "hosted-engine --vm-status" everything looks fine now: the third oVirt host is gone and there are no problems at all. But in /var/log/ovirt-hosted-engine-ha/agent.log we now get the following every 15 minutes, on both of the remaining oVirt hosts:

MainThread::ERROR::2018-09-05 10:20:17,190::hosted_engine::793::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(process_remote_metadata) Malformed metadata for host 3: received 0 of 512 expected bytes
Can someone explain what is wrong?
oVirt 4.2.2.6-1.el7.centos
Thank you
Markus
qemu-kvm memory leak in 4.2.5
by Hesham Ahmed
Starting with oVirt 4.2.4 (also in 4.2.5, and maybe in 4.2.3) I am facing
some sort of memory leak. The memory usage on the hosts keeps
increasing until it reaches somewhere around 97%. Putting the host into
maintenance and back resolves it. The memory usage of the qemu-kvm
processes is way above the defined VM memory; for instance, below is the
memory usage of one VM:
  PID USER  PR NI  VIRT   RES  SHR S %CPU %MEM    TIME+ COMMAND
12271 qemu  20  0 35.4g 30.9g 8144 S 13.3 49.3  9433:15 qemu-kvm
The VM memory settings are:
Defined Memory: 8192 MB
Physical Memory Guaranteed: 5461 MB
This is a 3-node hyperconverged cluster running the latest oVirt Node 4.2.5.
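To confirm that the growth is in the qemu-kvm resident set itself (and not, say, page cache), something like this could be run periodically on a host; a minimal sketch, assuming the psutil package is installed:

import time

import psutil

# Print the resident set size of every qemu-kvm process so growth
# can be tracked over time (run on the host, e.g. from cron).
for proc in psutil.process_iter(['pid', 'name', 'memory_info']):
    if proc.info['name'] == 'qemu-kvm':
        rss_gib = proc.info['memory_info'].rss / 1024.0 ** 3
        print('%s pid=%d rss=%.1f GiB'
              % (time.ctime(), proc.info['pid'], rss_gib))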