new engine restore from backup
by ozmen62@hotmail.com
Hi,
I've built a new engine and tried to restore from a backup.
After running engine-setup, it returns:
[ ERROR ] Failed to execute stage 'Misc configuration': function getdwhhistorytimekeepingbyvarname(unknown) does not exist
LINE 2: select * from GetDwhHistoryTimekeepingByVarName(
The Red Hat page says to deploy a new engine, but I have already done that.
The old and new engines are the same version.
Do you have any idea how I can restore the engine?
There is no other backup.
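For reference, a full restore onto a freshly installed engine machine is typically along these lines (a sketch only; the file paths are placeholders, and the --provision-dwh-db flag assumes the backup includes the DWH database, which may be relevant given the missing GetDwhHistoryTimekeepingByVarName function):

    # restore the engine and DWH databases plus files, then configure
    engine-backup --mode=restore --file=/root/engine-backup.tar.gz \
        --log=/root/engine-restore.log \
        --provision-db --provision-dwh-db --restore-permissions
    engine-setup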
3 years, 6 months
How to assign a server disk and nic profile via REST API?
by ovirt.org@nevim.eu
Hello to everybody,
for about an hour I have had my nose stuck in the oVirt and RHEV documentation, but I still can't understand how to use the REST API to change a disk's disk profile, nor a NIC's vNIC profile.
Ideally, I'd like to know how to set them when setting up a VM; a sketch of what I mean follows.
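To make it concrete, something along these lines is what I am after, sketched from the oVirt REST API reference (all UUIDs and the engine FQDN are placeholders):

    # assign a disk profile to an existing disk
    curl -k -u admin@internal:password \
        -X PUT -H 'Content-Type: application/xml' \
        -d '<disk><disk_profile id="DISK_PROFILE_UUID"/></disk>' \
        https://engine.example.com/ovirt-engine/api/disks/DISK_UUID

    # assign a vNIC profile to an existing VM NIC
    curl -k -u admin@internal:password \
        -X PUT -H 'Content-Type: application/xml' \
        -d '<nic><vnic_profile id="VNIC_PROFILE_UUID"/></nic>' \
        https://engine.example.com/ovirt-engine/api/vms/VM_UUID/nics/NIC_UUID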
Thank you so much for the advice.
3 years, 6 months
Slow VM replication
by David Johnson
Hi everyone,
When cloning a VM, I discovered that the time to clone appears excessive
given the underlying platform I am using. It appears that the clone
operation is not making efficient use of the network. I have seen up to 4
Gbit/s sustained throughput from applications on VMs in the cluster.
Is there a configuration I might be missing?
*System specifics:*
Backing store: NFS on TrueNAS running RAID-Z3 on 11 spinning disks
oVirt Controller: i7 desktop
General network: 1 Gbit Ethernet
oVirt Host: Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz, 2x32 cores, 256 GB RAM
Data Network: 10 Gbit 10GBase-Twinax, dedicated to this host and the TrueNAS
*Operation:*
Copying a 60 GB partition, the copy operation never exceeds 40 megabytes
per second (less than 0.5 Gbit/s), even though the dedicated 10 Gbit data
network is not otherwise busy.
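For comparison, a raw sequential write to the same NFS export from the host, bypassing the page cache, gives a baseline to judge the clone speed against (a sketch; the mount path under /rhev/data-center/mnt is a placeholder):

    # write 4 GiB directly to the NFS-backed storage domain
    dd if=/dev/zero of=/rhev/data-center/mnt/TRUENAS_EXPORT/ddtest bs=1M count=4096 oflag=direct
    rm /rhev/data-center/mnt/TRUENAS_EXPORT/ddtest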
*David Johnson*
*Director of Development, Maxis Technology*
844.696.2947 ext 702 (o) | 479.531.3590 (c)
<https://www.linkedin.com/in/pojoguy/>
<https://maxistechnology.com/wp-content/uploads/vcards/vcard-David_Johnson...>
<https://maxistechnology.com/>
*Follow us:* <https://www.linkedin.com/company/maxis-tech-inc/>
3 years, 6 months
Unable to migrate Engine to another HE Host
by Marko Vrgotic
Dear oVirt,
I have already reached out twice regarding issues that occurred due to a power outage but were only noticed when upgrading the engine to the latest 4.3 version.
I am unable to redeploy the engine on Host2: the hosted-engine file stays empty, and even though I cleared the metadata for Host2 on Host1 and Host3, VDSM on Hosts 1 and 3 is still reporting:
2021-04-30 05:57:58,454-0700 ERROR (jsonrpc/7) [ovirt_hosted_engine_ha.client.client.HAClient] Malformed metadata for host 2: received 0 of 512 expected bytes (client:137)
Today I tried to migrate the HE from Host 3 to Host 1, and it fails each time with the following message:
On Engine:
2021-04-30 12:57:56,961Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1233892) [] EVENT_ID: VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed (VM: HostedEngine, Source: ovirt-sj-03.ictv.com, Destination: ovirt-sj-01.ictv.com).
On source Host:
2021-04-30 05:57:56,705-0700 ERROR (migsrc/66b6d489) [virt.vm] (vmId='66b6d489-ceb8-486a-951a-355e21f13627') Failed to migrate (migration:450)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 431, in _regular_run
    time.time(), migrationParams, machineParams
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 505, in _startUnderlyingMigration
    self._perform_with_conv_schedule(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 591, in _perform_with_conv_schedule
    self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 525, in _perform_migration
    self._migration_flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 100, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1781, in migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirtError: operation aborted: migration out job: canceled by client
I know that this version is end of life, but I would very much appreciate it if someone could help me assess whether this means corruption in the DB, and the extent of the overall damage, simply so that I know how to plan further actions.
My impression was that I still had two functional HE Hosts in the pool, but after seeing the migration failure, it's pretty much down to a single host.
This is a production system, so I cannot just move on to upgrading/deploying 4.4.
Additionally:
* Is the effect of engine-cleanup on an HE Host local, or does it affect all HE Hosts? Could that help bring the host back to a state where the HE can be re-deployed?
* What is the effect of reinitialize-lockspace? (The exact invocations I mean are sketched below.)
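For clarity, these are the invocations I am referring to (assuming the standard hosted-engine CLI; the host ID is a placeholder):

    # on a healthy HE host: clear the stale metadata slot for host 2
    hosted-engine --clean-metadata --host-id=2 --force-clean

    # reinitialize the sanlock lockspace (the documentation says the
    # cluster should be in global maintenance first)
    hosted-engine --reinitialize-lockspace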
Kindly awaiting your reply. Happy to provide any additional information needed.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
m: +31 (65) 5734174
e: m.vrgotic(a)activevideo.com
w: www.activevideo.com
3 years, 6 months
how to override host in console.vv
by Enrico Becchetti
Hi all,
I use oVirt 4.3.10 with an engine running in a standalone virtual machine.
The operating system of the engine and of the hypervisors is CentOS 7.
In the initial setup I chose a name for the engine, ovirt.local; I then
added a second public IP address, named ovirt.mydomain, in order to also
allow access to the user portal from the Internet.
Now here's my problem. Users are able to connect to the VM portal, but
they cannot access the consoles of their virtual machines.
Checking the console.vv file, I noticed that there are two parts: the
first, [virt-viewer], contains the hypervisor on which the virtual machine
runs; the second, [ovirt], contains "host = ovirt.local:443".
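For reference, the file looks roughly like this (reconstructed from memory, with most fields omitted and values illustrative):

    [virt-viewer]
    type=spice
    host=<hypervisor the VM runs on>
    ...

    [ovirt]
    host=ovirt.local:443
    ...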
Now I would like to change the value of the "host" parameter to the public FQDN.
Can you tell me how I can set an override for this variable?
Thanks a lot.
Best Regards
Enrico
--
_______________________________________________________________________
Enrico Becchetti Servizio di Calcolo e Reti
Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY)
Phone:+39 075 5852777 Skype:enrico_becchetti
Mail: Enrico.Becchetti<at>pg.infn.it
______________________________________________________________________
3 years, 6 months
Re: Unable to provide network acces to second cluster in DC
by Miguel Garcia
Browsing to the path below, I see prodvlan and havlan listed, and the Manage Networks settings are the same as for Public1:
Compute > Clusters > Public2 > Logical Network > Manage Networks
The problem is that when I try to create a VM for cluster Public2, no NIC profiles are listed in the NICs section; only the <empty> option remains.
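In case it helps to diagnose, the networks actually attached to the cluster can also be listed via the REST API (a sketch; the engine FQDN, credentials, and cluster UUID are placeholders):

    curl -k -u admin@internal:password \
        https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_UUID/networks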
3 years, 6 months
[ANN] oVirt 4.4.6 Sixth Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.6 Sixth Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.6
Sixth Release Candidate for testing, as of April 29th, 2021.
This update is the sixth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps if they were already performed while upgrading from 4.4.1 to 4.4.2
GA. They only need to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enters emergency mode after upgrading to latest build:
if you have your root file system on a multipath device on your hosts, you
should be aware that after upgrading from 4.4.1 to 4.4.6 your host may
enter emergency mode.
In order to prevent this be sure to upgrade oVirt Engine first, then on
your hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency
   mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.6 (redeploy in case of already being on 4.4.6).
4. Run "vdsm-tool config-lvm-filter" to confirm there is a new filter in
   place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
   rebuild the initramfs with the correct filter configuration (see the
   sketch after this list).
6. Reboot.
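For convenience, steps 4 and 5 on a non-Node host amount to something like the following (a sketch of the commands named above):

    # confirm the newly generated LVM filter is in place
    vdsm-tool config-lvm-filter

    # only if not using oVirt Node: rebuild the initramfs so it picks up
    # the correct multipath filter configuration
    dracut --force --add multipath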
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
  on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
  the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
  <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
  <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
* CentOS Stream 8
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
* CentOS Stream 8
* oVirt Node 4.4 based on CentOS Stream 8 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available based on CentOS Stream 8
- oVirt Node NG is already available based on CentOS Stream 8
Additional Resources:
* Read more about the oVirt 4.4.6 release highlights:
http://www.ovirt.org/release/4.4.6/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.6/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
3 years, 6 months
how to enable HA for all vms and entire cluster at global settings
by dhanaraj.ramesh@yahoo.com
Hi Team
Instead of enabling HA for each VM one by one, I want to enable HA for all VMs already running in the cluster, and also for future VMs that will be created.
I have already configured HA reservation at the cluster level. Are there any other settings or commands I should configure? How can I achieve the above? (Something like the API call sketched below is what I have in mind.)
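For what it's worth, the high-availability flag on an existing VM can be set through the REST API, so a script could loop over the VM list from GET /ovirt-engine/api/vms. A sketch (the engine FQDN, credentials, and UUID are placeholders):

    # enable high availability on a single VM via the REST API
    curl -k -u admin@internal:password \
        -X PUT -H 'Content-Type: application/xml' \
        -d '<vm><high_availability><enabled>true</enabled></high_availability></vm>' \
        https://engine.example.com/ovirt-engine/api/vms/VM_UUID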
3 years, 6 months
Unable to provide network acces to second cluster in DC
by Miguel Garcia
I have created the following DC and cluster structure on my oVirt server:
- Data Center Public
- - cluster-public1
- - cluster-public2
I also created a couple of logical networks for DC-Public:
- - prodvlan
- - havlan
And I added 4 hosts (two per cluster, because of processor type); each host has ports attached to both logical networks (besides ovirtmgmt).
When I create a new VM in cluster public1 and add NICs, I am able to list prodvlan and havlan, but when I try to create the VM for cluster public2, no logical networks are listed.
Any idea how to get the logical networks listed for cluster public2?
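In case the API view helps, the defined vNIC profiles, and the network each one maps to, can be listed like this (a sketch; the engine FQDN and credentials are placeholders):

    curl -k -u admin@internal:password \
        https://engine.example.com/ovirt-engine/api/vnicprofiles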
Thanks in advance
3 years, 6 months