ovirt-engine and host certificates are expired in oVirt 4.0
by momokch@yahoo.com.hk
hello everyone,
my ovirt-engine and host certificates are expired. Is there any way to enroll/update the certificates between the engine and the host without shutting down all the VMs?
I am using oVirt 4.0.
thank you
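A first step is usually to confirm which certificates have actually expired. A minimal check (the paths below are the default oVirt PKI locations on the engine and on a host — an assumption; adjust them if your deployment differs):

```shell
# Print the expiry date of the engine and vdsm certificates.
# Paths are the oVirt defaults (assumption); skip any that don't exist.
for cert in /etc/pki/ovirt-engine/certs/engine.cer \
            /etc/pki/vdsm/certs/vdsmcert.pem; do
    if [ -f "$cert" ]; then
        openssl x509 -noout -enddate -in "$cert"
    fi
done
```

Each existing certificate prints a line like `notAfter=Oct  1 12:00:00 2020 GMT`, which tells you whether it is the engine CA, the engine certificate, or a host certificate that needs renewing.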
3 years, 6 months
Upgrade Host Compatibility Version
by jb
Hello everybody,
for some days I have been upgrading our environment from oVirt 4.3 to 4.4.2,
and now I am stuck on raising the host compatibility version. The host shows
compatibility only up to version 4.4, while the VMs have a limit of version
4.5 and the cluster indicates I can upgrade them to 4.5.
On the cluster page there is also an upgrade guide, but it fails when I try
it. At the moment I have only one host to upgrade (no hosted engine)...
Thanks for helping!
Jonathan
3 years, 6 months
oVirt 4.3.10 and ansible default timeouts
by Gianluca Cecchi
Hello,
in the past when I was in 4.3.7 I used this file:
/etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
with
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=80
to bypass the default of 30 minutes at that time.
I updated in steps to 4.3.8 (in February), 4.3.9 (in April) and 4.3.10 (in
July).
Due to an error on my side I noticed that the task for which I extended the
ansible timeout (a task I hadn't executed in recent months) does indeed fail
with a timeout after 80 minutes.
With the intent of extending the custom timeout again, I went to
/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.conf, provided
by ovirt-engine-backend-4.3.10.4-1.el7.noarch, and I actually see this inside:
"
# Specify the ansible-playbook command execution timeout in minutes. It's
used for any task, which executes
# AnsibleExecutor class. To change the value permanentaly create a conf
file 99-ansible-playbook-timeout.conf in
# /etc/ovirt-engine/engine.conf.d/
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=120
"
and the file seems to be the original as provided, not tampered with:
[root@ovmgr1 test_backup]# rpm -qvV ovirt-engine-backend-4.3.10.4-1.el7.noarch | grep virt-engine.conf$
......... /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.conf
[root@ovmgr1 test_backup]#
So the question is: was the default value intentionally raised to 120 in
4.3.10 (or in some version after 4.3.7)?
Thanks,
Gianluca
3 years, 6 months
CEPH - Opinions and ROI
by Jeremey Wise
I have for many years used gluster because... well, 3 nodes... and as long as
I can pull a drive out I can get my data, and with three copies I have a
much higher chance of getting it.
Downsides to gluster: slower (it's my home... meh... and I have SSDs to avoid
MTBF issues), and with VDO and thin provisioning I've not had issues.
BUT... gluster seems to be falling out of favor, especially as I move
towards OCP.
So... CEPH. I have one SSD in each of the three servers, so I have some
space to play.
I googled around and found no clean deployment notes or guides on CEPH +
oVirt.
Comments or ideas?
--
penguinpages <jeremey.wise(a)gmail.com>
3 years, 6 months
Is it possible to change scheduler optimization settings of cluster using ansible or some other automation way
by Kushagra Agarwal
I was hoping I could get some help with the below oVirt scenario:
*Problem Statement*:-
Is it possible to change the scheduler optimization settings of a cluster
using Ansible or some other automated way?
*Description*:- Do we have any Ansible module, or any other CLI-based
approach, which can help us change the 'scheduler optimization' settings of
a cluster in oVirt? These settings can be found under the Scheduling Policy
tab (Compute -> Clusters -> select the cluster -> click Edit, then navigate
to Scheduling Policy).
Any help with this will be highly appreciated.
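One possible starting point, offered as an untested sketch: the `ovirt.ovirt.ovirt_cluster` module in the ovirt.ovirt Ansible collection exposes a `scheduling_policy` parameter. Whether it also covers the optimization drop-down specifically would need to be checked against the module documentation; the engine URL, cluster and data-center names below are placeholders:

```yaml
- name: Set cluster scheduling policy
  hosts: localhost
  tasks:
    - name: Obtain SSO token
      ovirt.ovirt.ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Change scheduling policy of the cluster
      ovirt.ovirt.ovirt_cluster:
        auth: "{{ ovirt_auth }}"
        name: mycluster
        data_center: mydc
        scheduling_policy: evenly_distributed

    - name: Revoke SSO token
      ovirt.ovirt.ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"
```

If the module turns out not to expose the optimization setting, the REST API (`/ovirt-engine/api/clusters/<id>`) is the other automation surface to inspect.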
Thanks,
Kushagra
3 years, 6 months
Question mark VMs
by Vrgotic, Marko
Hi oVirt wizards,
One of my LocalStorage hypervisors died. The VMs do not need to be rescued.
As expected, I see them in the WebUI in a question-mark state (picture below).
What are the steps to clean the oVirt engine DB of these VMs and their Storage/Hypervisor/Cluster/DC?
Kindly awaiting your reply.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
e: m.vrgotic(a)activevideo.com<mailto:m.vrgotic@activevideo.com>
w: www.activevideo.com<http://www.activevideo.com>
3 years, 6 months
Re: CEPH - Opinions and ROI
by Philip Brown
Ceph through an iSCSI gateway is very, very slow.
----- Original Message -----
From: "Matthew Stier" <Matthew.Stier(a)fujitsu.com>
To: "Jeremey Wise" <jeremey.wise(a)gmail.com>, "users" <users(a)ovirt.org>
Sent: Wednesday, September 30, 2020 10:03:34 PM
Subject: [ovirt-users] Re: CEPH - Opinions and ROI
If you can't go direct, how about roundabout, with an iSCSI gateway?
From: Jeremey Wise <jeremey.wise(a)gmail.com>
Sent: Wednesday, September 30, 2020 11:33 PM
To: users <users(a)ovirt.org>
Subject: [ovirt-users] CEPH - Opinions and ROI
I have for many years used gluster because..well. 3 nodes.. and so long as I can pull a drive out.. I can get my data.. and with three copies.. I have much higher chance of getting it.
Downsides to gluster: Slower (its my home..meh... and I have SSD to avoid MTBF issues ) and with VDO.. and thin provisioning.. not had issue.
BUT.... gluster seems to be falling out of favor. Especially as I move towards OCP.
So.. CEPH. I have one SSD in each of the three servers. so I have some space to play.
I googled around.. and find no clean deployment notes and guides on CEPH + oVirt.
Comments or ideas..
--
penguinpages <jeremey.wise(a)gmail.com>
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/S4V6NKC62LW...
3 years, 6 months
[ANN] oVirt 4.4.3 Third Release Candidate is now available for testing
by Lev Veyde
The oVirt Project is pleased to announce the availability of oVirt 4.4.3
Third Release Candidate for testing, as of October 1st, 2020.
This update is the third in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA should not require re-doing these steps if
they were already performed while upgrading from 4.4.1 to 4.4.2 GA. They
only need to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
"Host enter emergency mode after upgrading to latest build" - if you have
your root file system on a multipath device on your hosts, be aware that
after upgrading from 4.4.1 to 4.4.3 the host may enter emergency mode.
In order to prevent this, be sure to upgrade oVirt Engine first, then on
your hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode
   (if rebooted).
2. Reboot.
3. Upgrade to 4.4.3 (redeploy in case of already being on 4.4.3).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
   place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
   rebuild initramfs with the correct filter configuration.
6. Reboot.
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
  on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
  the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
  <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
  <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.3 release highlights:
http://www.ovirt.org/release/4.4.3/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.3/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
3 years, 6 months
VM AutoStart
by Jeremey Wise
When I have to shut down the cluster (UPS runs out, etc.) I need a small
set of VMs to "autostart" in sequence.
Normally I just use a DNS FQDN to connect to the oVirt engine, but as two of
my VMs are a DNS HA cluster, as well as NTP / SMTP / DHCP etc., I need
those two infrastructure VMs to boot automatically.
I looked at the HA settings for those VMs, but they seem to watch for
pause/resume; they do not imply or state auto start on a clean first boot.
Options?
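One workaround, sketched here as an untested assumption: run a small Ansible play at boot (via a cron @reboot entry or a systemd unit on a machine that survives the outage) that asks the engine to start the infrastructure VMs. This assumes the ovirt.ovirt collection is installed; the engine URL and VM names are placeholders, and it only helps once the engine itself is reachable:

```yaml
- name: Autostart infrastructure VMs after a cold boot
  hosts: localhost
  tasks:
    - name: Obtain SSO token
      ovirt.ovirt.ovirt_auth:
        url: https://engine.example.com/ovirt-engine/api
        username: admin@internal
        password: "{{ engine_password }}"

    - name: Ensure DNS/NTP/DHCP VMs are running
      ovirt.ovirt.ovirt_vm:
        auth: "{{ ovirt_auth }}"
        name: "{{ item }}"
        state: running
      loop:
        - dns01
        - dns02
```

Ordering the loop items gives the "sequence set" behaviour described above, since each VM is started in turn.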
--
penguinpages <jeremey.wise(a)gmail.com>
3 years, 6 months