oVirt 4.3.10 and ansible default timeouts
by Gianluca Cecchi
Hello,
in the past, when I was on 4.3.7, I used the file
/etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf
with
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=80
to bypass the default of 30 minutes at that time.
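For reference, this is how I had set it at the time (a sketch from memory; as far as I understand, ovirt-engine needs a restart to pick up the change):

cat > /etc/ovirt-engine/engine.conf.d/99-ansible-playbook-timeout.conf <<'EOF'
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=80
EOF
systemctl restart ovirt-engine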
I updated in steps to 4.3.8 (in February), 4.3.9 (in April) and 4.3.10 (in
July).
Due to an error on my side, I noticed that the task for which I had extended the ansible timeout (a task I hadn't run in recent months) now indeed fails with a timeout after 80 minutes.
With the intent of extending the custom timeout again, I went to /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.conf, provided by ovirt-engine-backend-4.3.10.4-1.el7.noarch, and this is what I actually see inside:
"
# Specify the ansible-playbook command execution timeout in minutes. It's
used for any task, which executes
# AnsibleExecutor class. To change the value permanentaly create a conf
file 99-ansible-playbook-timeout.conf in
# /etc/ovirt-engine/engine.conf.d/
ANSIBLE_PLAYBOOK_EXEC_DEFAULT_TIMEOUT=120
"
and the file seems to be the original as provided, not tampered with:
[root@ovmgr1 test_backup]# rpm -qvV ovirt-engine-backend-4.3.10.4-1.el7.noarch | grep virt-engine.conf$
.........    /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.conf
[root@ovmgr1 test_backup]#
So the question is: did the intended default value change to 120 in 4.3.10 (or in some version after 4.3.7)?
Thanks,
Gianluca
2 years, 11 months
CEPH - Opinions and ROI
by Jeremey Wise
I have for many years used gluster because... well, 3 nodes... and so long as I can pull a drive out, I can get my data... and with three copies, I have a much higher chance of getting it.
Downsides to gluster: it's slower (it's my home... meh... and I have SSDs to avoid MTBF issues), and with VDO and thin provisioning I've not had issues.
BUT... gluster seems to be falling out of favor, especially as I move towards OCP.
So... CEPH. I have one SSD in each of the three servers, so I have some space to play.
I googled around and found no clean deployment notes or guides on CEPH + oVirt.
Comments or ideas?
--
penguinpages <jeremey.wise(a)gmail.com>
2 years, 11 months
Is it possible to change scheduler optimization settings of cluster using ansible or some other automation way
by Kushagra Agarwal
I was hoping I could get some help with the below oVirt scenario:
*Problem Statement*: Is it possible to change the scheduler optimization settings of a cluster using Ansible or some other automated way?
*Description*: Do we have any Ansible module, or any other CLI-based approach, that can help us change the 'scheduler optimization' settings of a cluster in oVirt? The scheduler optimization settings of a cluster can be found under the Scheduling Policy tab (Compute -> Clusters -> select the cluster -> click Edit, then navigate to Scheduling Policy). A sketch of what we have tried so far is below.
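For illustration, this is roughly what we have been experimenting with, using the ovirt.ovirt.ovirt_cluster module (cluster and data center names are placeholders; we are not certain the optimization checkboxes themselves are exposed through these parameters):

- name: Update cluster scheduling policy
  ovirt.ovirt.ovirt_cluster:
    auth: "{{ ovirt_auth }}"
    name: mycluster              # placeholder cluster name
    data_center: mydc            # placeholder data center name
    scheduling_policy: evenly_distributed
    scheduling_policy_properties:
      - name: HighUtilization
        value: "80"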
Any help with this will be highly appreciated.
Thanks,
Kushagra
2 years, 11 months
Question mark VMs
by Vrgotic, Marko
Hi oVirt wizards,
One of my LocalStorage hypervisors died. The VMs do not need to be rescued.
As expected, I see them in the WebUI in the question-mark state (screenshot omitted).
What are the steps to clean the oVirt engine DB of these VMs and their Storage/Hypervisor/Cluster/DC?
Kindly awaiting your reply.
-----
kind regards/met vriendelijke groeten
Marko Vrgotic
Sr. System Engineer @ System Administration
ActiveVideo
o: +31 (35) 6774131
e: m.vrgotic(a)activevideo.com
w: www.activevideo.com
2 years, 11 months
ovirt-engine and host certification is expired
by momokch@yahoo.com.hk
hello everyone,
my oVirt engine and host certificates have expired. Is there any method to enroll/renew the certificates between the engine and the hosts without needing to shut down all the VMs?
I am using oVirt 4.0.
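From what I have read so far (untested on 4.0, so please treat this as an assumption on my part), the engine PKI can be renewed by rerunning engine-setup, and each host can then be re-enrolled from the WebUI:

engine-setup --offline   # answer "Yes" when it offers to renew the PKI
# then for each host: put it in Maintenance, then use Installation -> Enroll Certificate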
thank you
2 years, 11 months
Re: CEPH - Opinions and ROI
by Philip Brown
ceph through an iscsi gateway is very.. very.. slow.
----- Original Message -----
From: "Matthew Stier" <Matthew.Stier(a)fujitsu.com>
To: "Jeremey Wise" <jeremey.wise(a)gmail.com>, "users" <users(a)ovirt.org>
Sent: Wednesday, September 30, 2020 10:03:34 PM
Subject: [ovirt-users] Re: CEPH - Opinions and ROI
If you can’t go direct, how about round about, with an iSCSI gateway.
From: Jeremey Wise <jeremey.wise(a)gmail.com>
Sent: Wednesday, September 30, 2020 11:33 PM
To: users <users(a)ovirt.org>
Subject: [ovirt-users] CEPH - Opinions and ROI
[snip - quoted message omitted; it repeats the original "CEPH - Opinions and ROI" post above]
2 years, 12 months
[ANN] oVirt 4.4.3 Third Release Candidate is now available for testing
by Lev Veyde
The oVirt Project is pleased to announce the availability of the oVirt 4.4.3 Third Release Candidate for testing, as of October 1st, 2020.
This update is the third in a series of stabilization updates to the 4.4 series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA should not require re-doing these steps if they were already performed while upgrading from 4.4.1 to 4.4.2 GA; they only need to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> - Host enters emergency mode after upgrading to latest build - if you have your root file system on a multipath device on your hosts, be aware that after upgrading from 4.4.1 to 4.4.3 your host may enter emergency mode.
In order to prevent this, be sure to upgrade oVirt Engine first, then on your hosts:
1. Remove the current LVM filter while still on 4.4.1, or in emergency mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.3 (redeploy in case of already being on 4.4.3).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild the initramfs with the correct filter configuration.
6. Reboot.
A rough command sketch of these steps follows this list.
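For a host that is not oVirt Node, the sequence could look roughly like this (a sketch, not a verified procedure; the exact filter edit depends on your lvm.conf):

# while still on 4.4.1 (or from emergency mode): remove the old filter= line from /etc/lvm/lvm.conf
reboot
# upgrade the host to 4.4.3 (from the engine, or with dnf on the host)
vdsm-tool config-lvm-filter      # confirm a new filter is in place
dracut --force --add multipath   # only if not using oVirt Node
reboot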
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.3 release highlights:
http://www.ovirt.org/release/4.4.3/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.3/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
2 years, 12 months
VM AutoStart
by Jeremey Wise
When I have to shut down the cluster (UPS runs out, etc.), I need a small set of VMs to autostart in sequence.
Normally I just use a DNS FQDN to connect to the oVirt engine, but as two of my VMs are a DNS HA cluster (as well as NTP/SMTP/DHCP, etc.), I need those two infrastructure VMs to boot automatically.
I looked at the HA settings for those VMs, but they seem to watch for pause/resume; they don't imply or state auto start on a clean first boot.
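Worst case, I guess I could hack a boot-time script on the engine host that pokes the REST API to start them - a rough sketch (engine URL, credentials, and VM id are placeholders):

curl -s -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<action/>' \
  'https://engine.example.com/ovirt-engine/api/vms/VM_ID/start'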
Options?
--
penguinpages <jeremey.wise(a)gmail.com>
2 years, 12 months
ovirt-node-4.4.2 grub is not reading new grub.cfg at boot
by Mike Lindsay
Hey Folks,
I've got a bit of a strange one here. I downloaded and installed ovirt-node-ng-installer-4.4.2-2020091810.el8.iso today on an old dev laptop, and to get the install to work I needed to add acpi=off to the kernel boot params (a known issue with my old laptop). After installation it was still booting with acpi=off - no biggie (I've seen that happen with CentOS 5, 6 and 7 on occasion): just change the line in /etc/default/grub, run grub2-mkconfig (I ran it for both EFI and legacy for good measure, even knowing EFI isn't used), and reboot... I've done this hundreds of times without any problems.
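For the record, the regeneration commands I ran were along these lines (the EFI path is from memory and may differ on oVirt Node):

grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg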
But this time, after rebooting, if I hit 'e' to look at the kernel params on boot, acpi=off is still there. Basically any changes to /etc/default/grub are being ignored or overridden, but I'll be damned if I can find where.
I know I'm missing something simple here - I do this all the time - but to be honest this is the first CentOS 8 based install I've had time to play with. Any suggestions would be greatly appreciated.
The drive layout is a bit weird, but it had no issues running Fedora or CentOS in the past. The boot drive is an mSATA (/dev/sdb) and there is an SSD data drive at /dev/sda; having sda installed or removed makes no difference, and /boot is mounted where it should be, on /dev/sdb1... very strange.
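Since GRUB_ENABLE_BLSCFG=true is set (see the config below), I'm starting to suspect the kernel args now come from the BLS entries under /boot/loader/entries and from grubenv rather than from grub.cfg. If that's right, something like this might be the next thing to try (an untested guess on my part):

grub2-editenv - list                                  # check the kernelopts stored in grubenv
grubby --info=ALL | grep -i args                      # show per-entry (BLS) kernel args
grubby --update-kernel=ALL --remove-args="acpi=off"   # strip the stale flag from every entry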
Cheers,
Mike
[root@ovirt-node01 ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX='crashkernel=auto resume=/dev/mapper/onn-swap rd.lvm.lv=onn/ovirt-node-ng-4.4.2-0.20200918.0+1 rd.lvm.lv=onn/swap noapic rhgb quiet'
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
GRUB_DISABLE_OS_PROBER='true'
[root@ovirt-node01 ~]# cat /boot/grub2/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#
### BEGIN /etc/grub.d/00_header ###
set pager=1
if [ -f ${config_directory}/grubenv ]; then
load_env -f ${config_directory}/grubenv
elif [ -s $prefix/grubenv ]; then
load_env
fi
if [ "${next_entry}" ] ; then
set default="${next_entry}"
set next_entry=
save_env next_entry
set boot_once=true
else
set default="${saved_entry}"
fi
if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
else
menuentry_id_option=""
fi
export menuentry_id_option
if [ "${prev_saved_entry}" ]; then
set saved_entry="${prev_saved_entry}"
save_env saved_entry
set prev_saved_entry=
save_env prev_saved_entry
set boot_once=true
fi
function savedefault {
if [ -z "${boot_once}" ]; then
saved_entry="${chosen}"
save_env saved_entry
fi
}
function load_video {
if [ x$feature_all_video_module = xy ]; then
insmod all_video
else
insmod efi_gop
insmod efi_uga
insmod ieee1275_fb
insmod vbe
insmod vga
insmod video_bochs
insmod video_cirrus
fi
}
terminal_output console
if [ x$feature_timeout_style = xy ] ; then
set timeout_style=menu
set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
set timeout=5
fi
### END /etc/grub.d/00_header ###
### BEGIN /etc/grub.d/00_tuned ###
set tuned_params=""
set tuned_initrd=""
### END /etc/grub.d/00_tuned ###
### BEGIN /etc/grub.d/01_users ###
if [ -f ${prefix}/user.cfg ]; then
source ${prefix}/user.cfg
if [ -n "${GRUB2_PASSWORD}" ]; then
set superusers="root"
export superusers
password_pbkdf2 root ${GRUB2_PASSWORD}
fi
fi
### END /etc/grub.d/01_users ###
### BEGIN /etc/grub.d/08_fallback_counting ###
insmod increment
# Check if boot_counter exists and boot_success=0 to activate this behaviour.
if [ -n "${boot_counter}" -a "${boot_success}" = "0" ]; then
# if countdown has ended, choose to boot rollback deployment,
# i.e. default=1 on OSTree-based systems.
if [ "${boot_counter}" = "0" -o "${boot_counter}" = "-1" ]; then
set default=1
set boot_counter=-1
# otherwise decrement boot_counter
else
decrement boot_counter
fi
save_env boot_counter
fi
### END /etc/grub.d/08_fallback_counting ###
### BEGIN /etc/grub.d/10_linux ###
insmod part_msdos
insmod ext2
set root='hd1,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 b6557c59-e11f-471b-8cb1-70c47b0b4b29
else
  search --no-floppy --fs-uuid --set=root b6557c59-e11f-471b-8cb1-70c47b0b4b29
fi
insmod part_msdos
insmod ext2
set boot='hd1,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=boot --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 b6557c59-e11f-471b-8cb1-70c47b0b4b29
else
  search --no-floppy --fs-uuid --set=boot b6557c59-e11f-471b-8cb1-70c47b0b4b29
fi
# This section was generated by a script. Do not modify the generated file - all changes
# will be lost the next time file is regenerated. Instead edit the BootLoaderSpec files.
#
# The blscfg command parses the BootLoaderSpec files stored in /boot/loader/entries and
# populates the boot menu. Please refer to the Boot Loader Specification documentation
# for the files format: https://www.freedesktop.org/wiki/Specifications/BootLoaderSpec/.
set default_kernelopts="root=/dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1 ro crashkernel=auto resume=/dev/mapper/onn-swap rd.lvm.lv=onn/ovirt-node-ng-4.4.2-0.20200918.0+1 rd.lvm.lv=onn/swap noapic rhgb quiet "
insmod blscfg
blscfg
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/10_reset_boot_success ###
# Hiding the menu is ok if last boot was ok or if this is a first boot
attempt to boot the entry
if [ "${boot_success}" = "1" -o "${boot_indeterminate}" = "1" ]; then
set menu_hide_ok=1
else
set menu_hide_ok=0
fi
# Reset boot_indeterminate after a successful boot
if [ "${boot_success}" = "1" ] ; then
set boot_indeterminate=0
# Avoid boot_indeterminate causing the menu to be hidden more then once
elif [ "${boot_indeterminate}" = "1" ]; then
set boot_indeterminate=2
fi
# Reset boot_success for current boot
set boot_success=0
save_env boot_success boot_indeterminate
### END /etc/grub.d/10_reset_boot_success ###
### BEGIN /etc/grub.d/12_menu_auto_hide ###
if [ x$feature_timeout_style = xy ] ; then
if [ "${menu_show_once}" ]; then
unset menu_show_once
save_env menu_show_once
set timeout_style=menu
set timeout=60
elif [ "${menu_auto_hide}" -a "${menu_hide_ok}" = "1" ]; then
set orig_timeout_style=${timeout_style}
set orig_timeout=${timeout}
if [ "${fastboot}" = "1" ]; then
# timeout_style=menu + timeout=0 avoids the countdown code keypress check
set timeout_style=menu
set timeout=0
else
set timeout_style=hidden
set timeout=1
fi
fi
fi
### END /etc/grub.d/12_menu_auto_hide ###
### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###
### BEGIN /etc/grub.d/20_ppc_terminfo ###
### END /etc/grub.d/20_ppc_terminfo ###
### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###
### BEGIN /etc/grub.d/30_uefi-firmware ###
### END /etc/grub.d/30_uefi-firmware ###
### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###
### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
--
Mike Lindsay
Senior Systems Administrator
Technological Maintenance and Support
CBC/SRC
mike.lindsay(a)cbc.ca
(o) 416-205-8992
(c) 416-819-2841
2 years, 12 months