[ANN] oVirt 4.4.3 Third Release Candidate is now available for testing
by Lev Veyde
oVirt 4.4.3 Third Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.3
Third Release Candidate for testing, as of October 1st, 2020.
This update is the third in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA should not require redoing these steps if they
were already performed while upgrading from 4.4.1 to 4.4.2 GA; they only
need to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> -
Host enters emergency mode after upgrading to latest build:
If the root file system on your hosts is on a multipath device, be aware
that after upgrading from 4.4.1 to 4.4.3 the host may enter emergency mode.
To prevent this, be sure to upgrade oVirt Engine first, then on your hosts
(a command sketch follows the list):
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode
(if rebooted).
2. Reboot.
3. Upgrade to 4.4.3 (redeploy in case of already being on 4.4.3).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to
rebuild the initramfs with the correct filter configuration.
6. Reboot.
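A rough sketch of the host-side commands for the steps above (the lvm.conf
edit in step 1 is an assumption; the exact filter line generated on 4.4.1
may differ, so check /etc/lvm/lvm.conf on your host first):

# Step 1: comment out the current "filter = ..." line in /etc/lvm/lvm.conf
sed -i 's/^\( *filter *=\)/# \1/' /etc/lvm/lvm.conf
# Steps 2-3: reboot, then upgrade the host (engine UI, or dnf on EL hosts)
dnf upgrade -y
# Step 4: confirm a new filter is in place
vdsm-tool config-lvm-filter
# Step 5 (non-oVirt Node hosts only): rebuild initramfs with the new filter
dracut --force --add multipath
# Step 6: reboot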
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions
on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
<https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
<https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.2 or newer
* CentOS Linux (or similar) 8.2 or newer
* oVirt Node 4.4 based on CentOS Linux 8.2 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.3 release highlights:
http://www.ovirt.org/release/4.4.3/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.3/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
VM AutoStart
by Jeremey Wise
When I have to shut down the cluster (UPS runs out, etc.), I need a small
set of VMs to "autostart" in sequence.
Normally I just use a DNS FQDN to connect to the oVirt engine, but two of my
VMs form a DNS HA cluster (as well as providing NTP / SMTP / DHCP, etc.), so
I need those two infrastructure VMs to boot automatically.
I looked at the HA settings for those VMs, but they seem to watch for
pause/resume; they don't imply or state auto-start on a clean first boot.
Options?
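For reference, one hedged option (an assumption on my part, not something
confirmed in this thread): a boot-time script on a host that asks the engine
to start specific VMs over the REST API once the engine is up. The engine
FQDN, VM id, and password file here are placeholders:

# start one infrastructure VM via the oVirt REST API (VM_UUID is a placeholder)
curl -s -k -u "admin@internal:$(cat /root/.engine-pass)" \
  -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
  -X POST -d '<action/>' \
  "https://ovirte01.penguinpages.local/ovirt-engine/api/vms/VM_UUID/start"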
--
penguinpages <jeremey.wise(a)gmail.com>
ovirt-node-4.4.2 grub is not reading new grub.cfg at boot
by Mike Lindsay
Hey Folks,
I've got a bit of a strange one here. I downloaded and installed
ovirt-node-ng-installer-4.4.2-2020091810.el8.iso today on an old dev
laptop, and to get it to install I needed to add acpi=off to the kernel
boot params (a known issue with my old laptop). After installation it was
still booting with acpi=off; no biggie (I've seen that happen with CentOS
5, 6, and 7 on occasion), right? Just change the line in /etc/default/grub
and run grub2-mkconfig (I ran it for both EFI and legacy for good measure,
even knowing EFI isn't used) and reboot... I've done this hundreds of times
without any problems.
But this time, after rebooting, if I hit 'e' to look at the kernel params
on boot, acpi=off is still there. Basically any changes to
/etc/default/grub are being ignored or overridden, but I'll be damned if I
can find where.
I know I'm missing something simple here; I do this all the time, but to be
honest this is the first CentOS 8 based install I've had time to play with.
Any suggestions would be greatly appreciated.
The drive layout is a bit weird, but it has had no issues running Fedora or
CentOS in the past. The boot drive is an mSATA (/dev/sdb) and there is an
SSD data drive at /dev/sda... having sda installed or removed makes no
difference, and /boot is mounted where it should be, on /dev/sdb1... very
strange.
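A hedged aside, based on the GRUB_ENABLE_BLSCFG=true line in the config
below: on CentOS 8, kernel arguments for BLS-managed entries come from
/boot/loader/entries and the $kernelopts variable in grubenv, not from
grub.cfg itself, so grub2-mkconfig alone may not change what actually
boots. A sketch of how one might inspect and fix that:

grub2-editenv - list | grep kernelopts                 # show the active kernel options
grubby --info=ALL                                      # show the options per BLS boot entry
grubby --update-kernel=ALL --remove-args="acpi=off"    # drop the stale argument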
Cheers,
Mike
[root@ovirt-node01 ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX='crashkernel=auto resume=/dev/mapper/onn-swap rd.lvm.lv=onn/ovirt-node-ng-4.4.2-0.20200918.0+1 rd.lvm.lv=onn/swap noapic rhgb quiet'
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
GRUB_DISABLE_OS_PROBER='true'
[root@ovirt-node01 ~]# cat /boot/grub2/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub2-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#
### BEGIN /etc/grub.d/00_header ###
set pager=1
if [ -f ${config_directory}/grubenv ]; then
load_env -f ${config_directory}/grubenv
elif [ -s $prefix/grubenv ]; then
load_env
fi
if [ "${next_entry}" ] ; then
set default="${next_entry}"
set next_entry=
save_env next_entry
set boot_once=true
else
set default="${saved_entry}"
fi
if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
else
menuentry_id_option=""
fi
export menuentry_id_option
if [ "${prev_saved_entry}" ]; then
set saved_entry="${prev_saved_entry}"
save_env saved_entry
set prev_saved_entry=
save_env prev_saved_entry
set boot_once=true
fi
function savedefault {
if [ -z "${boot_once}" ]; then
saved_entry="${chosen}"
save_env saved_entry
fi
}
function load_video {
if [ x$feature_all_video_module = xy ]; then
insmod all_video
else
insmod efi_gop
insmod efi_uga
insmod ieee1275_fb
insmod vbe
insmod vga
insmod video_bochs
insmod video_cirrus
fi
}
terminal_output console
if [ x$feature_timeout_style = xy ] ; then
set timeout_style=menu
set timeout=5
# Fallback normal timeout code in case the timeout_style feature is
# unavailable.
else
set timeout=5
fi
### END /etc/grub.d/00_header ###
### BEGIN /etc/grub.d/00_tuned ###
set tuned_params=""
set tuned_initrd=""
### END /etc/grub.d/00_tuned ###
### BEGIN /etc/grub.d/01_users ###
if [ -f ${prefix}/user.cfg ]; then
source ${prefix}/user.cfg
if [ -n "${GRUB2_PASSWORD}" ]; then
set superusers="root"
export superusers
password_pbkdf2 root ${GRUB2_PASSWORD}
fi
fi
### END /etc/grub.d/01_users ###
### BEGIN /etc/grub.d/08_fallback_counting ###
insmod increment
# Check if boot_counter exists and boot_success=0 to activate this behaviour.
if [ -n "${boot_counter}" -a "${boot_success}" = "0" ]; then
# if countdown has ended, choose to boot rollback deployment,
# i.e. default=1 on OSTree-based systems.
if [ "${boot_counter}" = "0" -o "${boot_counter}" = "-1" ]; then
set default=1
set boot_counter=-1
# otherwise decrement boot_counter
else
decrement boot_counter
fi
save_env boot_counter
fi
### END /etc/grub.d/08_fallback_counting ###
### BEGIN /etc/grub.d/10_linux ###
insmod part_msdos
insmod ext2
set root='hd1,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 b6557c59-e11f-471b-8cb1-70c47b0b4b29
else
search --no-floppy --fs-uuid --set=root b6557c59-e11f-471b-8cb1-70c47b0b4b29
fi
insmod part_msdos
insmod ext2
set boot='hd1,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=boot --hint-bios=hd1,msdos1 --hint-efi=hd1,msdos1 --hint-baremetal=ahci1,msdos1 b6557c59-e11f-471b-8cb1-70c47b0b4b29
else
search --no-floppy --fs-uuid --set=boot b6557c59-e11f-471b-8cb1-70c47b0b4b29
fi
# This section was generated by a script. Do not modify the generated file -
# all changes will be lost the next time file is regenerated. Instead edit
# the BootLoaderSpec files.
#
# The blscfg command parses the BootLoaderSpec files stored in
# /boot/loader/entries and populates the boot menu. Please refer to the
# Boot Loader Specification documentation for the files format:
# https://www.freedesktop.org/wiki/Specifications/BootLoaderSpec/.
set default_kernelopts="root=/dev/mapper/onn-ovirt--node--ng--4.4.2--0.20200918.0+1 ro crashkernel=auto resume=/dev/mapper/onn-swap rd.lvm.lv=onn/ovirt-node-ng-4.4.2-0.20200918.0+1 rd.lvm.lv=onn/swap noapic rhgb quiet "
insmod blscfg
blscfg
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/10_reset_boot_success ###
# Hiding the menu is ok if last boot was ok or if this is a first boot
# attempt to boot the entry
if [ "${boot_success}" = "1" -o "${boot_indeterminate}" = "1" ]; then
set menu_hide_ok=1
else
set menu_hide_ok=0
fi
# Reset boot_indeterminate after a successful boot
if [ "${boot_success}" = "1" ] ; then
set boot_indeterminate=0
# Avoid boot_indeterminate causing the menu to be hidden more then once
elif [ "${boot_indeterminate}" = "1" ]; then
set boot_indeterminate=2
fi
# Reset boot_success for current boot
set boot_success=0
save_env boot_success boot_indeterminate
### END /etc/grub.d/10_reset_boot_success ###
### BEGIN /etc/grub.d/12_menu_auto_hide ###
if [ x$feature_timeout_style = xy ] ; then
if [ "${menu_show_once}" ]; then
unset menu_show_once
save_env menu_show_once
set timeout_style=menu
set timeout=60
elif [ "${menu_auto_hide}" -a "${menu_hide_ok}" = "1" ]; then
set orig_timeout_style=${timeout_style}
set orig_timeout=${timeout}
if [ "${fastboot}" = "1" ]; then
# timeout_style=menu + timeout=0 avoids the countdown code keypress check
set timeout_style=menu
set timeout=0
else
set timeout_style=hidden
set timeout=1
fi
fi
fi
### END /etc/grub.d/12_menu_auto_hide ###
### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###
### BEGIN /etc/grub.d/20_ppc_terminfo ###
### END /etc/grub.d/20_ppc_terminfo ###
### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###
### BEGIN /etc/grub.d/30_uefi-firmware ###
### END /etc/grub.d/30_uefi-firmware ###
### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###
### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
--
Mike Lindsay
Senior Systems Administrator
Technological Maintenance and Support
CBC/SRC
mike.lindsay(a)cbc.ca
(o) 416-205-8992
(c) 416-819-2841
Re: CEPH - Opinions and ROI
by penguin pages
These are all storage rich servers.
Drives:
USB 3 64GB Boot / OS
512GB SSD (Gluster HCI: volumes "engine", "data" , "vmstore" "iso" I added last one to .. well to learn if I could extend with LVM ;)
1TB SSD: (VDO+Gluster Manual build due to brick and fqdn issues in oVirt, It did import once it was created so that is good.. )
1TB SSD/NVMe: (?? CEPH ??
The goal is to learn the technology and play, but keep several independent
volumes I can move or back up important systems to/from, so that if my
playing around messes things up I have a fallback.
I would try Red Hat Container Storage, but it is a home lab, my budget is
all used up on hardware, and so: CentOS. I am hoping oVirt has a setup
process similar to "yum install -y gluster-ansible-roles", but for Ceph.
This video implies something of that ilk exists
(https://www.youtube.com/watch?v=wIw7RjHPhzs), but <sigh> it jumps right
into the setup and never mentions how you get that plugin into Cockpit...
and is there an "oVirt" version?
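A hedged guess at what the video used (the package name is an assumption;
it shipped with the Ceph 4-era installer, and its availability on plain
CentOS 8 is something to verify):

yum install -y cockpit-ceph-installer   # package name is an assumption; verify it exists in your repos
systemctl enable --now cockpit.socket   # Cockpit itself ships with CentOS 8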
java.lang.reflect.UndeclaredThrowableException - oVirt engine UI
by Jeremey Wise
I tried to post on the website, but it did not seem to work, so sorry if
this is a double post.
On oVirt login this AM, it accepted my username and password but I got a
Java error (java.lang.reflect.UndeclaredThrowableException). I restarted
the oVirt engine:
##
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
hosted-engine --vm-status
#make sure that the status is shutdown before restarting
hosted-engine --vm-start
hosted-engine --vm-status
#make sure the status is health before leaving maintenance mode
hosted-engine --set-maintenance --mode=none
##
[root@thor ~]# hosted-engine --vm-status
--== Host thor.penguinpages.local (id: 1) status ==--
Host ID : 1
Host timestamp : 65342
Score : 3400
Engine status : {"vm": "down", "health": "bad",
"detail": "unknown", "reason": "vm not running on this host"}
Hostname : thor.penguinpages.local
Local maintenance : False
stopped : False
crc32 : 824c29fd
conf_on_shared_storage : True
local_conf_timestamp : 65342
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=65342 (Wed Sep 30 08:11:45 2020)
host-id=1
score=3400
vm_conf_refresh_time=65342 (Wed Sep 30 08:11:45 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host medusa.penguinpages.local (id: 3) status ==--
Host ID : 3
Host timestamp : 87556
Score : 3400
Engine status : {"vm": "up", "health": "good",
"detail": "Up"}
Hostname : medusa.penguinpages.local
Local maintenance : False
stopped : False
crc32 : 63296a70
conf_on_shared_storage : True
local_conf_timestamp : 87556
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=87556 (Wed Sep 30 08:11:39 2020)
host-id=3
score=3400
vm_conf_refresh_time=87556 (Wed Sep 30 08:11:39 2020)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False
[root@thor ~]# yum update -y
Last metadata expiration check: 0:31:17 ago on Wed 30 Sep 2020 09:17:03 AM
EDT.
Dependencies resolved.
Nothing to do.
Complete!
[root@thor ~]#
Googled around... just found this thread:
##
https://bugzilla.redhat.com/show_bug.cgi?id=1378045
# pgAdmin: connect to ovirte01.penguinpages.com as user engine, to the engine DB
select mac_addr from vm_interface
"00:16:3e:57:0d:47"
"56:6f:86:41:00:01"
"56:6f:86:41:00:00"
"56:6f:86:41:00:02"
"56:6f:86:41:00:03"
"56:6f:86:41:00:04"
"56:6f:86:41:00:05"
"56:6f:86:41:00:15"
"56:6f:86:41:00:16"
"56:6f:86:41:00:17"
"56:6f:86:41:00:18"
"56:6f:86:41:00:19"
# Note one field is "null"
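A hedged follow-up query to pin down which interface row carries the NULL
MAC (the id and vm_guid column names are from the engine schema as I recall
it, so verify before relying on them):

# run on the engine VM; assumes local postgres access to the engine DB
sudo -u postgres psql engine -c \
  "select id, vm_guid, mac_addr from vm_interface where mac_addr is null;"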
Questions:
1) Is this bad?
2) How do I fix it?
3) Any idea on the root cause?
--
penguinpages <jeremey.wise(a)gmail.com>
Gluster Volumes - Correct Peer Connection
by Jeremey Wise
I just noticed that when the HCI setup built the gluster engine / data /
vmstore volumes, it correctly used the definition of the 10Gb "back end"
interfaces/hosts.
But the oVirt Engine is NOT referencing this.
It lists the bricks on the 1Gb "management / host" interfaces. Is this a
GUI issue? I doubt it; how do I correct it?
### Data Volume Example
Name: data
Volume ID: 0ae7b487-8b87-4192-bd30-621d445902fe
Volume Type: Replicate
Replica Count: 3
Number of Bricks: 3
Transport Types: TCP
Maximum no of snapshots: 256
Capacity: 999.51 GiB total, 269.02 GiB used, 730.49 GiB free, 297.91 GiB
guaranteed free, 78% Deduplication/Compression savings
Bricks (host, path, used, status):
medusa.penguinpages.local   medusa.penguinpages.local:/gluster_bricks/data/data   25%   OK
odin.penguinpages.local     odin.penguinpages.local:/gluster_bricks/data/data     25%   OK
thor.penguinpages.local     thor.penguinpages.local:/gluster_bricks/data/data     25%   OK
# I have a storage back end on 172.16.101.x, which is 10Gb dedicated for
# replication. The peers reflect this:
[root@odin c4918f28-00ce-49f9-91c8-224796a158b9]# gluster peer status
Number of Peers: 2
Hostname: thorst.penguinpages.local
Uuid: 7726b514-e7c3-4705-bbc9-5a90c8a966c9
State: Peer in Cluster (Connected)
Hostname: medusast.penguinpages.local
Uuid: 977b2c1d-36a8-4852-b953-f75850ac5031
State: Peer in Cluster (Connected)
[root@odin c4918f28-00ce-49f9-91c8-224796a158b9]#
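For what it's worth, a quick way to check which hostnames the bricks
themselves were registered with, independent of what the engine UI shows:

gluster volume info data | grep -i brick   # each Brick line shows the hostname the brick was created with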
--
penguinpages <jeremey.wise(a)gmail.com>