Re: Ovirt host update bug
by Ales Musil
Hi,
you also need to add "rdo-ovn-host", "python3-rdo-openvswitch" and
"rdo-ovn-central" to the exclude list.
See
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RIHO32QA3NT...
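With those entries added, the exclude section quoted below would end up looking roughly like this (assembled from the names mentioned in this thread; the globs may already cover some of the explicit names, so treat it as a sketch rather than a verified config):

exclude=
 # ansible-2.9.27-4.el8 shipped in yoga repo is breaking dependencies on oVirt side
 ansible
 ansible-test
 rdo-openvswitch*
 rdo-ovn*
 rdo-ovn-host
 python3-rdo-openvswitch
 rdo-ovn-central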
Best regards,
Ales
On Wed, Oct 19, 2022 at 12:35 PM Brett Maton <matonb(a)ltresources.co.uk>
wrote:
> I'm seeing the same error
>
> Repo config:
>
> # grep 'ovirt-45-centos-stream-openstack-yoga'
> /etc/yum.repos.d/CentOS-oVirt-4.5.repo -B1 -A15
>
> [ovirt-45-centos-stream-openstack-yoga]
> name=CentOS Stream $releasever - oVirt 4.5 - OpenStack Yoga Repository
> # baseurl=
> http://mirror.centos.org/centos/$stream/cloud/$basearch/openstack-yoga/
> mirrorlist=
> http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=cloud-o...
> gpgcheck=1
> gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Cloud
> enabled=1
> module_hotfixes=1
> exclude=
> # ansible-2.9.27-4.el8 shipped in yoga repo is breaking dependencies on
> oVirt side
> ansible
> ansible-test
> rdo-openvswitch*
> rdo-ovn*
>
>
> Update attempt:
>
> # yum clean all
> 187 files removed
>
> # dnf update
> CentOS-8-stream - Ceph Pacific                             781 kB/s | 456 kB  00:00
> CentOS-8-stream - Gluster 10                               175 kB/s |  40 kB  00:00
> CentOS-8 - NFV OpenvSwitch                                 364 kB/s | 168 kB  00:00
> CentOS-OpsTools - collectd                                 169 kB/s |  41 kB  00:00
> CentOS Stream 8 - AppStream                                 24 MB/s |  25 MB  00:01
> CentOS Stream 8 - BaseOS                                    23 MB/s |  25 MB  00:01
> CentOS Stream 8 - Extras                                    39 kB/s |  18 kB  00:00
> CentOS Stream 8 - Extras common packages                    24 kB/s | 4.9 kB  00:00
> CentOS Stream 8 - PowerTools                               9.7 MB/s | 5.1 MB  00:00
> CentOS Stream 8 - oVirt 4.5                                4.1 MB/s | 1.2 MB  00:00
> CentOS Stream 8 - oVirt 4.5 - OpenStack Yoga Repository    3.4 MB/s | 2.2 MB  00:00
> oVirt upstream for CentOS Stream 8 - oVirt 4.5              47 kB/s | 408 kB  00:08
> Extra Packages for Enterprise Linux 8 - x86_64              11 MB/s |  13 MB  00:01
> Extra Packages for Enterprise Linux Modular 8 - x86_64     830 kB/s | 733 kB  00:00
> Extra Packages for Enterprise Linux 8 - Next - x86_64      1.5 MB/s | 1.4 MB  00:00
> Error:
> Problem 1: package rdo-ovn-central-2:22.06-3.el8.noarch requires rdo-ovn
> = 2:22.06-3.el8, but none of the providers can be installed
> - cannot install the best update candidate for package
> ovn-2021-central-21.12.0-82.el8s.x86_64
> - package rdo-ovn-2:22.06-3.el8.noarch is filtered out by exclude
> filtering
> Problem 2: package python3-rdo-openvswitch-2:2.17-3.el8.noarch requires
> rdo-openvswitch = 2:2.17-3.el8, but none of the providers can be installed
> - cannot install the best update candidate for package
> python3-openvswitch2.15-2.15.0-119.el8s.x86_64
> - package rdo-openvswitch-2:2.17-3.el8.noarch is filtered out by exclude
> filtering
> (try to add '--skip-broken' to skip uninstallable packages or '--nobest'
> to use not only best candidate packages)
>
>
> Regards,
> Brett
> ------------------------------
> *From:* Lev Veyde <lveyde(a)redhat.com>
> *Sent:* 19 October 2022 11:14
> *To:* mmoon(a)maxistechnology.com <mmoon(a)maxistechnology.com>
> *Cc:* users(a)ovirt.org <users(a)ovirt.org>
> *Subject:* [ovirt-users] Re: Ovirt host update bug
>
> I checked with the networking team, and it looks like the issue is with the
> conflicting OVS/OVN packages released on the OpenStack channel.
>
> Fixing that on our side will require releasing a new version, but one can
> try to fix it manually by modifying the
> /etc/yum.repos.d/CentOS-oVirt-4.5.repo file.
>
> 1. Find the [ovirt-45-centos-stream-openstack-yoga] section
> 2. At the end of the section look for ansible-test under exclude=
> 3. Add rdo-openvswitch* and rdo-ovn*, each on its own line, in the same
> way as the ansible and ansible-test entries that already exist
>
>
>
> On Wed, Oct 19, 2022 at 1:49 AM <mmoon(a)maxistechnology.com> wrote:
>
> Hey, I'm having an issue; curious if anyone can help.
>
> I'm trying to update my ovirt cluster from 4.5.2.4-1.el8 to 4.5.3.1 but
> have run into a problem with the update installer.
>
> The environment is:
>
> Static hostname: ovirt2.xxx.xxx
> Icon name: computer-desktop
> Chassis: desktop
> Machine ID: 0eb1fcff65214fb399c9d2ffaf1f5a29
> Boot ID: dbc7438e4d464209ac79452410cf60e7
> Operating System: CentOS Stream 8
> CPE OS Name: cpe:/o:centos:centos:8
> Kernel: Linux 4.18.0-408.el8.x86_64
> Architecture: x86-64
>
>
>
> Filesystem 1K-blocks Used Available Use% Mounted on
> devtmpfs 8023804 0 8023804 0% /dev
> tmpfs 8055520 24 8055496 1% /dev/shm
> tmpfs 8055520 99708 7955812 2% /run
> tmpfs 8055520 0 8055520 0% /sys/fs/cgroup
> /dev/mapper/cs-root 73364480 11401568 61962912 16% /
> /dev/mapper/cs-home 166691304 1467260 165224044 1% /home
> /dev/sda2 1038336 262972 775364 26% /boot
> /dev/sda1 613184 7416 605768 2% /boot/efi
> tmpfs 1611104 12 1611092 1% /run/user/42
> tmpfs 1611104 4 1611100 1% /run/user/1000
>
>
> There are 3 hosts, which can all detect and begin the update and get most
> of the way through it before failing and returning to a non-operational
> state. The log file says that the host is unable to resolve the virtual
> switch dependency.
> log excerpt:
> "stdout" : "fatal: [192.168.2.18]: FAILED! => {\"changed\": false,
> \"failures\": [], \"msg\": \"Depsolve Error occurred: \\n Problem 1:
> package ovirt-openvswitch-2.15-4.el8.noarch requires openvswitch2.15, but
> none of the providers can be installed\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-117.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-106.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-110.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-115.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-119.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2
> .17 provided by openvswitch2.15-2.15.0-22.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-23.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-24.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-27.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-30.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-32.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-35.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-37.el8s.x86_64\\n - packag
> e rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-39.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-41.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-47.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-48.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-51.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-52.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-53.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 <
> 2.17 provided by openvswitch2.15-2.15.0-54.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-56.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-6.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-72.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-75.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-80.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-81.el8s.x86_64\\n - package
> rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17
> provided by openvswitch2.15-2.15.0-88.el8s.x86_64\\n - cannot
> install the best update candidate for package
> ovirt-openvswitch-2.15-4.el8.noarch\\n - cannot install the best update
> candidate for package openvswitch2.15-2.15.0-117.el8s.x86_64\\n Problem 2:
> package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes
> python3-openvswitch2.15 < 2.17 provided by
> python3-openvswitch2.15-2.15.0-119.el8s.x86_64\\n - package
> openvswitch2.15-ipsec-2.15.0-119.el8s.x86_64 requires
> python3-openvswitch2.15 = 2.15.0-119.el8s, but none of the providers can be
> installed\\n - cannot install the best update candidate for package
> python3-openvswitch2.15-2.15.0-117.el8s.x86_64\\n - cannot install the
> best update candidate for package
> openvswitch2.15-ipsec-2.15.0-117.el8s.x86_64\\n Problem 3: package
> ovirt-openvswitch-ovn-common-2.15-4.el8.noarch requires ovn-2021, but none
> of the providers can be installed\\n - package
> rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by
> ovn-2021-21.12.0-82.el8s.x86_64\\n - package rdo-ovn-2:22.06-3.el8.noarc
> h obsoletes ovn-2021 < 22.06 provided by
> ovn-2021-21.03.0-21.el8s.x86_64\\n - package rdo-ovn-2:22.06-3.el8.noarch
> obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.03.0-40.el8s.x86_64\\n
> - package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided
> by ovn-2021-21.06.0-17.el8s.x86_64\\n - package
> rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by
> ovn-2021-21.06.0-29.el8s.x86_64\\n - package rdo-ovn-2:22.06-3.el8.noarch
> obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.12.0-11.el8s.x86_64\\n
> - cannot install the best update candidate for package
> ovn-2021-21.12.0-82.el8s.x86_64\\n - cannot install the best update
> candidate for package ovirt-openvswitch-ovn-common-2.15-4.el8.noarch\\n
> Problem 4: package ovirt-openvswitch-ovn-host-2.15-4.el8.noarch requires
> ovn-2021-host, but none of the providers can be installed\\n - package
> rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06 provided
> by ovn-2021-host-21.12.0-82.el8s.x86_64\\n - pa
> ckage rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06
> provided by ovn-2021-host-21.03.0-21.el8s.x86_64\\n - package
> rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06 provided
> by ovn-2021-host-21.03.0-40.el8s.x86_64\\n - package
> rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06 provided
> by ovn-2021-host-21.06.0-17.el8s.x86_64\\n - package
> rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06 provided
> by ovn-2021-host-21.06.0-29.el8s.x86_64\\n - package
> rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06 provided
> by ovn-2021-host-21.12.0-11.el8s.x86_64\\n - cannot install the best
> update candidate for package ovn-2021-host-21.12.0-82.el8s.x86_64\\n -
> cannot install the best update candidate for package
> ovirt-openvswitch-ovn-host-2.15-4.el8.noarch\", \"rc\": 1, \"results\":
> []}",
>
> On one host, when I attempt to open the oVirt web console from the GUI it
> won't open, and virtual machines on that particular host are unable to open
> an oVirt web console either, citing a handshake error.
> log excerpt for host web console:
>
> Oct 17 15:58:29 ovirt-host-05 journal[96215]: Domain id=24
> name='cen-79-dmz-02' uuid=82fefcfa-bce0-4397-a575-48d3d08fdb61 is tainted:
> custom-ga-command
> Oct 17 15:58:29 ovirt-host-05 journal[96215]: Domain id=25
> name='win-10-utl' uuid=11f71942-1d88-40a0-a6c5-45e7718afbcf is tainted:
> custom-ga-command
>
> Oct 17 03:37:01 ovirt-host-05 ovs-appctl[37436]:
> ovs|00001|unixctl|WARN|failed to connect to
> /var/run/ovn/ovn-controller.18617.ctl
>
> Thank you in advance
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/5USJX6G7ZAZ...
>
>
>
> --
>
> Lev Veyde
>
> Senior Software Engineer, RHCE | RHCVA | MCITP
>
> Red Hat Israel
>
> <https://www.redhat.com>
>
> lev(a)redhat.com | lveyde(a)redhat.com
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/HM7YY77DZQI...
>
--
Ales Musil
Senior Software Engineer - OVN Core
Red Hat EMEA <https://www.redhat.com>
amusil(a)redhat.com IM: amusil
<https://red.ht/sig>
Ovirt host update bug
by mmoon@maxistechnology.com
Hey, I'm having an issue; curious if anyone can help.
I'm trying to update my ovirt cluster from 4.5.2.4-1.el8 to 4.5.3.1 but have run into a problem with the update installer.
The environment is:
Static hostname: ovirt2.xxx.xxx
Icon name: computer-desktop
Chassis: desktop
Machine ID: 0eb1fcff65214fb399c9d2ffaf1f5a29
Boot ID: dbc7438e4d464209ac79452410cf60e7
Operating System: CentOS Stream 8
CPE OS Name: cpe:/o:centos:centos:8
Kernel: Linux 4.18.0-408.el8.x86_64
Architecture: x86-64
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 8023804 0 8023804 0% /dev
tmpfs 8055520 24 8055496 1% /dev/shm
tmpfs 8055520 99708 7955812 2% /run
tmpfs 8055520 0 8055520 0% /sys/fs/cgroup
/dev/mapper/cs-root 73364480 11401568 61962912 16% /
/dev/mapper/cs-home 166691304 1467260 165224044 1% /home
/dev/sda2 1038336 262972 775364 26% /boot
/dev/sda1 613184 7416 605768 2% /boot/efi
tmpfs 1611104 12 1611092 1% /run/user/42
tmpfs 1611104 4 1611100 1% /run/user/1000
There are 3 hosts, which can all detect and begin the update and get most of the way through it before failing and returning to a non-operational state. The log file says that the host is unable to resolve the virtual switch dependency.
log excerpt:
"stdout" : "fatal: [192.168.2.18]: FAILED! => {\"changed\": false, \"failures\": [], \"msg\": \"Depsolve Error occurred: \\n Problem 1: package ovirt-openvswitch-2.15-4.el8.noarch requires openvswitch2.15, but none of the providers can be installed\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-117.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-106.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-110.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-115.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-119.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2
.17 provided by openvswitch2.15-2.15.0-22.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-23.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-24.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-27.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-30.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-32.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-35.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-37.el8s.x86_64\\n - packag
e rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-39.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-41.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-47.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-48.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-51.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-52.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-53.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 <
2.17 provided by openvswitch2.15-2.15.0-54.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-56.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-6.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-72.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-75.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-80.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-81.el8s.x86_64\\n - package rdo-openvswitch-2:2.17-3.el8.noarch obsoletes openvswitch2.15 < 2.17 provided by openvswitch2.15-2.15.0-88.el8s.x86_64\\n - cannot
install the best update candidate for package ovirt-openvswitch-2.15-4.el8.noarch\\n - cannot install the best update candidate for package openvswitch2.15-2.15.0-117.el8s.x86_64\\n Problem 2: package python3-rdo-openvswitch-2:2.17-3.el8.noarch obsoletes python3-openvswitch2.15 < 2.17 provided by python3-openvswitch2.15-2.15.0-119.el8s.x86_64\\n - package openvswitch2.15-ipsec-2.15.0-119.el8s.x86_64 requires python3-openvswitch2.15 = 2.15.0-119.el8s, but none of the providers can be installed\\n - cannot install the best update candidate for package python3-openvswitch2.15-2.15.0-117.el8s.x86_64\\n - cannot install the best update candidate for package openvswitch2.15-ipsec-2.15.0-117.el8s.x86_64\\n Problem 3: package ovirt-openvswitch-ovn-common-2.15-4.el8.noarch requires ovn-2021, but none of the providers can be installed\\n - package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.12.0-82.el8s.x86_64\\n - package rdo-ovn-2:22.06-3.el8.noarc
h obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.03.0-21.el8s.x86_64\\n - package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.03.0-40.el8s.x86_64\\n - package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.06.0-17.el8s.x86_64\\n - package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.06.0-29.el8s.x86_64\\n - package rdo-ovn-2:22.06-3.el8.noarch obsoletes ovn-2021 < 22.06 provided by ovn-2021-21.12.0-11.el8s.x86_64\\n - cannot install the best update candidate for package ovn-2021-21.12.0-82.el8s.x86_64\\n - cannot install the best update candidate for package ovirt-openvswitch-ovn-common-2.15-4.el8.noarch\\n Problem 4: package ovirt-openvswitch-ovn-host-2.15-4.el8.noarch requires ovn-2021-host, but none of the providers can be installed\\n - package rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06 provided by ovn-2021-host-21.12.0-82.el8s.x86_64\\n - pa
ckage rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06 provided by ovn-2021-host-21.03.0-21.el8s.x86_64\\n - package rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06 provided by ovn-2021-host-21.03.0-40.el8s.x86_64\\n - package rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06 provided by ovn-2021-host-21.06.0-17.el8s.x86_64\\n - package rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06 provided by ovn-2021-host-21.06.0-29.el8s.x86_64\\n - package rdo-ovn-host-2:22.06-3.el8.noarch obsoletes ovn-2021-host < 22.06 provided by ovn-2021-host-21.12.0-11.el8s.x86_64\\n - cannot install the best update candidate for package ovn-2021-host-21.12.0-82.el8s.x86_64\\n - cannot install the best update candidate for package ovirt-openvswitch-ovn-host-2.15-4.el8.noarch\", \"rc\": 1, \"results\": []}",
On one host, when I attempt to open the oVirt web console from the GUI it won't open, and virtual machines on that particular host are unable to open an oVirt web console either, citing a handshake error.
log excerpt for host web console:
Oct 17 15:58:29 ovirt-host-05 journal[96215]: Domain id=24 name='cen-79-dmz-02' uuid=82fefcfa-bce0-4397-a575-48d3d08fdb61 is tainted: custom-ga-command
Oct 17 15:58:29 ovirt-host-05 journal[96215]: Domain id=25 name='win-10-utl' uuid=11f71942-1d88-40a0-a6c5-45e7718afbcf is tainted: custom-ga-command
Oct 17 03:37:01 ovirt-host-05 ovs-appctl[37436]: ovs|00001|unixctl|WARN|failed to connect to /var/run/ovn/ovn-controller.18617.ctl
Thank you in advance
hosted-engine-setup --deploy fail on Centos Stream 8
by andrea.crisanti@uniroma1.it
Hi,
I am trying to install oVirt 4.5 on a 4-host cluster running CentOS Stream 8, but the engine does not start and the whole process fails.
Here is my procedure
dnf install centos-release-ovirt45
dnf module reset virt
dnf module enable virt:rhel
dnf install ovirt-engine-appliance
dnf install ovirt-hosted-engine-setup
The latest version of ansible [ansible-core 2.13] uses Python 3.9, and the installation fails because some Python 3.9 modules are missing
[python39-netaddr, python39-jmespath] and cannot be installed [they conflict with python3-jmespath]. So I downgraded ansible to ansible-core 2.12:
dnf downgrade ansible-core
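(To confirm which version actually ended up installed after the downgrade, a routine check:)

rpm -q ansible-core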
Now
hosted-engine-setup --deploy --4
proceeds further but stops because it cannot start the engine:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for the host to be up]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not up, please check logs, perhaps also on the engine machine"}
I looked into the log file
/var/log//ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-bootstrap_local_vm-20221007132728-yp7cd1.log
and I found the following error:
2022-10-07 13:28:30,881+0200 ERROR ansible failed {
"ansible_host": "localhost",
"ansible_playbook": "/usr/share/ovirt-hosted-engine-setup/he_ansible/trigger_role.yml",
"ansible_result": {
"_ansible_no_log": false,
"changed": false,
"cmd": [
"virsh",
"net-undefine",
"default"
],
"delta": "0:00:00.039258",
"end": "2022-10-07 13:28:30.710401",
"invocation": {
"module_args": {
"_raw_params": "virsh net-undefine default",
"_uses_shell": false,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": false
}
},
"msg": "non-zero return code",
"rc": 1,
"start": "2022-10-07 13:28:30.671143",
"stderr": "error: failed to get network 'default'\nerror: Network not found: no network with matching name 'default'",
"stderr_lines": [
"error: failed to get network 'default'",
"error: Network not found: no network with matching name 'default'"
],
"stdout": "",
"stdout_lines": []
},
"ansible_task": "Update libvirt default network configuration, undefine",
"ansible_type": "task",
"status": "FAILED",
"task_duration": 0
}
Needless to say
firewalld and libvirtd are both up
and virsh net-list gives:
 Name          State    Autostart   Persistent
 ------------------------------------------------
 ;vdsmdummy;   active   no          no
 default       active   no          yes
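A few things that may be worth checking, based purely on my reading of the log and the net-list output above (guesses, not a verified fix): the default network is active but not set to autostart, and the setup's undefine/define cycle seems to hit a state mismatch.

virsh net-dumpxml default                # is the network XML intact?
virsh net-autostart default              # net-list above shows Autostart: no
journalctl -u libvirtd --since today     # anything from libvirt at deploy time?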
I googled around without success.
Has anyone had similar problems?
End of past July I installed Ovirt on another cluster running Centos Stream 8 following the procedure I just described with no problem.
If needed I can post all log files.
Thanks for the help.
Best
Andrea
Get Vlan IDs from multiple VMs
by chry3@hotmail.com
Hello,
I'm extracting various pieces of information from different VMs that are deployed on a server, which I need to process later. I already have information such as IPs, MACs, CPU cores, etc., but I would also like to get the VLAN ID from each of the VMs. How can I do that? How can I get the network profile that my VM is using?
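For what it's worth, one way to chase this down is via the engine's REST API: a NIC points at a vNIC profile, the profile points at a network, and the network carries the VLAN tag. A hedged sketch with curl (the FQDN, credentials, and IDs below are placeholders):

# list VMs and note their ids
curl -s -k -u 'admin@internal:password' -H 'Accept: application/json' \
  https://engine.example.com/ovirt-engine/api/vms

# list a VM's NICs; each NIC references a vnic_profile id
curl -s -k -u 'admin@internal:password' -H 'Accept: application/json' \
  https://engine.example.com/ovirt-engine/api/vms/<vm_id>/nics

# resolve the vNIC profile to its network id
curl -s -k -u 'admin@internal:password' -H 'Accept: application/json' \
  https://engine.example.com/ovirt-engine/api/vnicprofiles/<profile_id>

# the network object includes the VLAN id, if the network is tagged
curl -s -k -u 'admin@internal:password' -H 'Accept: application/json' \
  https://engine.example.com/ovirt-engine/api/networks/<network_id>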
Kind Regards.
Adding documentation - Migrating from a self-hosted engine to a standalone
by David White
Hi. I'm working on adding instructions to the documentation for how to migrate from a self-hosted engine to a standalone Manager. This will be the first time I've contributed anything to this project (or any larger open source project for that matter), so before I get too far in the weeds, I wanted to run this by the community so as not to waste my time if this is a bad idea.
My general high level documentation is pasted at the bottom of this email. These are the steps that I took when I did my own migration.
(Note to self: Need to add a step at the end to log into the Manager and disable the Gluster service by going to Compute --> Clusters --> (Edit the cluster) --> Uncheck the "Enable Gluster Service" checkbox.)
And I've forked the ovirt-site repo and started working on the documentation here: https://github.com/dmwhite823/ovirt-site/tree/migrate-engine-to-standalone
I still have a ways to go before I'm ready to request a PR, but I'm open to any & all feedback. I think that I'm done with most of the changes necessary to ovirt-site/source/documentation/migrating_from_a_self-hosted_engine_to_a_standalone_manager/index.adoc, but I'm unclear why there's also a master.adoc file.
Note that I copied files into the new directory of migrating_from_a_self-hosted_engine_to_a_standalone_manager from the already existing migrating_from_a_standalone_manager_to_a_self-hosted_engine structure directory, and am editing the files in the new directory.
Will this be useful / helpful? Would others on the team like to contribute to improving these instructions prior to issuing a PR to the ovirt-site git repo? Am I even doing this right? 🤣
Thanks,
David
High level overview of steps required (pasted below):
Pre-req: Make sure VMs are not using an HA lease on a Gluster domain
1) Migrate all storage off Gluster
2) Remove all gluster volumes from oVirt
3) Put cluster into global maintenance
hosted-engine --set-maintenance --mode=global
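(You can confirm global maintenance took effect before proceeding; hosted-engine --vm-status prints a GLOBAL MAINTENANCE banner when it is active:)
hosted-engine --vm-status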
4) On new VM:
Install CentOS Stream & add ovirt repos
dnf install centos-release-ovirt45
dnf module enable javapackages-tools pki-deps postgresql:12 mod_auth_openidc:2.3 nodejs:14
Stop & Disable the engine
# systemctl stop ovirt-engine
# systemctl disable ovirt-engine
Set up DNS in /etc/hosts if you don't have local DNS servers
Backup the engine
# engine-backup --mode=backup --file=file_name --log=log_file_name
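(If the backup was taken on the old engine, copy it over to the new VM before restoring; the hostname here is a placeholder:)
# scp engine-backup-09172022-1 root@new-engine.example.com:/root/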
Restore
# engine-backup --mode=restore --file=engine-backup-09172022-1 --log=restore --restore-permissions
Run engine-setup
# engine-setup
oVirt 4.5.3 is now generally available
by Lev Veyde
The oVirt project is excited to announce the general availability of oVirt
4.5.3, as of October 18th, 2022.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics on top of oVirt 4.4.
Important notes before you install / upgrade
Some of the features included in oVirt 4.5.3 require content that is
available in RHEL 8.6 (or newer) and derivatives.
NOTE: If you’re going to install oVirt 4.5.3 on RHEL or similar, please
read Installing on RHEL or derivatives
<https://ovirt.org/download/install_on_rhel.html> first.
Documentation
Be sure to follow instructions for oVirt 4.5!
- If you want to try oVirt as quickly as possible, follow the instructions
  on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see
  the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide
  <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt
  <https://ovirt.org/community/about.html>.
What’s new in oVirt 4.5.3 Release?
This release is available now on x86_64 architecture for:
- CentOS Stream 8
- RHEL 8.6 and derivatives
This release supports Hypervisor Hosts on x86_64:
- oVirt Node NG (based on CentOS Stream 8)
- CentOS Stream 8
- RHEL 8.6 and derivatives
This release also supports Hypervisor Hosts on x86_64 as tech preview
without secure boot:
- CentOS Stream 9
- RHEL 9.0 and derivatives
- oVirt Node NG based on CentOS Stream 9
Builds are also available for ppc64le and aarch64.
Known issues:
- On EL9 with UEFI secure boot, vdsm fails to decode DMI data due to
  Bug 2081648 <https://bugzilla.redhat.com/show_bug.cgi?id=2081648> -
  python-dmidecode module fails to decode DMI data
Security fixes included in oVirt 4.5.3 compared to latest oVirt 4.5.2:
Bug list
<https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aov...>
Some of the RFEs with high user impact are listed below:
Bug list
<https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aov...>
Some of the Bugs with high user impact are listed below:
Bug list
<https://bugzilla.redhat.com/buglist.cgi?quicksearch=target_milestone%3Aov...>
oVirt Node will be released shortly after the release reaches the CentOS
mirrors.
See the release notes for installation instructions and a list of new
features and bugs fixed.
Additional resources:
- Read more about the oVirt 4.5.3 release highlights:
  https://www.ovirt.org/release/4.5.3/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog:
  https://blogs.ovirt.org/
--
Lev Veyde
Senior Software Engineer, RHCE | RHCVA | MCITP
Red Hat Israel
<https://www.redhat.com>
lev(a)redhat.com | lveyde(a)redhat.com
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
Network Interface Already In USe - Self-Hosted Install
by Matthew J Black
Hi Guys & Girls,
<begin_rant>
OK, so I am really, *really* starting to get fed up with this. I know this is probably my fault, but even if it is, the oVirt documentation isn't helping in any way (being... "less than clear").
What I would really like, instead of having to rely on the "black box" that is Ansible, is a simple set of clear-cut, step-by-step instructions, so that we actually *know* what is going on when attempting a Self-Hosted install. After all, oVirt's "competition" doesn't make things so difficult...
<end_rant>
Now that I've got that off my chest: I'm trying to do a straightforward Self-Hosted Install. I've followed the instructions in the oVirt doco pretty much to the letter, and I'm still having problems.
My (pre-install) set-up:
- A freshly installed server (oVirt_Node_1) running Rocky Linux 8.6 with 3 NICs - NIC_1, NIC_2, & NIC_3.
- There are three VLANs - VLAN_A (172.16.1.0/24), VLAN_B (172.16.2.0/24), & VLAN_C (172.16.3.0/24).
- NIC_1 & NIC_2 are formed into a bond (bond_1).
- bond_1 is an 802.3ad bond.
- bond_1 has 2 sub-interfaces - bond_1.a & bond_1.b
- Interface bond_1.a is in VLAN_A.
- Interface bond_1.b is in VLAN_B.
- NIC_3 is sitting in VLAN_C.
- VLAN_A is the everyday "working" VLAN where the rest of the servers all sit (ie DNS Servers, Local Repository Server, etc, etc, etc), and where the oVirt Engine (OVE) will sit.
- VLAN_B is for data throughput to and from the Ceph iSCSI Gateways in our Ceph Storage Cluster. This is a dedicated, isolated VLAN with no gateway (ie only the oVirt Hosting Nodes and the Ceph iSCSI Gateways are on this VLAN).
- VLAN_C is for OOB management traffic. This is a dedicated, isolated VLAN with no gateway.
Everything is working. Everything can ping properly back and forth within the individual VLANs and VLAN_A can ping out to the Internet via its gateway (172.16.1.1).
Because we don't require iSCSI connectivity for the OVE (it's on a working local Gluster TSP volume) the iSCSI hasn't *yet* been implemented.
After trying to do the install using our Local Repository Mirror (after discovering and mirroring all the required repositories), I gave up on that because for a "one-off" install it wasn't worth the time and effort it was taking, especially when it "seems" that the Ansible playbook wants the "original" repositories anyway - but that's another rant/issue.
So, I'm using all the original repositories as per the oVirt doco, including the special instructions for Rocky Linux and RHEL-derivatives in general, and using the defaults for the answers to the deployment script (except where there are no defaults) - and now I've got the following error:
~~~
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["virsh", "net-start", "default"], "delta": "0:00:00.031972", "end": "2022-10-04 16:41:38.603454", "msg": "non-zero return code", "rc": 1, "start": "2022-10-04 16:41:38.571482", "stderr": "error: Failed to start network default\nerror: internal error: Network is already in use by interface bond_1.a", "stderr_lines": ["error: Failed to start network default", "error: internal error: Network is already in use by interface bond_1.a"], "stdout": "", "stdout_lines": []}
[ ERROR ] Failed to execute stage 'Closing up': Failed getting local_vm_dir
~~~
The relevant lines from the log file (at least I think these are the relevant lines):
~~~
2022-10-04 16:41:35,712+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Update libvirt default network configuration, undefine]
2022-10-04 16:41:37,017+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': False, 'stdout': '', 'stderr': "error: failed to get network 'default'\nerror: Network not found: no network with matching name 'default'", 'rc': 1, 'cmd': ['virsh', 'net-undefine', 'default'], 'start': '2022-10-04 16:41:35.806251', 'end': '2022-10-04 16:41:36.839780', 'delta': '0:00:01.033529', 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'virsh net-undefine default', '_uses_shell': False, 'warn': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["error: failed to get network 'default'", "error: Network not found: no network with matching name 'default'"], '_ansible_no_log': False}
2022-10-04 16:41:37,118+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 ignored: [localhost]: FAILED! => {"changed": false, "cmd": ["virsh", "net-undefine", "default"], "delta": "0:00:01.033529", "end": "2022-10-04 16:41:36.839780", "msg": "non-zero return code", "rc": 1, "start": "2022-10-04 16:41:35.806251", "stderr": "error: failed to get network 'default'\nerror: Network not found: no network with matching name 'default'", "stderr_lines": ["error: failed to get network 'default'", "error: Network not found: no network with matching name 'default'"], "stdout": "", "stdout_lines": []}
2022-10-04 16:41:37,219+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Update libvirt default network configuration, define]
2022-10-04 16:41:38,421+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 ok: [localhost]
2022-10-04 16:41:38,522+1100 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:115 TASK [ovirt.ovirt.hosted_engine_setup : Activate default libvirt network]
2022-10-04 16:41:38,823+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 {'changed': False, 'stdout': '', 'stderr': 'error: Failed to start network default\nerror: internal error: Network is already in use by interface bond_1.a', 'rc': 1, 'cmd': ['virsh', 'net-start', 'default'], 'start': '2022-10-04 16:41:38.571482', 'end': '2022-10-04 16:41:38.603454', 'delta': '0:00:00.031972', 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'virsh net-start default', '_uses_shell': False, 'warn': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ['error: Failed to start network default', 'error: internal error: Network is already in use by interface bond_1.a'], '_ansible_no_log': False}
2022-10-04 16:41:38,924+1100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:113 fatal: [localhost]: FAILED! => {"changed": false, "cmd": ["virsh", "net-start", "default"], "delta": "0:00:00.031972", "end": "2022-10-04 16:41:38.603454", "msg": "non-zero return code", "rc": 1, "start": "2022-10-04 16:41:38.571482", "stderr": "error: Failed to start network default\nerror: internal error: Network is already in use by interface bond_1.a", "stderr_lines": ["error: Failed to start network default", "error: internal error: Network is already in use by interface bond_1.a"], "stdout": "", "stdout_lines": []}
2022-10-04 16:41:39,125+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 PLAY RECAP [localhost] : ok: 106 changed: 32 unreachable: 0 skipped: 61 failed: 1
2022-10-04 16:41:39,226+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:226 ansible-playbook rc: 2
2022-10-04 16:41:39,226+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:233 ansible-playbook stdout:
2022-10-04 16:41:39,226+1100 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:236 ansible-playbook stderr:
2022-10-04 16:41:39,226+1100 DEBUG otopi.plugins.gr_he_ansiblesetup.core.misc misc._closeup:475 {'otopi_host_net': {'ansible_facts': {'otopi_host_net': ['ens0p1', 'bond_1.a', 'bond_1.b']}, '_ansible_no_log': False, 'changed': False}, 'ansible-playbook_rc': 2}
2022-10-04 16:41:39,226+1100 DEBUG otopi.context context._executeMethod:145 method exception
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in _executeMethod
method['method']()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 485, in _closeup
raise RuntimeError(_('Failed getting local_vm_dir'))
RuntimeError: Failed getting local_vm_dir
2022-10-04 16:41:39,227+1100 ERROR otopi.context context._executeMethod:154 Failed to execute stage 'Closing up': Failed getting local_vm_dir
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:765 ENVIRONMENT DUMP - BEGIN
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:775 ENV BASE/error=bool:'True'
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:775 ENV BASE/exceptionInfo=list:'[(<class 'RuntimeError'>, RuntimeError('Failed getting local_vm_dir',), <traceback object at 0x7f5210013088>)]'
2022-10-04 16:41:39,228+1100 DEBUG otopi.context context.dumpEnvironment:779 ENVIRONMENT DUMP - END
~~~
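Since the libvirt "already in use" error usually points at an overlap between the network libvirt is trying to start and an interface that already exists on the host, comparing the two seems like a reasonable first check (my reading of the error, not a confirmed diagnosis):

~~~
# what subnet/bridge does the 'default' network want?
virsh net-dumpxml default

# what is already configured on the host?
ip -brief addr show
~~~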
So, would someone please help me in getting this sorted - I mean, how are we supposed to do this install if the interface we need to connect to the box in the first place can't be used because it's "already in use"?
Cheers
Dulux-Oz
Hyperconverged install fails to add second and third hosts
by Calvin Ellison
Hello fellow users, I'm having trouble standing up a brand new cluster using
Equinix Metal. The three servers are their "n3.xlarge.x86" model, which
uses an Intel Xeon Gold 6314U CPU in a Supermicro SSG-110P-NTR10-EI018
server.
The entire Hyperconverged installation process appears to complete without
error, but when I log into the manager only one host is listed and only
that host's Gluster brick appears in the UI. The only hint of a problem in
the UI is in the Tasks pane: two failed tasks to add the other hosts.
Where do I get started troubleshooting?
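A few hedged starting points, assuming the usual log locations for a hyperconverged deployment:

# on each host: did the Gluster peers and volumes actually form?
gluster peer status
gluster volume status

# on the deployment host: the setup logs
less /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log

# on the engine VM: the failed add-host tasks should be detailed here
less /var/log/ovirt-engine/engine.log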
Calvin Ellison
Systems Architect
calvin.ellison(a)voxox.com
+1 (213) 285-0555
<http://voxox.com>
<https://www.facebook.com/VOXOX/> <https://www.instagram.com/voxoxofficial/>
<https://www.linkedin.com/company/3573541/admin/>
<https://twitter.com/Voxox>
install ovirt on maas provider
by Charles Kozler
Hello - I am attempting to install ovirt hosted engine (engine running as a
VM in a cluster). I have configured 10 servers at a metal-as-a-service
provider. The server wiring has been configured to our specifications;
however, the MaaS provider requires bond0 to be set up beforehand and the
two interface VLANs preconfigured on the OS. I am not new to setting up
oVirt, though my experience has been different with every installation, and
this is a new type of deployment for me (MaaS + oVirt), so I have some questions.
bond0.2034 = Uplink to WAN switch
bond0.3071 = Uplink to LAN internal switch (will need to become ovirtmgmt)
The gluster storage links are separate and not in scope for this
conversation right now as I am just trying to establish my first host and
engine.
Historically, oVirt has been a bit aggressive in how it sets up its
networks at install (in my experience), and when any network came
preconfigured it would typically drop it entirely, which means I would lose
my SSH session while oVirt decided what to do.
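For what it's worth, the deploy script does prompt for which NIC to build the management bridge on, so in principle bond0.3071 can be selected there; a sketch of the prompt as I recall it (wording may vary by version):

hosted-engine --deploy
  ...
  Please indicate a nic to set ovirtmgmt bridge on (bond0.2034, bond0.3071) [bond0.2034]: bond0.3071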
I still don't know whether or not I have DRAC/OOB access to these systems,
so let's assume for now that I do not.
That all being said, I am looking to see if anyone can tell me what to
expect with my current configuration. I cannot risk running the hosted
engine install and having Ansible drop my network without first confirming
OOB availability, so any suggestions are welcome.
Error during deployment of ovirt-engine
by Jonas
Hello all
I'm trying to deploy an oVirt Engine through the cockpit interface.
Unfortunately the deployment fails with the following error:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set admin username]
[ INFO ] ok: [localhost -> 192.168.222.95]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Wait for ovirt-engine service to start]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Open a port on firewalld]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Expose engine VM webui over a local port via ssh port forwarding]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Evaluate temporary bootstrap engine VM URL]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Display the temporary bootstrap engine VM URL]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Detect VLAN ID]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set Engine public key as authorized key without validating the TLS/SSL certificates]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Obtain SSO token using username/password credentials]
[ ERROR ] ovirtsdk4.AuthError: Error during SSO authentication access_denied : Cannot authenticate user Invalid user credentials.
[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": false, "msg": "Error during SSO authentication access_denied : Cannot authenticate user Invalid user credentials."}
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost]
I can log in to the VM just fine over SSH, but when I try to log in to
the web interface as admin, the password is not accepted. I have tried both
complex and simple passwords for root/admin, but none have worked so far.
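One thing that may be worth trying, assuming the bootstrap engine VM is otherwise healthy: reset the admin password from inside the engine VM with the bundled AAA tool, then retry the web login (a guess at the usual remedy, not a verified fix for this deployment failure):

# on the engine VM
ovirt-aaa-jdbc-tool user password-reset admin --password-valid-to='2026-01-01 00:00:00Z'
systemctl restart ovirt-engine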
This previous discussion did not solve my problems:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/HMLCEG2LPSWF...
Thank you,
Jonas