On Mon, Sep 19, 2022 at 12:15 PM David White <dmwhite823(a)protonmail.com> wrote:
------- Original Message -------
On Monday, September 19th, 2022 at 4:44 AM, Yedidyah Bar David <didi(a)redhat.com> wrote:
> On Mon, Sep 19, 2022 at 11:31 AM David White dmwhite823(a)protonmail.com wrote:
>
> > Thank you.
> >
> > On the engine:
> >
> > [root@ovirt-engine1 dwhite]# rpm -qa | grep -i ansible-core
> > ansible-core-2.13.3-1.el8.x86_64
> >
> > So I downgraded ansible-core:
> > [root@ovirt-engine1 dwhite]# yum downgrade ansible-core
> >
> > [root@ovirt-engine1 dwhite]# rpm -qa | grep ansible-core
> > ansible-core-2.12.7-1.el8.x86_64
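
[Side note for the archives: a plain "yum downgrade" will likely be undone by the next "yum update", so if ansible-core 2.13 is the problem, it is probably worth pinning the working version as well. Something along these lines should do it on EL8 (untested on this exact engine):

  yum downgrade -y ansible-core-2.12.7          # downgrade to the known-good version
  yum install -y python3-dnf-plugin-versionlock # provides the "versionlock" subcommand
  yum versionlock add ansible-core              # pin it so a later update doesn't pull 2.13 back in
  yum versionlock list                          # confirm the lock is in place
]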
> >
> > After this, I tried again to deploy to the host. It still failed, but the playbooks got further. Reviewing the host-deploy log, it failed on:
> >
> > "task" : "Enable firewalld rules",
> > "task_path" :
"/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-firewalld/tasks/firewalld.yml:15",
> >
> > ... with the following failure:
> > "msg" : "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among
existing services Permanent and Non-Permanent(immediate) operation, Services are defined
by port/tcp relationship and named as they are in /etc/services (on most systems)",
> >
> > QUESTION:
> > Probably not the best or most elegant solution, but for my use case, is there
> > something within the engine itself that I can (or should) configure (maybe in the
> > Postgres database somewhere?) to tell it that I'm no longer using Gluster? I'm
> > completely off gluster now, so I'd prefer not to deploy it...
>
>
> I think it's a setting per DC/cluster, whether it supports gluster.
> Try editing your DCs/clusters.
>
> > Or is there a better way?
You're right. I went to Compute -> Clusters, clicked Edit, scrolled down in the General
section, and there is a checkbox to enable (or disable) the gluster service. It was
enabled, so I simply disabled it.
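
[For anyone who would rather script this than click through the UI: the same flag is exposed on the cluster object in the REST API as "gluster_service", so something like the following should turn it off (untested; the engine FQDN, password and cluster ID are placeholders):

  curl -k -u 'admin@internal:PASSWORD' \
    -X PUT -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
    -d '<cluster><gluster_service>false</gluster_service></cluster>' \
    'https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID'
]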
That said, it's interesting that glusterfs is actually installed on the new host, yet
the 'glusterfs' firewalld service is not available. So it does appear to me that
there's a bug somewhere else:
[root@cha2-storage]# firewall-cmd --get-services | grep -i gluster
(That command produces nothing, yet glusterfs is installed:)
[root@cha2-storage]# yum info glusterfs
Last metadata expiration check: 0:25:10 ago on Mon 19 Sep 2022 04:20:28 AM EDT.
Installed Packages
Name : glusterfs
Version : 10.2
Release : 1.el8s
Architecture : x86_64
Size : 2.6 M
Source : glusterfs-10.2-1.el8s.src.rpm
Repository : @System
From repo : centos-gluster10
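
[For the archives, two quick ways to check where that firewalld service definition is supposed to come from (untested on this particular host):

  rpm -qf /usr/lib/firewalld/services/glusterfs.xml    # on a machine that already has the file
  dnf repoquery -l glusterfs-server | grep firewalld   # list the package's files without installing it
]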
As I wrote below,
>
> It might be enough to copy /usr/lib/firewalld/services/glusterfs.xml
> (in the rpm glusterfs-server) from some other machine and put it
> either there or in /etc/firewalld/services/ . I didn't test this. Not
> sure it's better :-).
It's in the glusterfs-server rpm, not glusterfs, which would explain why the 'glusterfs' firewalld service is missing even though the glusterfs package itself is installed.
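
[For completeness, if someone does want to go that route, the sketch would be roughly the following (untested; "otherhost" stands for any machine that has glusterfs-server installed):

  scp otherhost:/usr/lib/firewalld/services/glusterfs.xml /etc/firewalld/services/
  firewall-cmd --reload
  firewall-cmd --get-services | grep -i gluster   # glusterfs should now be listed
]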
Simply disabling the gluster service from the ovirt web UI as described above fixed the
issue.
Good. Thanks for the update.
Best regards,
Thank you.
> Best regards,
>
> > Sent with Proton Mail secure email.
> >
> > ------- Original Message -------
> > On Monday, September 19th, 2022 at 2:44 AM, Yedidyah Bar David didi(a)redhat.com wrote:
> >
> > > Hi,
> >
> > > please see my reply to "[ovirt-users] Error during deployment of ovirt-engine".
> >
> > > Best regards,
> >
> > > On Mon, Sep 19, 2022 at 5:02 AM David White via Users users(a)ovirt.org wrote:
> >
> > > > I currently have a self-hosted engine that was restored from a backup of an engine that was originally in a hyperconverged state. (See https://lists.ovirt.org/archives/list/users@ovirt.org/message/APQ3XBUM34T...).
> >
> > > > This was also an upgrade from ovirt 4.4 to ovirt 4.5.
> >
> > > > There were 4 hosts in this cluster. Unfortunately, 2 of them are completely in an "Unassigned" state right now, and I don't know why. The VMs on those hosts are working fine, but I have no way to move the VMs or manage them.
> >
> > > > More to the point of this email:
> > > > I'm trying to re-deploy onto a 3rd host. I did a fresh install of Rocky Linux 8, and followed the instructions at https://ovirt.org/download/ and at https://ovirt.org/download/install_on_rhel.html, including the part there that is specific to Rocky.
> >
> > > > After installing the centos-release-ovirt45 package, I then logged into the oVirt engine web UI, went to Compute -> Hosts -> New, and have tried (and failed) many times to install / deploy to this new host.
> >
> > > > The last error in the host deploy log is the following:
> >
> > > > 2022-09-18 21:29:39 EDT - {
> > > >   "uuid" : "94b93e6a-5410-4d26-b058-d7d1db0a151e",
> > > >   "counter" : 404,
> > > >   "stdout" : "fatal: [cha2-storage.mgt.example.com]: FAILED! => {\"msg\": \"The conditional check 'cluster_switch == \\\"ovs\\\" or (ovn_central is defined and ovn_central | ipaddr)' failed. The error was: The ipaddr filter requires python's netaddr be installed on the ansible controller\\n\\nThe error appears to be in '/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/configure.yml': line 3, column 5, but may\\nbe elsewhere in the file depending on the exact syntax problem.\\n\\nThe offending line appears to be:\\n\\n- block:\\n - name: Install ovs\\n ^ here\\n\"}",
> > > >   "start_line" : 405,
> > > >   "end_line" : 406,
> > > >   "runner_ident" : "e2cbd38d-64fa-4ecd-82c6-114420ea14a4",
> > > >   "event" : "runner_on_failed",
> > > >   "pid" : 65899,
> > > >   "created" : "2022-09-19T01:29:38.983937",
> > > >   "parent_uuid" : "02113221-f1b3-920f-8bd4-00000000003d",
> > > >   "event_data" : {
> > > >     "playbook" : "ovirt-host-deploy.yml",
> > > >     "playbook_uuid" : "73a6e8f1-3836-49e1-82fd-5367b0bf4e90",
> > > >     "play" : "all",
> > > >     "play_uuid" : "02113221-f1b3-920f-8bd4-000000000006",
> > > >     "play_pattern" : "all",
> > > >     "task" : "Install ovs",
> > > >     "task_uuid" : "02113221-f1b3-920f-8bd4-00000000003d",
> > > >     "task_action" : "package",
> > > >     "task_args" : "",
> > > >     "task_path" : "/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/configure.yml:3",
> > > >     "role" : "ovirt-provider-ovn-driver",
> > > >     "host" : "cha2-storage.mgt.example.com",
> > > >     "remote_addr" : "cha2-storage.mgt.example.com",
> > > >     "res" : {
> > > >       "msg" : "The conditional check 'cluster_switch == \"ovs\" or (ovn_central is defined and ovn_central | ipaddr)' failed. The error was: The ipaddr filter requires python's netaddr be installed on the ansible controller\n\nThe error appears to be in '/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-provider-ovn-driver/tasks/configure.yml': line 3, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n- block:\n - name: Install ovs\n ^ here\n",
> > > >       "_ansible_no_log" : false
> > > >     },
> > > >     "start" : "2022-09-19T01:29:38.919334",
> > > >     "end" : "2022-09-19T01:29:38.983680",
> > > >     "duration" : 0.064346,
> > > >     "ignore_errors" : null,
> > > >     "event_loop" : null,
> > > >     "uuid" : "94b93e6a-5410-4d26-b058-d7d1db0a151e"
> > > >   }
> > > > }
> >
> > > > On the engine, I have verified that netaddr is installed. And just for kicks, I've installed as many different versions as I can find:
> >
> > > > [root@ovirt-engine1 host-deploy]# rpm -qa | grep netaddr
> > > > python38-netaddr-0.7.19-8.1.1.el8.noarch
> > > > python2-netaddr-0.7.19-8.1.1.el8.noarch
> > > > python3-netaddr-0.7.19-8.1.1.el8.noarch
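
[Closing the loop on this part for the archives: the rpm list above only proves netaddr is present for those particular python builds; what matters is the interpreter that ansible-core itself runs under on the engine. A quick check would be something like the following (the python path is illustrative; use whatever "ansible --version" reports):

  ansible --version | grep 'python version'                         # interpreter used by the ansible controller
  /usr/bin/python3.8 -c 'import netaddr; print(netaddr.__file__)'   # can that interpreter import netaddr?

In this case the deploy got past this task after downgrading ansible-core, as described further up in the thread.]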
> >
> > > > The engine is based on CentOS Stream 8 (when I moved the engine out of the hyperconverged environment, my goal was to keep things as close to the original environment as possible):
> > > > [root@ovirt-engine1 host-deploy]# cat /etc/redhat-release
> > > > CentOS Stream release 8
> >
> > > > The engine is fully up-to-date:
> > > > [root@ovirt-engine1 host-deploy]# uname -a
> > > > Linux ovirt-engine1.mgt.barredowlweb.com 4.18.0-408.el8.x86_64 #1 SMP Mon Jul 18 17:42:52 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
> >
> > > > And the engine has the following repos:
> > > > [root@ovirt-engine1 host-deploy]# yum repolist
> > > > repo id repo name
> > > > appstream CentOS Stream 8 - AppStream
> > > > baseos CentOS Stream 8 - BaseOS
> > > > centos-ceph-pacific CentOS-8-stream - Ceph Pacific
> > > > centos-gluster10 CentOS-8-stream - Gluster 10
> > > > centos-nfv-openvswitch CentOS-8 - NFV OpenvSwitch
> > > > centos-opstools CentOS-OpsTools - collectd
> > > > centos-ovirt45 CentOS Stream 8 - oVirt 4.5
> > > > extras CentOS Stream 8 - Extras
> > > > extras-common CentOS Stream 8 - Extras common packages
> > > > ovirt-45-centos-stream-openstack-yoga CentOS Stream 8 - oVirt 4.5 - OpenStack Yoga Repository
> > > > ovirt-45-upstream oVirt upstream for CentOS Stream 8 - oVirt 4.5
> > > > powertools CentOS Stream 8 - PowerTools
> >
> > > > Why does deploying to this new Rocky host keep failing?
> >
> > > > Sent with Proton Mail secure email.
> >
> >
> > > --
> > > Didi
>
>
>
>
> --
> Didi
--
Didi