
Fresh install of minimal CentOS 8.

Then deploy:
- EPEL
- Add oVirt repo: https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm

Install on all nodes:
- cockpit-ovirt-dashboard
- gluster-ansible-roles
- vdsm-gluster
- ovirt-host
- ovirt-ansible-roles
- ovirt-ansible-infra

Install on the "first node of cluster":
- ovirt-engine-appliance

Now each node is stuck with the same package conflict error (and this blocks GUI "upgrades"):

[root@medusa ~]# yum update
Last metadata expiration check: 0:55:35 ago on Wed 10 Mar 2021 08:14:22 AM EST.
Error:
 Problem 1: package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
  - package cockpit-bridge-238.1-1.el8.x86_64 conflicts with cockpit-dashboard < 233 provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package ovirt-host-4.4.1-4.el8.x86_64
  - cannot install the best update candidate for package cockpit-bridge-217-1.el8.x86_64
 Problem 2: problem with installed package ovirt-host-4.4.1-4.el8.x86_64
  - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
  - package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package cockpit-dashboard-217-1.el8.noarch
 Problem 3: package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch requires ovirt-host >= 4.4.0, but none of the providers can be installed
  - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
  - package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
  - package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
  - package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed
  - package cockpit-system-238.1-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch
  - cannot install the best update candidate for package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
  - cannot install the best update candidate for package cockpit-system-217-1.el8.noarch
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

[root@medusa ~]# yum update --allowerasing
Last metadata expiration check: 0:55:56 ago on Wed 10 Mar 2021 08:14:22 AM EST.
Dependencies resolved.
===========================================================================
 Package                    Architecture  Version        Repository   Size
===========================================================================
Upgrading:
 cockpit-bridge             x86_64        238.1-1.el8    baseos      535 k
 cockpit-system             noarch        238.1-1.el8    baseos      3.4 M
     replacing  cockpit-dashboard.noarch 217-1.el8
Removing dependent packages:
 cockpit-ovirt-dashboard    noarch        0.14.17-1.el8  @ovirt-4.4   16 M
 ovirt-host                 x86_64        4.4.1-4.el8    @ovirt-4.4   11 k
 ovirt-hosted-engine-setup  noarch        2.4.9-1.el8    @ovirt-4.4  1.3 M

Transaction Summary
===========================================================================
Upgrade  2 Packages
Remove   3 Packages

##
Initially I assumed this was because the path I was taking was not standard, but now I think this is some oVirt vs. CentOS package repo issue. Any workarounds, or a root cause to fix this repo conflict?
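The core of Problem 1 is a versioned conflict: cockpit-bridge 238.1 from BaseOS declares a conflict with any cockpit-dashboard older than 233, while the only cockpit-dashboard available is 217. As a rough illustration of the version check the resolver performs, here is a simplified sketch of RPM-style version comparison; the real algorithm is librpm's rpmvercmp, which also handles epochs, tildes, and carets, so treat this toy version as an approximation only:

```python
# Simplified RPM-style version comparison: split each string into numeric
# and alphabetic segments and compare segment by segment. This is a sketch
# of what rpmvercmp does, not a replacement for it.
import re

def rpm_vercmp(a: str, b: str) -> int:
    """Return -1, 0, or 1 comparing two RPM-ish version strings."""
    seg_a = re.findall(r"\d+|[a-zA-Z]+", a)
    seg_b = re.findall(r"\d+|[a-zA-Z]+", b)
    for x, y in zip(seg_a, seg_b):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)
        elif x.isdigit() != y.isdigit():
            # in rpm, a numeric segment sorts higher than an alphabetic one
            return 1 if x.isdigit() else -1
        if x != y:
            return 1 if x > y else -1
    return (len(seg_a) > len(seg_b)) - (len(seg_a) < len(seg_b))

# The installed dashboard (217) is below the conflict threshold (233),
# so cockpit-bridge 238.1 cannot coexist with it:
print(rpm_vercmp("217", "233"))    # -1: 217 < 233, the conflict applies
print(rpm_vercmp("238.1", "233"))  # 1: the new bridge is past the cutoff
```

Since ovirt-host 4.4.1 still requires cockpit-dashboard, and nothing newer than 217 provides it, the solver is stuck until ovirt-host drops that requirement.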

On 3/10/21 3:56 PM, penguin pages wrote:
> Fresh install of minimal CentOS8
> [...]
> Now each node is stuck with same package conflict error: (and this blocks GUI "upgrades")
> [...]
> Initially I assumed this was a path I was taking that was not standard.. but now I think this is some ovirt vs CentOS package repo issue. Any work arounds or root cause to fix this from repo conflict?
> _______________________________________________
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-leave@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/OBGQPMLYC3YEYA...
I had a similar issue. My "solution" was to add:

exclude=cockpit-*

to the BaseOS repo definition file in the /etc/yum.repos.d/ directory. All the cockpit-* packages now come from the oVirt repo.

Hope this helps
--
gb

PGP Key: http://pgp.mit.edu/
Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC DC90 B9CB 0F34
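In repo-file form, the workaround above looks like the following. The file name and repo id are assumptions (they vary between CentOS releases; check `dnf repolist` for the actual id on your system), and the `exclude=` line is the only change:

```ini
# /etc/yum.repos.d/CentOS-Linux-BaseOS.repo  (file name varies by release)
[baseos]
name=CentOS Linux $releasever - BaseOS
# ...existing mirrorlist/gpgcheck options left as-is...
# workaround: keep BaseOS from offering newer cockpit packages that
# obsolete cockpit-dashboard, which ovirt-host still requires
exclude=cockpit-*
```

Note this pins cockpit to whatever the oVirt repo ships, so it should be removed once the dependency is fixed upstream.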

On Wed, Mar 10, 2021 at 4:57 PM penguin pages <jeremey.wise@gmail.com> wrote:
> Fresh install of minimal CentOS8
> [...]
> Now each node is stuck with same package conflict error: (and this blocks GUI "upgrades")
> [...]
> Initially I assumed this was a path I was taking that was not standard.. but now I think this is some ovirt vs CentOS package repo issue. Any work arounds or root cause to fix this from repo conflict?
Hi,

I think you are the one that already reported this bug, no?
https://bugzilla.redhat.com/show_bug.cgi?id=1917011

It should be fixed in 4.4.5. You can try to work around it by installing ovirt-host from the master snapshot or 4.4-pre repos. ovirt-host itself has no content; it's just a list of dependencies. So if you take only that one package from there, and the rest from 4.4, you should be fine.

Best regards,
--
Didi
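Taking only ovirt-host from the pre-release repo, as suggested above, might look like the following. The repo id `ovirt-4.4-pre` is an assumption; list what the release rpm actually defines with `dnf repolist --all` before running it:

```shell
# Enable the pre-release repo for this one transaction only, so every
# other package keeps coming from the regular ovirt-4.4 repo.
dnf repolist --all | grep -i ovirt
dnf install --enablerepo=ovirt-4.4-pre ovirt-host
```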

I did make that post, but that was more about how converting CentOS 8 to Streams fubar'd my cluster... ya, still trying to get it back on its feet.

I have been trying to move to an IaC-based deployment, but I've kind of given up on that, as oVirt seems to really need its last steps done via the "HCI Wizard":

yum install ovirt-hosted-engine-setup

What I wish is that it would spit out an ansible playbook, so I could copy that over and run it as a playbook. Same for the "gluster" sub-wizard. This was sort of posted here: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZTLI55VFCFSK3F...

The issue is that I have some of the cluster working, but until I can trust that it is stable and can deploy and maintain VMs, I don't want to move it into production to take VMs.

Well.. I figured the package removal was the means to get rid of "upgrade pending", which would then allow me to get engine failover to start working.... but... ya.. don't do that.

How to destroy the engine:
1) yum update --allowerasing
2) reboot
3) no more engine starting.

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/htm...

Validated services look ok:

[root@thor ~]# systemctl status ovirt-ha-proxy
Unit ovirt-ha-proxy.service could not be found.
[root@thor ~]# systemctl status ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-10 14:55:17 EST; 14min ago
 Main PID: 6390 (ovirt-ha-agent)
    Tasks: 2 (limit: 1080501)
   Memory: 25.8M
   CGroup: /system.slice/ovirt-ha-agent.service
           └─6390 /usr/libexec/platform-python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Mar 10 14:55:17 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.

[root@thor ~]# systemctl status -l ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2021-03-10 14:55:17 EST; 16min ago
 Main PID: 6390 (ovirt-ha-agent)
    Tasks: 2 (limit: 1080501)
   Memory: 25.6M
   CGroup: /system.slice/ovirt-ha-agent.service
           └─6390 /usr/libexec/platform-python /usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent

Mar 10 14:55:17 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.

[root@thor ~]# journalctl -u ovirt-ha-agent
-- Logs begin at Wed 2021-03-10 14:47:34 EST, end at Wed 2021-03-10 15:12:12 EST. --
Mar 10 14:48:35 thor.penguinpages.local systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent.
Mar 10 14:48:37 thor.penguinpages.local ovirt-ha-agent[3463]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start necessary monitors
Mar 10 14:48:37 thor.penguinpages.local ovirt-ha-agent[3463]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 85, in start_monitor
    response = self._proxy.start_monitor(type, options)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request
    verbose=self.__verbose
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request
    http_conn = self.send_request(host, handler, request_body, verbose)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request
    self.send_content(connection, request_body)
  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content
    connection.endheaders(request_body)
  File "/usr/lib64/python3.6/http/client.py", line 1264, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib64/python3.6/http/client.py", line 1040, in _send_output
    self.send(msg)
  File "/usr/lib64/python3.6/http/client.py", line 978, in send
    self.connect()
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 74, in connect
    self.sock.connect(base64.b16decode(self.host))
FileNotFoundError: [Errno 2] No such file or directory

[root@thor ~]# tail /var/log/messages

There is an error rotating in /var/log/messages, but I think this is just some form of "engine is fubar":

Mar 10 15:08:59 thor journal[1454]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.notifications.Notifications ERROR [Errno 111] Connection refused
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/notifications.py", line 29, in send_email
    timeout=float(cfg["smtp-timeout"]))
  File "/usr/lib64/python3.6/smtplib.py", line 251, in __init__
    (code, msg) = self.connect(host, port)
  File "/usr/lib64/python3.6/smtplib.py", line 336, in connect
    self.sock = self._get_socket(host, port, self.timeout)
  File "/usr/lib64/python3.6/smtplib.py", line 307, in _get_socket
    self.source_address)
  File "/usr/lib64/python3.6/socket.py", line 724, in create_connection
    raise err
  File "/usr/lib64/python3.6/socket.py", line 713, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
(the same ConnectionRefusedError traceback repeats)

I guess I get to re-deploy.. again.

On Wed, Mar 10, 2021 at 10:16 PM penguin pages <jeremey.wise@gmail.com> wrote:
> well.. figured the package remove was means to get rid of "upgrade pending" which would then allow me to get engine failover to start working.... but... ya.. don't do that.
If you refer to "Use --allowerasing without fully understanding what's going to be erased", then I definitely agree - don't do that.
> How to destroy engine:
> 1) yum update --allowerasing
What did it remove? If this includes vdsm, it will definitely prevent starting the engine vm.
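One way to answer that question after the fact: dnf keeps a transaction history, so the erased packages can be listed, and the transaction can be attempted to be rolled back if the removed packages are still available in the enabled repos. A sketch, assuming the --allowerasing update was the most recent transaction:

```shell
dnf history list        # find the transaction id of the update
dnf history info last   # show exactly which packages were erased
dnf history undo last   # attempt to reinstall what was removed
```

Use the numeric transaction id from `dnf history list` instead of `last` if other transactions have run since.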
> 2) reboot
> 3) no more engine starting.
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/htm...
> Validated services look ok
> [root@thor ~]# systemctl status ovirt-ha-proxy
> Unit ovirt-ha-proxy.service could not be found.
> [...]
> Mar 10 14:48:37 thor.penguinpages.local ovirt-ha-agent[3463]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start necessary monitors
> Mar 10 14:48:37 thor.penguinpages.local ovirt-ha-agent[3463]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 85, in start_monitor
I think this happens while trying to connect to ovirt-ha-broker; you might want to check the status of that one.
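Checking the broker as suggested might look like this. The socket directory path is an assumption based on typical ovirt-hosted-engine-ha layouts; verify it against the broker configuration on the host:

```shell
systemctl status ovirt-ha-broker
journalctl -u ovirt-ha-broker --since "1 hour ago"
# the FileNotFoundError above suggests the broker's unix socket is missing:
ls -l /var/run/ovirt-hosted-engine-ha/
```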
>     response = self._proxy.start_monitor(type, options)
> [...]
>   File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 74, in connect
>     self.sock.connect(base64.b16decode(self.host))
> FileNotFoundError: [Errno 2] No such file or directory
> [root@thor ~]# tail /var/log/messages
> error rotating in /var/log/messages but I think this is just some form of "engine is fubar"
> [repeated ovirt-ha-broker notifications.Notifications ERROR tracebacks from send_email, each ending in ConnectionRefusedError: [Errno 111] Connection refused]
I think this happens while it's trying to email a notification (about the failure?). It can be ignored in itself - probably your sendmail is down.
> I guess I get to re-deploy.. again.
Good luck and best regards,
--
Didi

On Wed, Mar 10, 2021 at 7:19 PM penguin pages <jeremey.wise@gmail.com> wrote:
> I did make that post but that was more about convert to CentOS 8 to streams fubar my cluster up... ya.. still trying to get it back on its feet.
> I have been trying to move to IaC based deployment but .. kind of given up on that as oVirt seems to really need its last steps "HCI Wizard"
> yum install ovirt-hosted-engine-setup
This is just a wrapper on top of a set of ansible playbooks/roles. See also:

https://github.com/oVirt/ovirt-ansible-collection/

and specifically:

https://github.com/oVirt/ovirt-ansible-collection/blob/master/roles/hosted_e...

We also used to have code that used this directly in ovirt-system-tests, but it was broken for a long time and eventually removed. It might be revived one day; one can always hope:

https://gerrit.ovirt.org/c/ovirt-system-tests/+/113217

Best regards,
--
Didi
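Since the setup is driven by roles from the ovirt.ovirt collection, a playbook along these lines should be possible. This is a hypothetical minimal sketch: the role name and the he_* variable names follow the pattern the collection documents, but every identifier and value here is an assumption to be checked against the role's README before use:

```yaml
# Hypothetical sketch -- verify the role name and he_* variables against
# the hosted_engine_setup role README in ovirt-ansible-collection.
- hosts: first_node
  become: true
  roles:
    - role: ovirt.ovirt.hosted_engine_setup
      vars:
        he_fqdn: engine.example.com                        # assumed engine FQDN
        he_admin_password: "{{ engine_admin_password }}"   # assumed vaulted var
        he_appliance_password: "{{ appliance_root_password }}"
```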
participants (3):
- Giorgio Biacchi
- penguin pages
- Yedidyah Bar David