oVirt Engine no longer Starting

3 node cluster. Gluster for shared storage. CentOS8 Updated to CentOS 8 Streams :P -> https://bugzilla.redhat.com/show_bug.cgi?id=1911910 After several weeks .. I am really in need of direction to get this fixed. I saw several postings about oVirt package issues but not found a fix. [root@thor ~]# dnf update Last metadata expiration check: 2:54:29 ago on Fri 15 Jan 2021 06:49:16 AM EST. Error: Problem 1: package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed - package cockpit-bridge-234-1.el8.x86_64 conflicts with cockpit-dashboard < 233 provided by cockpit-dashboard-217-1.el8.noarch - cannot install the best update candidate for package ovirt-host-4.4.1-4.el8.x86_64 - cannot install the best update candidate for package cockpit-bridge-217-1.el8.x86_64 Problem 2: problem with installed package ovirt-host-4.4.1-4.el8.x86_64 - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed - package cockpit-system-234-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch - cannot install the best update candidate for package cockpit-dashboard-217-1.el8.noarch Problem 3: package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch requires ovirt-host >= 4.4.0, but none of the providers can be installed - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed - package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed - package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed - package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed - package cockpit-system-234-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch - cannot install the best update candidate for package ovirt-hosted-engine-setup-2.4.9-1.el8.noarch - cannot install the best update candidate for package cockpit-system-217-1.el8.noarch (try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) [root@thor ~]# yum install cockpit-dashboard --nobest Last metadata expiration check: 2:54:52 ago on Fri 15 Jan 2021 06:49:16 AM EST. Package cockpit-dashboard-217-1.el8.noarch is already installed. Dependencies resolved. 
Problem: problem with installed package ovirt-host-4.4.1-4.el8.x86_64 - package ovirt-host-4.4.1-4.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed - package ovirt-host-4.4.1-1.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed - package ovirt-host-4.4.1-2.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed - package ovirt-host-4.4.1-3.el8.x86_64 requires cockpit-dashboard, but none of the providers can be installed - package cockpit-system-234-1.el8.noarch obsoletes cockpit-dashboard provided by cockpit-dashboard-217-1.el8.noarch - cannot install the best candidate for the job ========================================================================================================================================================================================================================================= Package Architecture Version Repository Size ========================================================================================================================================================================================================================================= Skipping packages with broken dependencies: ovirt-host x86_64 4.4.1-1.el8 ovirt-4.4 13 k ovirt-host x86_64 4.4.1-2.el8 ovirt-4.4 13 k ovirt-host x86_64 4.4.1-3.el8 ovirt-4.4 13 k Transaction Summary ========================================================================================================================================================================================================================================= Skip 3 Packages Nothing to do. Complete! [root@thor ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 931.5G 0 disk └─WDC_WDS100T2B0B-00YS70_19106A802926 253:3 0 931.5G 0 mpath └─vdo_2926 253:5 0 4T 0 vdo /gluster_bricks/gv0 sdb 8:16 0 931.5G 0 disk └─WDC_WDS100T2B0B-00YS70_192490801828 253:4 0 931.5G 0 mpath sdc 8:32 0 477G 0 disk └─vdo_sdc 253:6 0 2.1T 0 vdo ├─gluster_vg_sdc-gluster_lv_engine 253:7 0 100G 0 lvm /gluster_bricks/engine ├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc_tmeta 253:8 0 1G 0 lvm │ └─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc-tpool 253:10 0 2T 0 lvm │ ├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc 253:11 0 2T 1 lvm │ ├─gluster_vg_sdc-gluster_lv_data 253:12 0 1000G 0 lvm /gluster_bricks/data │ └─gluster_vg_sdc-gluster_lv_vmstore 253:13 0 1000G 0 lvm /gluster_bricks/vmstore └─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc_tdata 253:9 0 2T 0 lvm └─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc-tpool 253:10 0 2T 0 lvm ├─gluster_vg_sdc-gluster_thinpool_gluster_vg_sdc 253:11 0 2T 1 lvm ├─gluster_vg_sdc-gluster_lv_data 253:12 0 1000G 0 lvm /gluster_bricks/data └─gluster_vg_sdc-gluster_lv_vmstore 253:13 0 1000G 0 lvm /gluster_bricks/vmstore sdd 8:48 1 58.8G 0 disk ├─sdd1 8:49 1 1G 0 part /boot └─sdd2 8:50 1 57.8G 0 part ├─cl-root 253:0 0 36.1G 0 lvm / ├─cl-swap 253:1 0 4G 0 lvm [SWAP] └─cl-home 253:2 0 17.6G 0 lvm /home [root@thor ~]# mount |grep engine /dev/mapper/gluster_vg_sdc-gluster_lv_engine on /gluster_bricks/engine type xfs (rw,noatime,nodiratime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota,_netdev,x-systemd.requires=vdo.service) thorst.penguinpages.local:/engine on /media/engine type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072,_netdev) thorst.penguinpages.local:/engine on /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072,_netdev,x-systemd.device-timeout=0) [root@thor ~]# [root@thor ~]# tail -50 /var/log/messages Jan 15 09:46:43 thor platform-python[28088]: detected unhandled Python exception in '/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker' Jan 15 09:46:43 thor abrt-server[28116]: Not saving repeating crash in '/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker' Jan 15 09:46:43 thor systemd[1]: ovirt-ha-broker.service: Main process exited, code=exited, status=1/FAILURE Jan 15 09:46:43 thor systemd[1]: ovirt-ha-broker.service: Failed with result 'exit-code'. Jan 15 09:46:43 thor systemd[1]: ovirt-ha-broker.service: Service RestartSec=100ms expired, scheduling restart. Jan 15 09:46:43 thor systemd[1]: ovirt-ha-broker.service: Scheduled restart job, restart counter is at 241. Jan 15 09:46:43 thor systemd[1]: Stopped oVirt Hosted Engine High Availability Communications Broker. Jan 15 09:46:43 thor systemd[1]: Started oVirt Hosted Engine High Availability Communications Broker. Jan 15 09:46:45 thor systemd[1]: Started Session c448 of user root. Jan 15 09:46:45 thor systemd[1]: session-c448.scope: Succeeded. Jan 15 09:46:45 thor upsmon[2232]: Poll UPS [nutmonitor@localhost] failed - [nutmonitor] does not exist on server localhost Jan 15 09:46:48 thor systemd[1]: ovirt-ha-agent.service: Service RestartSec=10s expired, scheduling restart. Jan 15 09:46:48 thor systemd[1]: ovirt-ha-agent.service: Scheduled restart job, restart counter is at 211. Jan 15 09:46:48 thor systemd[1]: Stopped oVirt Hosted Engine High Availability Monitoring Agent. Jan 15 09:46:48 thor systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent. Jan 15 09:46:48 thor journal[28118]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Failed initializing the broker: [Errno 107] Transport endpoint is not connected: '/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/ha_agent/hosted-engine.metadata' Jan 15 09:46:48 thor journal[28118]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Traceback (most recent call last):#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", line 64, in run#012 self._storage_broker_instance = self._get_storage_broker()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", line 143, in _get_storage_broker#012 return storage_broker.StorageBroker()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 97, in __init__#012 self._backend.connect()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 408, in connect#012 self._check_symlinks(self._storage_path, volume.path, service_link)#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 105, in _check_symlinks#012 os.unlink(service_link)#012OSError: [Errno 107] Transport endpoint is not connected: '/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/ha_agent/hosted-engine.metadata' Jan 15 09:46:48 thor journal[28118]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Trying to restart the broker Jan 15 09:46:48 thor platform-python[28118]: detected unhandled Python exception in '/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker' Jan 15 09:46:48 thor systemd[1]: ovirt-ha-broker.service: Main process exited, code=exited, 
status=1/FAILURE Jan 15 09:46:48 thor systemd[1]: ovirt-ha-broker.service: Failed with result 'exit-code'. Jan 15 09:46:48 thor abrt-server[28144]: Deleting problem directory Python3-2021-01-15-09:46:48-28118 (dup of Python3-2020-09-18-14:25:13-1363) Jan 15 09:46:48 thor systemd[1]: ovirt-ha-broker.service: Service RestartSec=100ms expired, scheduling restart. Jan 15 09:46:48 thor systemd[1]: ovirt-ha-broker.service: Scheduled restart job, restart counter is at 242. Jan 15 09:46:48 thor systemd[1]: Stopped oVirt Hosted Engine High Availability Communications Broker. Jan 15 09:46:48 thor systemd[1]: Started oVirt Hosted Engine High Availability Communications Broker. Jan 15 09:46:48 thor journal[28140]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start necessary monitors Jan 15 09:46:48 thor journal[28140]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 85, in start_monitor#012 response = self._proxy.start_monitor(type, options)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__#012 return self.__send(self.__name, args)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request#012 verbose=self.__verbose#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request#012 return self.single_request(host, handler, request_body, verbose)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request#012 http_conn = self.send_request(host, handler, request_body, verbose)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request#012 self.send_content(connection, request_body)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content#012 connection.endheaders(request_body)#012 File "/usr/lib64/python3.6/http/client.py", line 1264, in endheaders#012 self._send_output(message_body, encode_chunked=encode_chunked)#012 File "/usr/lib64/python3.6/http/client.py", line 1040, in _send_output#012 self.send(msg)#012 File "/usr/lib64/python3.6/http/client.py", line 978, in send#012 self.connect()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 74, in connect#012 self.sock.connect(base64.b16decode(self.host))#012FileNotFoundError: [Errno 2] No such file or directory#012#012During handling of the above exception, another exception occurred:#012#012Traceback (most recent call last):#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent#012 return action(he)#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper#012 return he.start_monitoring()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 437, in start_monitoring#012 self._initialize_broker()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 561, in _initialize_broker#012 m.get('options', {}))#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 91, in start_monitor#012 ).format(t=type, o=options, e=e)#012ovirt_hosted_engine_ha.lib.exceptions.RequestError: brokerlink - failed to start monitor via ovirt-ha-broker: [Errno 2] No such file or directory, [monitor: 'network', options: {'addr': '172.16.100.1', 'network_test': 'dns', 'tcp_t_address': '', 'tcp_t_port': ''}] Jan 15 09:46:48 thor journal[28140]: 
ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent Jan 15 09:46:48 thor abrt-server[28144]: /bin/sh: reporter-systemd-journal: command not found Jan 15 09:46:48 thor systemd[1]: ovirt-ha-agent.service: Main process exited, code=exited, status=157/n/a Jan 15 09:46:48 thor systemd[1]: ovirt-ha-agent.service: Failed with result 'exit-code'. Jan 15 09:46:49 thor vdsm[8421]: WARN Failed to retrieve Hosted Engine HA info, is Hosted Engine setup finished? Jan 15 09:46:50 thor systemd[1]: Started Session c449 of user root. Jan 15 09:46:50 thor systemd[1]: session-c449.scope: Succeeded. Jan 15 09:46:50 thor upsmon[2232]: Poll UPS [nutmonitor@localhost] failed - [nutmonitor] does not exist on server localhost Jan 15 09:46:53 thor journal[28165]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Failed initializing the broker: [Errno 107] Transport endpoint is not connected: '/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/ha_agent/hosted-engine.metadata' Jan 15 09:46:53 thor journal[28165]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Traceback (most recent call last):#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", line 64, in run#012 self._storage_broker_instance = self._get_storage_broker()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/broker.py", line 143, in _get_storage_broker#012 return storage_broker.StorageBroker()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 97, in __init__#012 self._backend.connect()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 408, in connect#012 self._check_symlinks(self._storage_path, volume.path, service_link)#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/storage_backends.py", line 105, in _check_symlinks#012 os.unlink(service_link)#012OSError: [Errno 107] Transport endpoint is not connected: '/rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/ha_agent/hosted-engine.metadata' Jan 15 09:46:53 thor journal[28165]: ovirt-ha-broker ovirt_hosted_engine_ha.broker.broker.Broker ERROR Trying to restart the broker Jan 15 09:46:53 thor platform-python[28165]: detected unhandled Python exception in '/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker' Jan 15 09:46:53 thor abrt-server[28199]: Not saving repeating crash in '/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker' Jan 15 09:46:53 thor systemd[1]: ovirt-ha-broker.service: Main process exited, code=exited, status=1/FAILURE Jan 15 09:46:53 thor systemd[1]: ovirt-ha-broker.service: Failed with result 'exit-code'. Jan 15 09:46:53 thor systemd[1]: ovirt-ha-broker.service: Service RestartSec=100ms expired, scheduling restart. Jan 15 09:46:53 thor systemd[1]: ovirt-ha-broker.service: Scheduled restart job, restart counter is at 243. Jan 15 09:46:53 thor systemd[1]: Stopped oVirt Hosted Engine High Availability Communications Broker. Jan 15 09:46:53 thor systemd[1]: Started oVirt Hosted Engine High Availability Communications Broker. Jan 15 09:46:55 thor systemd[1]: Started Session c450 of user root. Jan 15 09:46:55 thor systemd[1]: session-c450.scope: Succeeded. Jan 15 09:46:55 thor upsmon[2232]: Poll UPS [nutmonitor@localhost] failed - [nutmonitor] does not exist on server localhost Questions: 1) I have two important VMs that have snapshots that I need to boot up. 
Is there a means, with an HCI configuration, to manually start the VMs without the oVirt engine being up? 2) Is there a means to debug why the engine is failing to start so I can repair it (I hate reloading as the only fix for systems)? 3) Is there a means to re-run the HCI setup wizard, but re-use the existing "engine" volume and so retain the VMs and templates?
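A side note on the broker traceback above: "Transport endpoint is not connected" on a FUSE path usually means the GlusterFS mount of the engine volume has died on that host. A minimal sketch of remounting it, assuming the mount point shown in the mount output above and a healthy gluster volume (check that first):

# check the volume before touching the mount
gluster volume status engine
# lazily unmount the dead FUSE mount and mount it again
umount -l /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine
mount -t glusterfs thorst.penguinpages.local:/engine /rhev/data-center/mnt/glusterSD/thorst.penguinpages.local:_engine
# let the HA services retry against the fresh mount
systemctl restart ovirt-ha-broker ovirt-ha-agent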

Update: [root@thor ~]# dnf update --allowerasing Last metadata expiration check: 0:00:09 ago on Fri 15 Jan 2021 10:02:05 AM EST. Dependencies resolved. ========================================================================================================================================================================================================================================= Package Architecture Version Repository Size ========================================================================================================================================================================================================================================= Upgrading: cockpit-bridge x86_64 234-1.el8 baseos 597 k cockpit-system noarch 234-1.el8 baseos 3.1 M replacing cockpit-dashboard.noarch 217-1.el8 Removing dependent packages: cockpit-ovirt-dashboard noarch 0.14.17-1.el8 @ovirt-4.4 16 M ovirt-host x86_64 4.4.1-4.el8 @ovirt-4.4 11 k ovirt-hosted-engine-setup noarch 2.4.9-1.el8 @ovirt-4.4 1.3 M Transaction Summary ========================================================================================================================================================================================================================================= Upgrade 2 Packages Remove 3 Packages Total download size: 3.7 M Is this ok [y/N]: y Downloading Packages: (1/2): cockpit-bridge-234-1.el8.x86_64.rpm 160 kB/s | 597 kB 00:03 (2/2): cockpit-system-234-1.el8.noarch.rpm 746 kB/s | 3.1 MB 00:04 ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 499 kB/s | 3.7 MB 00:07 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Upgrading : cockpit-bridge-234-1.el8.x86_64 1/8 Upgrading : cockpit-system-234-1.el8.noarch 2/8 Erasing : ovirt-host-4.4.1-4.el8.x86_64 3/8 Obsoleting : cockpit-dashboard-217-1.el8.noarch 4/8 Cleanup : cockpit-system-217-1.el8.noarch 5/8 Erasing : cockpit-ovirt-dashboard-0.14.17-1.el8.noarch 6/8 Erasing : ovirt-hosted-engine-setup-2.4.9-1.el8.noarch 7/8 Cleanup : cockpit-bridge-217-1.el8.x86_64 8/8 Running scriptlet: cockpit-bridge-217-1.el8.x86_64 8/8 Verifying : cockpit-bridge-234-1.el8.x86_64 1/8 Verifying : cockpit-bridge-217-1.el8.x86_64 2/8 Verifying : cockpit-system-234-1.el8.noarch 3/8 Verifying : cockpit-system-217-1.el8.noarch 4/8 Verifying : cockpit-dashboard-217-1.el8.noarch 5/8 Verifying : cockpit-ovirt-dashboard-0.14.17-1.el8.noarch 6/8 Verifying : ovirt-host-4.4.1-4.el8.x86_64 7/8 Verifying : ovirt-hosted-engine-setup-2.4.9-1.el8.noarch 8/8 Installed products updated. Upgraded: cockpit-bridge-234-1.el8.x86_64 cockpit-system-234-1.el8.noarch Removed: cockpit-ovirt-dashboard-0.14.17-1.el8.noarch ovirt-host-4.4.1-4.el8.x86_64 ovirt-hosted-engine-setup-2.4.9-1.el8.noarch Complete! [root@thor ~]# reboot ############## This cleaned up... I think.. package issues.. 
but engine still fubar [root@thor ~]# rpm -qa |grep ovirt python3-ovirt-setup-lib-1.3.2-1.el8.noarch ovirt-provider-ovn-driver-1.2.33-1.el8.noarch ovirt-vmconsole-host-1.0.9-1.el8.noarch ovirt-imageio-common-2.1.1-1.el8.x86_64 ovirt-openvswitch-2.11-0.2020061801.el8.noarch ovirt-imageio-client-2.1.1-1.el8.x86_64 python3-ovirt-engine-sdk4-4.4.8-1.el8.x86_64 ovirt-python-openvswitch-2.11-0.2020061801.el8.noarch ovirt-imageio-daemon-2.1.1-1.el8.x86_64 ovirt-release44-4.4.4-1.el8.noarch ovirt-openvswitch-ovn-2.11-0.2020061801.el8.noarch ovirt-openvswitch-ovn-common-2.11-0.2020061801.el8.noarch ovirt-ansible-collection-1.2.4-1.el8.noarch ovirt-hosted-engine-ha-2.4.5-1.el8.noarch ovirt-openvswitch-ovn-host-2.11-0.2020061801.el8.noarch ovirt-host-dependencies-4.4.1-4.el8.x86_64 ovirt-vmconsole-1.0.9-1.el8.noarch [root@thor ~]# dnf update Last metadata expiration check: 0:06:49 ago on Fri 15 Jan 2021 10:02:05 AM EST. Dependencies resolved. Nothing to do. Complete! [root@thor ~]#

Ouch. Re: starting VMs without oVirt, you can get access to the VM disks locally using guestfish; I've used that before to fix VMs after a server-room air-conditioning failure. (It's not really a way to run the VMs, but more for access when you can't run the VM.) With export LIBGUESTFS_BACKEND=direct set, you should be able to copy out the files you need before you manage to solve your dependency issues. You can also use this method to inspect log files on the engine VM if needed (see the guestfish sketch after the quoted update below)... Kind Regards, Mike On 15/01/2021 15:09, penguin pages wrote:
Update:
[root@thor ~]# dnf update --allowerasing Last metadata expiration check: 0:00:09 ago on Fri 15 Jan 2021 10:02:05 AM EST. Dependencies resolved. ========================================================================================================================================================================================================================================= Package Architecture Version Repository Size ========================================================================================================================================================================================================================================= Upgrading: cockpit-bridge x86_64 234-1.el8 baseos 597 k cockpit-system noarch 234-1.el8 baseos 3.1 M replacing cockpit-dashboard.noarch 217-1.el8 Removing dependent packages: cockpit-ovirt-dashboard noarch 0.14.17-1.el8 @ovirt-4.4 16 M ovirt-host x86_64 4.4.1-4.el8 @ovirt-4.4 11 k ovirt-hosted-engine-setup noarch 2.4.9-1.el8 @ovirt-4.4 1.3 M
Transaction Summary ========================================================================================================================================================================================================================================= Upgrade 2 Packages Remove 3 Packages
Total download size: 3.7 M Is this ok [y/N]: y Downloading Packages: (1/2): cockpit-bridge-234-1.el8.x86_64.rpm 160 kB/s | 597 kB 00:03 (2/2): cockpit-system-234-1.el8.noarch.rpm 746 kB/s | 3.1 MB 00:04 ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 499 kB/s | 3.7 MB 00:07 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Upgrading : cockpit-bridge-234-1.el8.x86_64 1/8 Upgrading : cockpit-system-234-1.el8.noarch 2/8 Erasing : ovirt-host-4.4.1-4.el8.x86_64 3/8 Obsoleting : cockpit-dashboard-217-1.el8.noarch 4/8 Cleanup : cockpit-system-217-1.el8.noarch 5/8 Erasing : cockpit-ovirt-dashboard-0.14.17-1.el8.noarch 6/8 Erasing : ovirt-hosted-engine-setup-2.4.9-1.el8.noarch 7/8 Cleanup : cockpit-bridge-217-1.el8.x86_64 8/8 Running scriptlet: cockpit-bridge-217-1.el8.x86_64 8/8 Verifying : cockpit-bridge-234-1.el8.x86_64 1/8 Verifying : cockpit-bridge-217-1.el8.x86_64 2/8 Verifying : cockpit-system-234-1.el8.noarch 3/8 Verifying : cockpit-system-217-1.el8.noarch 4/8 Verifying : cockpit-dashboard-217-1.el8.noarch 5/8 Verifying : cockpit-ovirt-dashboard-0.14.17-1.el8.noarch 6/8 Verifying : ovirt-host-4.4.1-4.el8.x86_64 7/8 Verifying : ovirt-hosted-engine-setup-2.4.9-1.el8.noarch 8/8 Installed products updated.
Upgraded: cockpit-bridge-234-1.el8.x86_64 cockpit-system-234-1.el8.noarch
Removed: cockpit-ovirt-dashboard-0.14.17-1.el8.noarch ovirt-host-4.4.1-4.el8.x86_64 ovirt-hosted-engine-setup-2.4.9-1.el8.noarch
Complete! [root@thor ~]# reboot ##############
This cleaned up... I think.. package issues.. but engine still fubar
[root@thor ~]# rpm -qa |grep ovirt python3-ovirt-setup-lib-1.3.2-1.el8.noarch ovirt-provider-ovn-driver-1.2.33-1.el8.noarch ovirt-vmconsole-host-1.0.9-1.el8.noarch ovirt-imageio-common-2.1.1-1.el8.x86_64 ovirt-openvswitch-2.11-0.2020061801.el8.noarch ovirt-imageio-client-2.1.1-1.el8.x86_64 python3-ovirt-engine-sdk4-4.4.8-1.el8.x86_64 ovirt-python-openvswitch-2.11-0.2020061801.el8.noarch ovirt-imageio-daemon-2.1.1-1.el8.x86_64 ovirt-release44-4.4.4-1.el8.noarch ovirt-openvswitch-ovn-2.11-0.2020061801.el8.noarch ovirt-openvswitch-ovn-common-2.11-0.2020061801.el8.noarch ovirt-ansible-collection-1.2.4-1.el8.noarch ovirt-hosted-engine-ha-2.4.5-1.el8.noarch ovirt-openvswitch-ovn-host-2.11-0.2020061801.el8.noarch ovirt-host-dependencies-4.4.1-4.el8.x86_64 ovirt-vmconsole-1.0.9-1.el8.noarch [root@thor ~]# dnf update Last metadata expiration check: 0:06:49 ago on Fri 15 Jan 2021 10:02:05 AM EST. Dependencies resolved. Nothing to do. Complete! [root@thor ~]# _______________________________________________ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-leave@ovirt.org Privacy Statement: https://www.ovirt.org/privacy-policy.html oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/ESROYCXLX6ACWE...
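For reference, a minimal guestfish sketch along the lines Mike suggests, for pulling files out of a VM disk while the engine is down (the image path is illustrative - point it at the actual qcow2 on the Gluster mount - and keep it read-only):

mkdir -p /tmp/ns01-files
export LIBGUESTFS_BACKEND=direct
guestfish --ro -a /media/vmstore/ns01.qcow2 -i
# -i inspects the guest and mounts its filesystems read-only
><fs> ls /var/log
><fs> copy-out /var/log/messages /tmp/ns01-files
><fs> exit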

Thanks for the reply.. it seems guestfish is a tool I need to do some RTFM on. This would, I think, allow me to read in a disk from the "engine" storage and manipulate files within it. But is there a way to just start the VMs? I guess I jumped from "old school virsh" to relying on the oVirt GUI... and need to brush up on tools that let me debug when the engine is down.

[root@thor ~]# virsh list --all
Please enter your authentication name: admin
Please enter your password:
error: failed to connect to the hypervisor
error: authentication failed: authentication failed

<<< and I only ever use ONE password for all systems / accounts.... but I think virsh has been deprecated... so maybe this is why >>>

I am currently poking around with:

[root@thor ~]# virt-
virt-admin virt-clone virt-diff virt-host-validate virt-ls virt-resize virt-tar-in virt-xml
virt-alignment-scan virt-copy-in virt-edit virt-index-validate virt-make-fs virt-sanlock-cleanup virt-tar-out virt-xml-validate
virt-builder virt-copy-out virt-filesystems virt-inspector virt-pki-validate virt-sparsify virt-v2v
virt-builder-repository virt-customize virt-format virt-install virt-qemu-run virt-sysprep virt-v2v-copy-to-local
virt-cat virt-df virt-get-kernel virt-log virt-rescue virt-tail virt-what
[root@thor ~]#

Does anyone have an example of: 1) listing VMs 2) starting a VM named "foo"?
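A minimal sketch of both, using the vdsm-generated libvirt auth file instead of a SASL password (the VM name "foo" is a placeholder; this assumes /etc/ovirt-hosted-engine/virsh_auth.conf is present on the host, as it normally is on a hosted-engine host):

# 1) list VMs known to libvirt on this host
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf list --all
# 2) start a VM named "foo"
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf start foo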

Seems a fresh cup of coffee is helping. Post-Streams-update fubar issues:

# Fix dependency issues
yum update --allowerasing

# List VMs read-only and bypass the password issue.. uh.. Bueller... I know it has VMs..
[root@odin ~]# virsh --readonly list
 Id   Name   State
--------------------

# Set a password so virsh with the admin account works
[root@odin ~]# saslpasswd2 -a libvirt admin
Password:
Again (for verification):

[root@odin ~]# virsh list --all
Please enter your authentication name: admin
Please enter your password:
 Id   Name                State
------------------------------------
 -    HostedEngineLocal   shut off

[root@odin ~]# virsh start HostedEngineLocal
Please enter your authentication name: admin
Please enter your password:
error: Failed to start domain HostedEngineLocal
error: Requested operation is not valid: network 'default' is not active
[root@odin ~]#

.... Now looking into the OVS side. But I am game for other suggestions, as this seems like a bit of a hack to get it working.

[root@medusa ~]# virsh net-list Please enter your authentication name: admin Please enter your password: Name State Autostart Persistent ------------------------------------------------ ;vdsmdummy; active no no # Hmm.. so not sure with oVirt this is expected.. but the defined networks I use are still present.. [root@medusa ~]# cat /var/lib/vdsm/persistence/netconf/nets/ 101_Storage 102_DMZ ovirtmgmt Storage # The one that the ovirt engine is bound to is the default "ovirtmgmt" named one [root@medusa ~]# cat /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt { "netmask": "255.255.255.0", "ipv6autoconf": false, "nic": "enp0s29u1u4", "bridged": true, "ipaddr": "172.16.100.103", "defaultRoute": true, "dhcpv6": false, "gateway": "172.16.100.1", "mtu": 1500, "switch": "legacy", "stp": false, "bootproto": "none", "nameservers": [ "172.16.100.40", "8.8.8.8" ] } [root@medusa ~]# # Looks fine to me... [root@medusa ~]# virsh net-start ovirtmgmt Please enter your authentication name: admin Please enter your password: error: failed to get network 'ovirtmgmt' error: Network not found: no network with matching name 'ovirtmgmt' [root@medusa ~]# ... back to googling...
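One note on the naming: vdsm registers its networks in libvirt with a "vdsm-" prefix, so the libvirt-side name to look for is vdsm-ovirtmgmt, not ovirtmgmt. A hedged check (the assumption here is that restarting vdsmd re-registers the definitions from the persisted config shown above; that is an assumption, not a guarantee):

# read-only connection, no auth prompt needed
virsh -r net-list --all
systemctl restart vdsmd
virsh -r net-list --all    # look for vdsm-ovirtmgmt here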

Maybe this is the rathole cause? [root@medusa system]# systemctl status vdsmd.service ● vdsmd.service - Virtual Desktop Server Manager Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2021-01-15 10:53:56 EST; 5s ago Process: 32306 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS) Main PID: 32364 (vdsmd) Tasks: 77 (limit: 410161) Memory: 77.1M CGroup: /system.slice/vdsmd.service ├─32364 /usr/bin/python3 /usr/share/vdsm/vdsmd ├─32480 /usr/libexec/ioprocess --read-pipe-fd 44 --write-pipe-fd 43 --max-threads 10 --max-queued-requests 10 ├─32488 /usr/libexec/ioprocess --read-pipe-fd 50 --write-pipe-fd 49 --max-threads 10 --max-queued-requests 10 ├─32494 /usr/libexec/ioprocess --read-pipe-fd 55 --write-pipe-fd 54 --max-threads 10 --max-queued-requests 10 ├─32501 /usr/libexec/ioprocess --read-pipe-fd 61 --write-pipe-fd 60 --max-threads 10 --max-queued-requests 10 └─32514 /usr/libexec/ioprocess --read-pipe-fd 65 --write-pipe-fd 61 --max-threads 10 --max-queued-requests 10 Jan 15 10:53:55 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: Running nwfilter Jan 15 10:53:55 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: Running dummybr Jan 15 10:53:56 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: Running tune_system Jan 15 10:53:56 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: Running test_space Jan 15 10:53:56 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: Running test_lo Jan 15 10:53:56 medusa.penguinpages.local systemd[1]: Started Virtual Desktop Server Manager. Jan 15 10:53:57 medusa.penguinpages.local vdsm[32364]: WARN MOM not available. Error: [Errno 111] Connection refused Jan 15 10:53:57 medusa.penguinpages.local vdsm[32364]: WARN MOM not available, KSM stats will be missing. Error: Jan 15 10:53:57 medusa.penguinpages.local vdsm[32364]: WARN Failed to retrieve Hosted Engine HA info, is Hosted Engine setup finished? Jan 15 10:53:59 medusa.penguinpages.local vdsm[32364]: WARN Not ready yet, ignoring event '|virt|VM_status|69ab4f82-1a53-42c8-afca-210a3a2715f1' args={'69ab4f82-1a53-42c8-afca-210a3a2715f1': {'status': 'Down', 'vmId': '69ab4f82-1a53> [root@medusa system]# I googled around and the hits talk about re-running engine.. is their some kind of flow diagram of how to get oVirt back on its feet if it dies like this? I feel like I am poking in the dark here.

So only two things that jump out are just 1) ovirt-ha-agent not starting... back to python sillyness that I have no idea on debug [root@medusa ~]# systemctl status ovirt-ha-agent.service ● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled) Active: activating (auto-restart) (Result: exit-code) since Fri 2021-01-15 11:54:52 EST; 6s ago Process: 16116 ExecStart=/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent (code=exited, status=157) Main PID: 16116 (code=exited, status=157) [root@medusa ~]# tail /var/log/messages Jan 15 11:55:02 medusa systemd[1]: Started oVirt Hosted Engine High Availability Monitoring Agent. Jan 15 11:55:02 medusa journal[16137]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start necessary monitors Jan 15 11:55:02 medusa journal[16137]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 85, in start_monitor#012 response = self._proxy.start_monitor(type, options)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in __call__#012 return self.__send(self.__name, args)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request#012 verbose=self.__verbose#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1154, in request#012 return self.single_request(host, handler, request_body, verbose)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in single_request#012 http_conn = self.send_request(host, handler, request_body, verbose)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1279, in send_request#012 self.send_content(connection, request_body)#012 File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content#012 connection.endheaders(request_body)#012 File "/usr/lib64/python3.6/http/client.py", line 1264, in endheaders#012 self._send_output(message_body, encode_chunked=encode_chunked)#012 File "/usr/lib64/python3.6/http/client.py", line 1040, in _send_output#012 self.send(msg)#012 File "/usr/lib64/python3.6/http/client.py", line 978, in send#012 self.connect()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 74, in connect#012 self.sock.connect(base64.b16decode(self.host))#012FileNotFoundError: [Errno 2] No such file or directory#012#012During handling of the above exception, another exception occurred:#012#012Traceback (most recent call last):#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent#012 return action(he)#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper#012 return he.start_monitoring()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 437, in start_monitoring#012 self._initialize_broker()#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 561, in _initialize_broker#012 m.get('options', {}))#012 File "/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 91, in start_monitor#012 ).format(t=type, o=options, e=e)#012ovirt_hosted_engine_ha.lib.exceptions.RequestError: brokerlink - failed to start monitor via ovirt-ha-broker: [Errno 2] No such file or directory, [monitor: 'network', options: {'addr': '172.16.100.1', 'network_test': 'dns', 
'tcp_t_address': '', 'tcp_t_port': ''}] Jan 15 11:55:02 medusa journal[16137]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent Jan 15 11:55:02 medusa systemd[1]: ovirt-ha-agent.service: Main process exited, code=exited, status=157/n/a Jan 15 11:55:02 medusa systemd[1]: ovirt-ha-agent.service: Failed with result 'exit-code'. Jan 15 11:55:05 medusa upsmon[1530]: Poll UPS [nutmonitor@172.16.100.102] failed - [nutmonitor] does not exist on server 172.16.100.102 Jan 15 11:55:06 medusa vdsm[14589]: WARN unhandled write event Jan 15 11:55:08 medusa vdsm[14589]: WARN unhandled close event Jan 15 11:55:10 medusa upsmon[1530]: Poll UPS [nutmonitor@172.16.100.102] failed - [nutmonitor] does not exist on server 172.16.100.102 2) Notes about vdsmd host engine "setup not finished"... but this may be issue of ha-agent as source [root@medusa ~]# systemctl status vdsmd.service ● vdsmd.service - Virtual Desktop Server Manager Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2021-01-15 11:49:27 EST; 6min ago Main PID: 14589 (vdsmd) Tasks: 72 (limit: 410161) Memory: 77.8M CGroup: /system.slice/vdsmd.service ├─14589 /usr/bin/python3 /usr/share/vdsm/vdsmd ├─14686 /usr/libexec/ioprocess --read-pipe-fd 43 --write-pipe-fd 42 --max-threads 10 --max-queued-requests 10 ├─14691 /usr/libexec/ioprocess --read-pipe-fd 46 --write-pipe-fd 43 --max-threads 10 --max-queued-requests 10 ├─14698 /usr/libexec/ioprocess --read-pipe-fd 51 --write-pipe-fd 50 --max-threads 10 --max-queued-requests 10 ├─14705 /usr/libexec/ioprocess --read-pipe-fd 57 --write-pipe-fd 56 --max-threads 10 --max-queued-requests 10 └─14717 /usr/libexec/ioprocess --read-pipe-fd 64 --write-pipe-fd 63 --max-threads 10 --max-queued-requests 10 Jan 15 11:55:43 medusa.penguinpages.local vdsm[14589]: WARN Failed to retrieve Hosted Engine HA info, is Hosted Engine setup finished? Jan 15 11:55:45 medusa.penguinpages.local vdsm[14589]: WARN unhandled close event Jan 15 11:55:55 medusa.penguinpages.local vdsm[14589]: WARN unhandled write event Jan 15 11:55:57 medusa.penguinpages.local vdsm[14589]: WARN unhandled close event Jan 15 11:55:58 medusa.penguinpages.local vdsm[14589]: WARN Failed to retrieve Hosted Engine HA info, is Hosted Engine setup finished? Jan 15 11:56:07 medusa.penguinpages.local vdsm[14589]: WARN unhandled write event Jan 15 11:56:09 medusa.penguinpages.local vdsm[14589]: WARN unhandled close event Jan 15 11:56:14 medusa.penguinpages.local vdsm[14589]: WARN Failed to retrieve Hosted Engine HA info, is Hosted Engine setup finished? Jan 15 11:56:19 medusa.penguinpages.local vdsm[14589]: WARN unhandled write event Jan 15 11:56:21 medusa.penguinpages.local vdsm[14589]: WARN unhandled close event [root@medusa ~]# Is their a command line means to tell the engine to start.. or to start a virtual machine directly?
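For the engine itself, the hosted-engine CLI is the usual command-line entry point. It depends on ovirt-ha-agent/ovirt-ha-broker being healthy, which is exactly what is failing here, so treat this as the target state rather than a guaranteed fix:

hosted-engine --vm-status     # what the HA agents think is going on
hosted-engine --vm-start      # ask the HA stack to start the engine VM
hosted-engine --console       # console into the engine VM once it is running
# for ordinary VMs, vdsm-client can at least show what vdsm knows about
vdsm-client Host getVMList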

I found this document, which was useful for explaining some details on how to debug and the roles involved: https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf But I am still stuck with the engine not starting.

Questions: 1) I have two important VMs that have snapshots that I need to boot up. Is there a means, with an HCI configuration, to manually start the VMs without the oVirt engine being up?

What worked for me was:
1) Start a VM via "virsh". Define a virsh alias:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Check the host's vdsm.log where the VM was last started - you will find the VM's xml inside. Copy the whole xml and use virsh to define the VM: "virsh define myVM.xml && virsh start myVM"
2) vdsm-client most probably can start VMs even when the engine is down.

2) Is there a means to debug why the engine is failing to start, so it can be repaired (I hate reloading as the only fix for systems)?

You can use "hosted-engine" to start the HostedEngine VM in paused mode. Then you can connect over spice/vnc and unpause the VM. Booting the HostedEngine VM from DVD is a little bit harder. You will need to get the HE's xml and edit it to point to the DVD. Once you have the altered HE config, you can define and start.
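A hedged sketch of that paused-start flow (the console password is temporary and chosen on the spot; HostedEngine is the domain name libvirt uses for the engine VM):

hosted-engine --vm-start-paused
hosted-engine --add-console-password --password=SomeTempPass
hosted-engine --console
# once connected and the boot problem is understood, resume it:
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine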
3) Is there a means to re-run the HCI setup wizard, but re-use the existing "engine" volume and so retain the VMs and templates?

You are not expected to mix the HostedEngine and other VMs on the same storage domain (gluster volume).
Best Regards, Strahil Nikolov

On Fri, Jan 15, 2021, 20:20 Strahil Nikolov via Users <users@ovirt.org> wrote:
Questions: 1) I have two important VMs that have snapshots that I need to boot up. Is there a means, with an HCI configuration, to manually start the VMs without the oVirt engine being up?

What worked for me was:
1) Start a VM via "virsh". Define a virsh alias:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Check the host's vdsm.log where the VM was last started - you will find the VM's xml inside. Copy the whole xml and use virsh to define the VM: "virsh define myVM.xml && virsh start myVM"
2) vdsm-client most probably can start VMs even when the engine is down.

If you cannot find the xml file of the VM then you can use virt-install as if you were running plain KVM.
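A hedged virt-install sketch for that fallback, importing an existing disk instead of doing a fresh install (name, memory, os-variant and paths are illustrative; the ovirtmgmt bridge must exist on the host):

virt-install \
  --name ns01 \
  --memory 4096 \
  --vcpus 2 \
  --disk path=/media/vmstore/ns01.qcow2,bus=virtio \
  --import \
  --os-variant rhel8.2 \
  --network bridge=ovirtmgmt,model=virtio \
  --graphics vnc \
  --noautoconsole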
2) Is there a means to debug why the engine is failing to start, so it can be repaired (I hate reloading as the only fix for systems)?

You can use "hosted-engine" to start the HostedEngine VM in paused mode. Then you can connect over spice/vnc and unpause the VM. Booting the HostedEngine VM from DVD is a little bit harder. You will need to get the HE's xml and edit it to point to the DVD. Once you have the altered HE config, you can define and start.

3) Is there a means to re-run the HCI setup wizard, but re-use the existing "engine" volume and so retain the VMs and templates?

You are not expected to mix the HostedEngine and other VMs on the same storage domain (gluster volume).
Best Regards, Strahil Nikolov

Thanks for replies. Here is where it is at: # Two nodes think no VMs exist [root@odin ~]# vdsm-client Host getVMList [] #One showing one VM but down [root@medusa ~]# vdsm-client Host getVMList [ { "status": "Down", "statusTime": "2153886148", "vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1" } ] [root@medusa ~]# vdsm-client Host getAllVmStats [ { "exitCode": 1, "exitMessage": "VM terminated with error", "exitReason": 1, "status": "Down", "statusTime": "2153916276", "vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1" } ] [root@medusa ~]# vdsm-client VM cont vmID="69ab4f82-1a53-42c8-afca-210a3a2715f1" vdsm-client: Command VM.cont with args {'vmID': '69ab4f82-1a53-42c8-afca-210a3a2715f1'} failed: (code=16, message=Unexpected exception) # Assuming that ID represents the hosted-engine I tried to start it [root@medusa ~]# hosted-engine --vm-start The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable. # Back to ovirt-ha-agent being fubar and stoping things. I have about 8 or so VMs on the cluster. Two are my IDM nodes which has DNS and other core services.. which is what I am really trying to get up .. even if manual until I figure out oVirt issue. I think you are correct. "engine" volume is for just the engine. Data is where the other VMs are [root@medusa images]# tree . ├── 335c6b1a-d8a5-4664-9a9c-39744d511af8 │ ├── 579323ad-bf7b-479b-b682-6e1e234a7908 │ ├── 579323ad-bf7b-479b-b682-6e1e234a7908.lease │ └── 579323ad-bf7b-479b-b682-6e1e234a7908.meta ├── d318cb8f-743a-461b-b246-75ffcde6bc5a │ ├── c16877d0-eb23-42ef-a06e-a3221ea915fc │ ├── c16877d0-eb23-42ef-a06e-a3221ea915fc.lease │ └── c16877d0-eb23-42ef-a06e-a3221ea915fc.meta └── junk ├── 296163f2-846d-4a2c-9a4e-83a58640b907 │ ├── 376b895f-e0f2-4387-b038-fbef4705fbcc │ ├── 376b895f-e0f2-4387-b038-fbef4705fbcc.lease │ └── 376b895f-e0f2-4387-b038-fbef4705fbcc.meta ├── 45a478d7-4c1b-43e8-b106-7acc75f066fa │ ├── b5249e6c-0ba6-4302-8e53-b74d2b919d20 │ ├── b5249e6c-0ba6-4302-8e53-b74d2b919d20.lease │ └── b5249e6c-0ba6-4302-8e53-b74d2b919d20.meta ├── d8b708c1-5762-4215-ae1f-0e57444c99ad │ ├── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9 │ ├── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9.lease │ └── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9.meta └── eaf12f3c-301f-4b61-b5a1-0c6d0b0a7f7b ├── fbf3bf59-a23a-4c6f-b66e-71369053b406 ├── fbf3bf59-a23a-4c6f-b66e-71369053b406.lease └── fbf3bf59-a23a-4c6f-b66e-71369053b406.meta 7 directories, 18 files [root@medusa images]# cd /media/engine/ [root@medusa engine]# ls 3afc47ba-afb9-413f-8de5-8d9a2f45ecde [root@medusa engine]# tree . 
└── 3afc47ba-afb9-413f-8de5-8d9a2f45ecde ├── dom_md │ ├── ids │ ├── inbox │ ├── leases │ ├── metadata │ ├── outbox │ └── xleases ├── ha_agent ├── images │ ├── 1dc69552-dcc6-484d-8149-86c93ff4b8cc │ │ ├── e4e26573-09a5-43fa-91ec-37d12de46480 │ │ ├── e4e26573-09a5-43fa-91ec-37d12de46480.lease │ │ └── e4e26573-09a5-43fa-91ec-37d12de46480.meta │ ├── 375d2483-ee83-4cad-b421-a5a70ec06ba6 │ │ ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a │ │ ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a.lease │ │ └── f936d4be-15e3-4983-8bf0-9ba5b97e638a.meta │ ├── 6023f2b1-ea6e-485b-9ac2-8decd5f7820d │ │ ├── b38a5e37-fac4-4c23-a0c4-7359adff619c │ │ ├── b38a5e37-fac4-4c23-a0c4-7359adff619c.lease │ │ └── b38a5e37-fac4-4c23-a0c4-7359adff619c.meta │ ├── 685309b1-1ae9-45f3-90c3-d719a594482d │ │ ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0 │ │ ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.lease │ │ └── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.meta │ ├── 74f1b2e7-2483-4e4d-8301-819bcd99129e │ │ ├── c1888b6a-c48e-46ce-9677-02e172ef07af │ │ ├── c1888b6a-c48e-46ce-9677-02e172ef07af.lease │ │ └── c1888b6a-c48e-46ce-9677-02e172ef07af.meta │ └── 77082dd8-7cb5-41cc-a69f-0f4c0380db23 │ ├── 38d552c5-689d-47b7-9eea-adb308da8027 │ ├── 38d552c5-689d-47b7-9eea-adb308da8027.lease │ └── 38d552c5-689d-47b7-9eea-adb308da8027.meta └── master ├── tasks │ ├── 150927c5-bae6-45e4-842c-a7ba229fc3ba │ │ └── 150927c5-bae6-45e4-842c-a7ba229fc3ba.job.0 │ ├── 21bba697-26e6-4fd8-ac7c-76f86b458368.temp │ ├── 26c580b8-cdb2-4d21-9bea-96e0788025e6.temp │ ├── 2e0e347c-fd01-404f-9459-ef175c82c354.backup │ │ └── 2e0e347c-fd01-404f-9459-ef175c82c354.task │ ├── 43f17022-e003-4e9f-81ec-4a01582223bd.backup │ │ └── 43f17022-e003-4e9f-81ec-4a01582223bd.task │ ├── 5055f61a-4cc8-459f-8fe5-19427b74a4f2.temp │ ├── 6826c8f5-b9df-498e-a576-af0c4e7fe69c │ │ └── 6826c8f5-b9df-498e-a576-af0c4e7fe69c.task │ ├── 78ed90b0-2a87-4c48-8204-03d4b0bd7694 │ │ └── 78ed90b0-2a87-4c48-8204-03d4b0bd7694.job.0 │ ├── 7c7799a5-d28e-4b42-86ee-84bb8822e82f.temp │ ├── 95d29b8c-23d9-4d1a-b995-2ba364970893 │ ├── 95d29b8c-23d9-4d1a-b995-2ba364970893.temp │ ├── a1fa934a-5ea7-4160-ab8c-7e3476dc2676.backup │ │ └── a1fa934a-5ea7-4160-ab8c-7e3476dc2676.task │ ├── bcee8725-efde-4848-a108-01c262625aaa │ │ └── bcee8725-efde-4848-a108-01c262625aaa.job.0 │ ├── c0b5a032-c4a9-4648-b348-c2a5cf4d6cad.temp │ ├── ce7e2ebf-2c28-435d-b359-14d0da2e9011 │ └── ce7e2ebf-2c28-435d-b359-14d0da2e9011.temp └── vms 29 directories, 31 files # Finding the XML file for hosted-engine VM root@medusa /]# cd /gluster_bricks/vmstore/vmstore/qemu/ [root@medusa qemu]# ls ns01.xml ns02.xml [root@medusa qemu]# ls -alh total 36K drwxr-xr-x. 2 root root 38 Sep 17 10:19 . drwxr-xr-x. 8 vdsm kvm 8.0K Jan 15 11:26 .. -rw-------. 2 qemu qemu 4.7K Sep 17 07:19 ns01.xml -rw-------. 2 root root 4.7K Sep 17 10:19 ns02.xml [root@medusa qemu]# cat ns01.xml <!-- WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE OVERWRITTEN AND LOST. Changes to this xml configuration should be made using: virsh edit ns01 or other application using the libvirt API. 
--> <domain type='kvm'> <name>ns01</name> <uuid>0bfd4ad4-b405-4154-94da-ac3261ebc17e</uuid> <title>ns01</title> <memory unit='KiB'>16777216</memory> <currentMemory unit='KiB'>4184064</currentMemory> <vcpu placement='static'>4</vcpu> <os> <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <vmport state='off'/> </features> <cpu mode='host-model' check='partial'> <model fallback='allow'/> </cpu> <clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/media/vmstore/ns01.qcow2'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/media/vmstore/ns01_var.qcow2'/> <target dev='vdb' bus='virtio'/> <snip> # Try to start VM via use of this XML file [root@medusa qemu]# cp /gluster_bricks/vmstore/vmstore/qemu/ns01.xml /tmp [root@medusa qemu]# virsh define /tmp/ns01.xml Please enter your authentication name: admin Please enter your password: Domain ns01 defined from /tmp/ns01.xml [root@medusa qemu]# virsh start /tmp/ns01.xml Please enter your authentication name: admin Please enter your password: error: failed to get domain '/tmp/ns01.xml' [root@medusa qemu]# Hmm... working on it.... but if if you have suggestion / example .. that would help

On Fri, Jan 15, 2021, 22:04 penguin pages <jeremey.wise@gmail.com> wrote:
Thanks for replies.
Here is where it is at:
# Two nodes think no VMs exist [root@odin ~]# vdsm-client Host getVMList []
#One showing one VM but down [root@medusa ~]# vdsm-client Host getVMList [ { "status": "Down", "statusTime": "2153886148", "vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1" } ] [root@medusa ~]# vdsm-client Host getAllVmStats [ { "exitCode": 1, "exitMessage": "VM terminated with error", "exitReason": 1, "status": "Down", "statusTime": "2153916276", "vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1" } ] [root@medusa ~]# vdsm-client VM cont vmID="69ab4f82-1a53-42c8-afca-210a3a2715f1" vdsm-client: Command VM.cont with args {'vmID': '69ab4f82-1a53-42c8-afca-210a3a2715f1'} failed: (code=16, message=Unexpected exception)
# Assuming that ID represents the hosted-engine I tried to start it [root@medusa ~]# hosted-engine --vm-start The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
# Back to ovirt-ha-agent being fubar and stoping things.
I have about 8 or so VMs on the cluster. Two are my IDM nodes which has DNS and other core services.. which is what I am really trying to get up .. even if manual until I figure out oVirt issue. I think you are correct. "engine" volume is for just the engine. Data is where the other VMs are
[root@medusa images]# tree . ├── 335c6b1a-d8a5-4664-9a9c-39744d511af8 │ ├── 579323ad-bf7b-479b-b682-6e1e234a7908 │ ├── 579323ad-bf7b-479b-b682-6e1e234a7908.lease │ └── 579323ad-bf7b-479b-b682-6e1e234a7908.meta ├── d318cb8f-743a-461b-b246-75ffcde6bc5a │ ├── c16877d0-eb23-42ef-a06e-a3221ea915fc │ ├── c16877d0-eb23-42ef-a06e-a3221ea915fc.lease │ └── c16877d0-eb23-42ef-a06e-a3221ea915fc.meta └── junk ├── 296163f2-846d-4a2c-9a4e-83a58640b907 │ ├── 376b895f-e0f2-4387-b038-fbef4705fbcc │ ├── 376b895f-e0f2-4387-b038-fbef4705fbcc.lease │ └── 376b895f-e0f2-4387-b038-fbef4705fbcc.meta ├── 45a478d7-4c1b-43e8-b106-7acc75f066fa │ ├── b5249e6c-0ba6-4302-8e53-b74d2b919d20 │ ├── b5249e6c-0ba6-4302-8e53-b74d2b919d20.lease │ └── b5249e6c-0ba6-4302-8e53-b74d2b919d20.meta ├── d8b708c1-5762-4215-ae1f-0e57444c99ad │ ├── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9 │ ├── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9.lease │ └── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9.meta └── eaf12f3c-301f-4b61-b5a1-0c6d0b0a7f7b ├── fbf3bf59-a23a-4c6f-b66e-71369053b406 ├── fbf3bf59-a23a-4c6f-b66e-71369053b406.lease └── fbf3bf59-a23a-4c6f-b66e-71369053b406.meta
7 directories, 18 files [root@medusa images]# cd /media/engine/ [root@medusa engine]# ls 3afc47ba-afb9-413f-8de5-8d9a2f45ecde [root@medusa engine]# tree . └── 3afc47ba-afb9-413f-8de5-8d9a2f45ecde ├── dom_md │ ├── ids │ ├── inbox │ ├── leases │ ├── metadata │ ├── outbox │ └── xleases ├── ha_agent ├── images │ ├── 1dc69552-dcc6-484d-8149-86c93ff4b8cc │ │ ├── e4e26573-09a5-43fa-91ec-37d12de46480 │ │ ├── e4e26573-09a5-43fa-91ec-37d12de46480.lease │ │ └── e4e26573-09a5-43fa-91ec-37d12de46480.meta │ ├── 375d2483-ee83-4cad-b421-a5a70ec06ba6 │ │ ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a │ │ ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a.lease │ │ └── f936d4be-15e3-4983-8bf0-9ba5b97e638a.meta │ ├── 6023f2b1-ea6e-485b-9ac2-8decd5f7820d │ │ ├── b38a5e37-fac4-4c23-a0c4-7359adff619c │ │ ├── b38a5e37-fac4-4c23-a0c4-7359adff619c.lease │ │ └── b38a5e37-fac4-4c23-a0c4-7359adff619c.meta │ ├── 685309b1-1ae9-45f3-90c3-d719a594482d │ │ ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0 │ │ ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.lease │ │ └── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.meta │ ├── 74f1b2e7-2483-4e4d-8301-819bcd99129e │ │ ├── c1888b6a-c48e-46ce-9677-02e172ef07af │ │ ├── c1888b6a-c48e-46ce-9677-02e172ef07af.lease │ │ └── c1888b6a-c48e-46ce-9677-02e172ef07af.meta │ └── 77082dd8-7cb5-41cc-a69f-0f4c0380db23 │ ├── 38d552c5-689d-47b7-9eea-adb308da8027 │ ├── 38d552c5-689d-47b7-9eea-adb308da8027.lease │ └── 38d552c5-689d-47b7-9eea-adb308da8027.meta └── master ├── tasks │ ├── 150927c5-bae6-45e4-842c-a7ba229fc3ba │ │ └── 150927c5-bae6-45e4-842c-a7ba229fc3ba.job.0 │ ├── 21bba697-26e6-4fd8-ac7c-76f86b458368.temp │ ├── 26c580b8-cdb2-4d21-9bea-96e0788025e6.temp │ ├── 2e0e347c-fd01-404f-9459-ef175c82c354.backup │ │ └── 2e0e347c-fd01-404f-9459-ef175c82c354.task │ ├── 43f17022-e003-4e9f-81ec-4a01582223bd.backup │ │ └── 43f17022-e003-4e9f-81ec-4a01582223bd.task │ ├── 5055f61a-4cc8-459f-8fe5-19427b74a4f2.temp │ ├── 6826c8f5-b9df-498e-a576-af0c4e7fe69c │ │ └── 6826c8f5-b9df-498e-a576-af0c4e7fe69c.task │ ├── 78ed90b0-2a87-4c48-8204-03d4b0bd7694 │ │ └── 78ed90b0-2a87-4c48-8204-03d4b0bd7694.job.0 │ ├── 7c7799a5-d28e-4b42-86ee-84bb8822e82f.temp │ ├── 95d29b8c-23d9-4d1a-b995-2ba364970893 │ ├── 95d29b8c-23d9-4d1a-b995-2ba364970893.temp │ ├── a1fa934a-5ea7-4160-ab8c-7e3476dc2676.backup │ │ └── a1fa934a-5ea7-4160-ab8c-7e3476dc2676.task │ ├── bcee8725-efde-4848-a108-01c262625aaa │ │ └── bcee8725-efde-4848-a108-01c262625aaa.job.0 │ ├── c0b5a032-c4a9-4648-b348-c2a5cf4d6cad.temp │ ├── ce7e2ebf-2c28-435d-b359-14d0da2e9011 │ └── ce7e2ebf-2c28-435d-b359-14d0da2e9011.temp └── vms
29 directories, 31 files
# Finding the XML file for hosted-engine VM root@medusa /]# cd /gluster_bricks/vmstore/vmstore/qemu/ [root@medusa qemu]# ls ns01.xml ns02.xml [root@medusa qemu]# ls -alh total 36K drwxr-xr-x. 2 root root 38 Sep 17 10:19 . drwxr-xr-x. 8 vdsm kvm 8.0K Jan 15 11:26 .. -rw-------. 2 qemu qemu 4.7K Sep 17 07:19 ns01.xml -rw-------. 2 root root 4.7K Sep 17 10:19 ns02.xml [root@medusa qemu]# cat ns01.xml <!-- WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE OVERWRITTEN AND LOST. Changes to this xml configuration should be made using: virsh edit ns01 or other application using the libvirt API. -->
<domain type='kvm'> <name>ns01</name> <uuid>0bfd4ad4-b405-4154-94da-ac3261ebc17e</uuid> <title>ns01</title> <memory unit='KiB'>16777216</memory> <currentMemory unit='KiB'>4184064</currentMemory> <vcpu placement='static'>4</vcpu> <os> <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type> <boot dev='hd'/> </os> <features> <acpi/> <apic/> <vmport state='off'/> </features> <cpu mode='host-model' check='partial'> <model fallback='allow'/> </cpu> <clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/media/vmstore/ns01.qcow2'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/media/vmstore/ns01_var.qcow2'/> <target dev='vdb' bus='virtio'/> <snip>
# Try to start the VM using this XML file
[root@medusa qemu]# cp /gluster_bricks/vmstore/vmstore/qemu/ns01.xml /tmp
[root@medusa qemu]# virsh define /tmp/ns01.xml
Please enter your authentication name: admin
Please enter your password:
Domain ns01 defined from /tmp/ns01.xml

Seems that it was defined.

[root@medusa qemu]# virsh start /tmp/ns01.xml
Please enter your authentication name: admin
Please enter your password:
error: failed to get domain '/tmp/ns01.xml'
You need to start the defined VM, not the XML file. Did you try to connect with:

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf

Otherwise you can also configure libvirt to authenticate with a SASL account. Example:

saslpasswd2 -a libvirt fred

Then at /etc/libvirt/auth.conf:

[credentials-test]
authname=fred
password=123456

[auth-libvirt-medusa]
credentials=test

Check https://libvirt.org/auth.html#ACL_server_username for details.
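In other words, once the domain is defined you start it by its domain name, not by the path to the XML file. A minimal sketch, assuming the domain was defined as 'ns01' as in the XML above:

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
virsh # list --all
virsh # start ns01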
Hmm... working on it.... but if you have a suggestion / example.. that would help.

Thanks for help.... following below. 1) Auth to Libvirtd and show VM "hosted engine" but also now that I manually registered "ns01" per above [root@medusa ~]# vdsm-client Host getVMList [ { "status": "Down", "statusTime": "2218288798", "vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1" } ] [root@medusa ~]# virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf Welcome to virsh, the virtualization interactive terminal. Type: 'help' for help with commands 'quit' to quit virsh # list --all Id Name State ------------------------------------ - HostedEngine shut off - HostedEngineLocal shut off - ns01 shut off 2) Start VM .... but seems network is needed first virsh # start HostedEngine error: Failed to start domain HostedEngine error: Network not found: no network with matching name 'vdsm-ovirtmgmt' virsh # start HostedEngineLocal error: Failed to start domain HostedEngineLocal error: Requested operation is not valid: network 'default' is not active 3) Start Networks: This is "next next" HCI+Gluster build so it called it "ovirtmgmt" virsh # net-list Name State Autostart Persistent ------------------------------------------------ ;vdsmdummy; active no no virsh # net-autostart --network default Network default marked as autostarted virsh # net-start default Network default started virsh # start HostedEngineLocal error: Failed to start domain HostedEngineLocal error: Cannot access storage file '/var/tmp/localvmn4khg_ak/seed.iso': No such file or directory <<<<Hmmm... no idea where that is from or what that is about...>>> virsh # dumpxml HostedEngineLocal <domain type='kvm'> <name>HostedEngineLocal</name> <uuid>bb2006ce-838b-47a3-a049-7e3e5c7bb049</uuid> <metadata> <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0"> <libosinfo:os id="http://redhat.com/rhel/8.0"/> </libosinfo:libosinfo> </metadata> <memory unit='KiB'>16777216</memory> <currentMemory unit='KiB'>16777216</currentMemory> <vcpu placement='static'>4</vcpu> <os> <type arch='x86_64' machine='pc-q35-rhel8.2.0'>hvm</type> <boot dev='hd'/> <bootmenu enable='no'/> </os> <features> <acpi/> <apic/> </features> <cpu mode='host-model' check='partial'/> <clock offset='utc'> <timer name='kvmclock' present='yes'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/var/tmp/localvmn4khg_ak/images/e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/var/tmp/localvmn4khg_ak/seed.iso'/> <target dev='sda' bus='sata'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <controller type='usb' index='0' model='none'/> <controller type='sata' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'/> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model 
name='pcie-root-port'/> <target chassis='2' port='0x11'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0x12'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x13'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0x14'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <interface type='network'> <mac address='00:16:3e:46:a6:60'/> <source network='default'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes'> <listen type='address'/> </graphics> <video> <model type='vga' vram='16384' heads='1' primary='yes'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> </video> <memballoon model='none'/> <rng model='virtio'> <backend model='random'>/dev/random</backend> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </rng> </devices> </domain> virsh # ##So not sure what hosted engine needs an ISO image. Can I remove this? virsh # change-media HostedEngineLocal /var/tmp/localvmn4khg_ak/seed.iso --eject Successfully ejected media. virsh # start HostedEngineLocal error: Failed to start domain HostedEngineLocal error: Cannot access storage file '/var/tmp/localvmn4khg_ak/images/e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645' (as uid:107, gid:107): No such file or directory [root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# tree |grep e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645 [root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# pwd /gluster_bricks/engine/engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde [root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# tree . 
├── dom_md │ ├── ids │ ├── inbox │ ├── leases │ ├── metadata │ ├── outbox │ └── xleases ├── ha_agent │ ├── hosted-engine.lockspace -> /run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/6023f2b1-ea6e-485b-9ac2-8decd5f7820d/b38a5e37-fac4-4c23-a0c4-7359adff619c │ └── hosted-engine.metadata -> /run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/77082dd8-7cb5-41cc-a69f-0f4c0380db23/38d552c5-689d-47b7-9eea-adb308da8027 ├── images │ ├── 1dc69552-dcc6-484d-8149-86c93ff4b8cc │ │ ├── e4e26573-09a5-43fa-91ec-37d12de46480 │ │ ├── e4e26573-09a5-43fa-91ec-37d12de46480.lease │ │ └── e4e26573-09a5-43fa-91ec-37d12de46480.meta │ ├── 375d2483-ee83-4cad-b421-a5a70ec06ba6 │ │ ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a │ │ ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a.lease │ │ └── f936d4be-15e3-4983-8bf0-9ba5b97e638a.meta │ ├── 6023f2b1-ea6e-485b-9ac2-8decd5f7820d │ │ ├── b38a5e37-fac4-4c23-a0c4-7359adff619c │ │ ├── b38a5e37-fac4-4c23-a0c4-7359adff619c.lease │ │ └── b38a5e37-fac4-4c23-a0c4-7359adff619c.meta │ ├── 685309b1-1ae9-45f3-90c3-d719a594482d │ │ ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0 │ │ ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.lease │ │ └── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.meta │ ├── 74f1b2e7-2483-4e4d-8301-819bcd99129e │ │ ├── c1888b6a-c48e-46ce-9677-02e172ef07af │ │ ├── c1888b6a-c48e-46ce-9677-02e172ef07af.lease │ │ └── c1888b6a-c48e-46ce-9677-02e172ef07af.meta │ └── 77082dd8-7cb5-41cc-a69f-0f4c0380db23 │ ├── 38d552c5-689d-47b7-9eea-adb308da8027 │ ├── 38d552c5-689d-47b7-9eea-adb308da8027.lease │ └── 38d552c5-689d-47b7-9eea-adb308da8027.meta └── master ├── tasks │ ├── 150927c5-bae6-45e4-842c-a7ba229fc3ba │ │ └── 150927c5-bae6-45e4-842c-a7ba229fc3ba.job.0 │ ├── 21bba697-26e6-4fd8-ac7c-76f86b458368.temp │ ├── 26c580b8-cdb2-4d21-9bea-96e0788025e6.temp │ ├── 2e0e347c-fd01-404f-9459-ef175c82c354.backup │ │ └── 2e0e347c-fd01-404f-9459-ef175c82c354.task │ ├── 43f17022-e003-4e9f-81ec-4a01582223bd.backup │ │ └── 43f17022-e003-4e9f-81ec-4a01582223bd.task │ ├── 5055f61a-4cc8-459f-8fe5-19427b74a4f2.temp │ ├── 6826c8f5-b9df-498e-a576-af0c4e7fe69c │ │ └── 6826c8f5-b9df-498e-a576-af0c4e7fe69c.task │ ├── 78ed90b0-2a87-4c48-8204-03d4b0bd7694 │ │ └── 78ed90b0-2a87-4c48-8204-03d4b0bd7694.job.0 │ ├── 7c7799a5-d28e-4b42-86ee-84bb8822e82f.temp │ ├── 95d29b8c-23d9-4d1a-b995-2ba364970893 │ ├── 95d29b8c-23d9-4d1a-b995-2ba364970893.temp │ ├── a1fa934a-5ea7-4160-ab8c-7e3476dc2676.backup │ │ └── a1fa934a-5ea7-4160-ab8c-7e3476dc2676.task │ ├── bcee8725-efde-4848-a108-01c262625aaa │ │ └── bcee8725-efde-4848-a108-01c262625aaa.job.0 │ ├── c0b5a032-c4a9-4648-b348-c2a5cf4d6cad.temp │ ├── ce7e2ebf-2c28-435d-b359-14d0da2e9011 │ └── ce7e2ebf-2c28-435d-b359-14d0da2e9011.temp └── vms ############# Ok.. I think I am just making things worse. Questions: 1) Why is this engine showing only on one of the three servers? 2) Why is their "HostedEngine" and "HostedEngineLocal"? 3) These paths for disk do not seem to align with engine file paths on volume. 4) If I redeploy HCI engine via cockpit will it be able to complete build .. I don't care if it wipes the "engine" so long as my data / VMs can be re-ingested. Thanks,
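(For reference: on a healthy setup the HostedEngine VM is normally brought up by the HA agent rather than by hand with virsh, so the usual first checks are along these lines. These are standard hosted-engine / systemctl commands; nothing here is specific to this cluster:)

systemctl status ovirt-ha-agent ovirt-ha-broker
hosted-engine --vm-status
hosted-engine --vm-start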

On Sat, Jan 16, 2021, 16:08 penguin pages <jeremey.wise@gmail.com> wrote:
Thanks for help.... following below.
1) Auth to Libvirtd and show VM "hosted engine" but also now that I manually registered "ns01" per above [root@medusa ~]# vdsm-client Host getVMList [ { "status": "Down", "statusTime": "2218288798", "vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1" } ] [root@medusa ~]# virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands 'quit' to quit
virsh # list --all Id Name State ------------------------------------ - HostedEngine shut off - HostedEngineLocal shut off - ns01 shut off
2) Start VM .... but seems network is needed first virsh # start HostedEngine error: Failed to start domain HostedEngine error: Network not found: no network with matching name 'vdsm-ovirtmgmt'
virsh # start HostedEngineLocal error: Failed to start domain HostedEngineLocal error: Requested operation is not valid: network 'default' is not active
3) Start Networks: This is "next next" HCI+Gluster build so it called it "ovirtmgmt" virsh # net-list Name State Autostart Persistent ------------------------------------------------ ;vdsmdummy; active no no virsh # net-autostart --network default Network default marked as autostarted virsh # net-start default Network default started virsh # start HostedEngineLocal error: Failed to start domain HostedEngineLocal error: Cannot access storage file '/var/tmp/localvmn4khg_ak/seed.iso': No such file or directory
<<<<Hmmm... no idea where that is from or what that is about...>>>
virsh # dumpxml HostedEngineLocal <domain type='kvm'> <name>HostedEngineLocal</name> <uuid>bb2006ce-838b-47a3-a049-7e3e5c7bb049</uuid> <metadata> <libosinfo:libosinfo xmlns:libosinfo=" http://libosinfo.org/xmlns/libvirt/domain/1.0"> <libosinfo:os id="http://redhat.com/rhel/8.0"/> </libosinfo:libosinfo> </metadata> <memory unit='KiB'>16777216</memory> <currentMemory unit='KiB'>16777216</currentMemory> <vcpu placement='static'>4</vcpu> <os> <type arch='x86_64' machine='pc-q35-rhel8.2.0'>hvm</type> <boot dev='hd'/> <bootmenu enable='no'/> </os> <features> <acpi/> <apic/> </features> <cpu mode='host-model' check='partial'/> <clock offset='utc'> <timer name='kvmclock' present='yes'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/var/tmp/localvmn4khg_ak/images/e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/var/tmp/localvmn4khg_ak/seed.iso'/> <target dev='sda' bus='sata'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <controller type='usb' index='0' model='none'/> <controller type='sata' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'/> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x11'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0x12'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x13'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0x14'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <interface type='network'> <mac address='00:16:3e:46:a6:60'/> <source network='default'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <graphics type='vnc' port='-1' autoport='yes'> <listen 
type='address'/> </graphics> <video> <model type='vga' vram='16384' heads='1' primary='yes'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> </video> <memballoon model='none'/> <rng model='virtio'> <backend model='random'>/dev/random</backend> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </rng> </devices> </domain>
virsh #
##So not sure what hosted engine needs an ISO image. Can I remove this? virsh # change-media HostedEngineLocal /var/tmp/localvmn4khg_ak/seed.iso --eject Successfully ejected media.
virsh # start HostedEngineLocal error: Failed to start domain HostedEngineLocal error: Cannot access storage file '/var/tmp/localvmn4khg_ak/images/e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645' (as uid:107, gid:107): No such file or directory [root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# tree |grep e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645 [root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# pwd /gluster_bricks/engine/engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde [root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# tree . ├── dom_md │ ├── ids │ ├── inbox │ ├── leases │ ├── metadata │ ├── outbox │ └── xleases ├── ha_agent │ ├── hosted-engine.lockspace -> /run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/6023f2b1-ea6e-485b-9ac2-8decd5f7820d/b38a5e37-fac4-4c23-a0c4-7359adff619c │ └── hosted-engine.metadata -> /run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/77082dd8-7cb5-41cc-a69f-0f4c0380db23/38d552c5-689d-47b7-9eea-adb308da8027 ├── images │ ├── 1dc69552-dcc6-484d-8149-86c93ff4b8cc │ │ ├── e4e26573-09a5-43fa-91ec-37d12de46480 │ │ ├── e4e26573-09a5-43fa-91ec-37d12de46480.lease │ │ └── e4e26573-09a5-43fa-91ec-37d12de46480.meta │ ├── 375d2483-ee83-4cad-b421-a5a70ec06ba6 │ │ ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a │ │ ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a.lease │ │ └── f936d4be-15e3-4983-8bf0-9ba5b97e638a.meta │ ├── 6023f2b1-ea6e-485b-9ac2-8decd5f7820d │ │ ├── b38a5e37-fac4-4c23-a0c4-7359adff619c │ │ ├── b38a5e37-fac4-4c23-a0c4-7359adff619c.lease │ │ └── b38a5e37-fac4-4c23-a0c4-7359adff619c.meta │ ├── 685309b1-1ae9-45f3-90c3-d719a594482d │ │ ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0 │ │ ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.lease │ │ └── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.meta │ ├── 74f1b2e7-2483-4e4d-8301-819bcd99129e │ │ ├── c1888b6a-c48e-46ce-9677-02e172ef07af │ │ ├── c1888b6a-c48e-46ce-9677-02e172ef07af.lease │ │ └── c1888b6a-c48e-46ce-9677-02e172ef07af.meta │ └── 77082dd8-7cb5-41cc-a69f-0f4c0380db23 │ ├── 38d552c5-689d-47b7-9eea-adb308da8027 │ ├── 38d552c5-689d-47b7-9eea-adb308da8027.lease │ └── 38d552c5-689d-47b7-9eea-adb308da8027.meta └── master ├── tasks │ ├── 150927c5-bae6-45e4-842c-a7ba229fc3ba │ │ └── 150927c5-bae6-45e4-842c-a7ba229fc3ba.job.0 │ ├── 21bba697-26e6-4fd8-ac7c-76f86b458368.temp │ ├── 26c580b8-cdb2-4d21-9bea-96e0788025e6.temp │ ├── 2e0e347c-fd01-404f-9459-ef175c82c354.backup │ │ └── 2e0e347c-fd01-404f-9459-ef175c82c354.task │ ├── 43f17022-e003-4e9f-81ec-4a01582223bd.backup │ │ └── 43f17022-e003-4e9f-81ec-4a01582223bd.task │ ├── 5055f61a-4cc8-459f-8fe5-19427b74a4f2.temp │ ├── 6826c8f5-b9df-498e-a576-af0c4e7fe69c │ │ └── 6826c8f5-b9df-498e-a576-af0c4e7fe69c.task │ ├── 78ed90b0-2a87-4c48-8204-03d4b0bd7694 │ │ └── 78ed90b0-2a87-4c48-8204-03d4b0bd7694.job.0 │ ├── 7c7799a5-d28e-4b42-86ee-84bb8822e82f.temp │ ├── 95d29b8c-23d9-4d1a-b995-2ba364970893 │ ├── 95d29b8c-23d9-4d1a-b995-2ba364970893.temp │ ├── a1fa934a-5ea7-4160-ab8c-7e3476dc2676.backup │ │ └── a1fa934a-5ea7-4160-ab8c-7e3476dc2676.task │ ├── bcee8725-efde-4848-a108-01c262625aaa │ │ └── bcee8725-efde-4848-a108-01c262625aaa.job.0 │ ├── c0b5a032-c4a9-4648-b348-c2a5cf4d6cad.temp │ ├── ce7e2ebf-2c28-435d-b359-14d0da2e9011 │ └── ce7e2ebf-2c28-435d-b359-14d0da2e9011.temp └── vms
#############
Ok.. I think I am just making things worse.
Questions:
1) Why is this engine showing only on one of the three servers?
2) Why is there both a "HostedEngine" and a "HostedEngineLocal"?
HostedEngineLocal should not be listed in any of the hosts. It is a temp vm used during the initial deployment. 3) These paths for disk do not seem to align with engine file paths on
volume. 4) If I redeploy HCI engine via cockpit will it be able to complete build .. I don't care if it wipes the "engine" so long as my data / VMs can be re-ingested.
The normal restoration path for your scenario is to redeploy the engine by wiping out the previous one and importing the engine configuration backup (hope you have one). What happened with the ns01 VM? Did you start it?
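For reference, that flow is roughly the following. This is only a sketch: the file names are placeholders, and --restore-from-file is, as far as I recall from the docs, the supported way to redeploy from an engine-backup archive:

# taken earlier on the engine VM (or any existing backup you still have):
engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log

# then on one host, redeploy and restore in a single pass:
hosted-engine --deploy --restore-from-file=engine-backup.tar.gz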
Thanks,

I do not know what happened to my other VMs. Two are important: ns01 and ns02, which are my IdM cluster nodes and also carry Plex and other utilities / services. Most of the rest are throw-away VMs for testing / OCP / OKD.

I think I may have to redeploy... but my concerns are:
1) CentOS 8 Streams has package conflicts with cockpit and oVirt https://bugzilla.redhat.com/show_bug.cgi?id=1917011
2) I do have a backup.. but I was hoping the deployment could redeploy and reuse the existing PostgreSQL DB and so save a rebuild. The backup I have is weeks old.. and lots of things have changed since. (Need to automate backups to my NAS.. on the todo list now.)

I think I will try to redeploy and see how it goes... Thanks for the help.. I am sure this drama fest is not over. More to come.
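(A rough sketch of what a nightly engine-backup job to a NAS could look like, for the automation item above. The /mnt/nas mount point is only an example, and the schedule is arbitrary; the % in the date format has to be escaped in cron:)

# /etc/cron.d/engine-backup on the engine VM (illustrative only)
0 2 * * * root engine-backup --mode=backup --file=/mnt/nas/engine-$(date +\%F).tar.gz --log=/var/log/engine-backup.log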

[root@medusa qemu]# virsh define /tmp/ns01.xml Please enter your authentication name: admin Please enter your password: Domain ns01 defined from /tmp/ns01.xml
[root@medusa qemu]# virsh start /tmp/ns01.xml Please enter your authentication name: admin Please enter your password: error: failed to get domain '/tmp/ns01.xml'
[root@medusa qemu]#
When you define the file, you start the VM by name. After defining, run 'virsh list'. Based on your xml you should use 'virsh start ns01'.

Notice: As you can see, my HostedEngine uses '/var/run/vdsm/....' instead of '/rhev/data-center/mnt/glusterSD/...', which is actually just a symbolic link.

<disk type='file' device='disk' snapshot='no'>
  <driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads' iothread='1'/>
  <source file='/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/8ec7a465-151e-4ac3-92a7-965ecf854501/a9ab832f-c4f2-4b9b-9d99-6393fd995979'>
    <seclabel model='dac' relabel='no'/>
  </source>
  <backingStore/>
  <target dev='vda' bus='virtio'/>
  <serial>8ec7a465-151e-4ac3-92a7-965ecf854501</serial>
  <alias name='ua-8ec7a465-151e-4ac3-92a7-965ecf854501'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>

When you start the HE, it might complain that this is missing, so you have to create it. If it complains about network vdsm-ovirtmgmt missing, you can also define it via virsh:

# cat vdsm-ovirtmgmt.xml
<network>
  <name>vdsm-ovirtmgmt</name>
  <uuid>8ded486e-e681-4754-af4b-5737c2b05405</uuid>
  <forward mode='bridge'/>
  <bridge name='ovirtmgmt'/>
</network>

Best Regards,
Strahil Nikolov
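(To actually load that network definition, something like the following should do it. This assumes the XML above is saved as /root/vdsm-ovirtmgmt.xml, which is just an example path; net-define, net-start and net-autostart are standard virsh commands, run here over the same authenticated connection as before:)

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf net-define /root/vdsm-ovirtmgmt.xml
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf net-start vdsm-ovirtmgmt
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf net-autostart vdsm-ovirtmgmt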

Following document to redploy engine... https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/htm... #### From Host which had listed engine as in its inventory ### [root@medusa ~]# /usr/sbin/ovirt-hosted-engine-cleanup This will de-configure the host to run ovirt-hosted-engine-setup from scratch. Caution, this operation should be used with care. Are you sure you want to proceed? [y/n] y -=== Destroy hosted-engine VM ===- error: failed to get domain 'HostedEngine' -=== Stop HA services ===- -=== Shutdown sanlock ===- shutdown force 1 wait 0 shutdown done 0 -=== Disconnecting the hosted-engine storage domain ===- -=== De-configure VDSM networks ===- ovirtmgmt A previously configured management bridge has been found on the system, this will try to de-configure it. Under certain circumstances you can loose network connection. Caution, this operation should be used with care. Are you sure you want to proceed? [y/n] y -=== Stop other services ===- Warning: Stopping libvirtd.service, but it can still be activated by: libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket -=== De-configure external daemons ===- Removing database file /var/lib/vdsm/storage/managedvolume.db -=== Removing configuration files ===- ? /etc/init/libvirtd.conf already missing - removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml ? /etc/ovirt-hosted-engine/answers.conf already missing - removing /etc/ovirt-hosted-engine/hosted-engine.conf - removing /etc/vdsm/vdsm.conf - removing /etc/pki/vdsm/certs/cacert.pem - removing /etc/pki/vdsm/certs/vdsmcert.pem - removing /etc/pki/vdsm/keys/vdsmkey.pem - removing /etc/pki/vdsm/libvirt-migrate/ca-cert.pem - removing /etc/pki/vdsm/libvirt-migrate/server-cert.pem - removing /etc/pki/vdsm/libvirt-migrate/server-key.pem - removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem - removing /etc/pki/vdsm/libvirt-spice/server-cert.pem - removing /etc/pki/vdsm/libvirt-spice/server-key.pem - removing /etc/pki/vdsm/libvirt-vnc/ca-cert.pem - removing /etc/pki/vdsm/libvirt-vnc/server-cert.pem - removing /etc/pki/vdsm/libvirt-vnc/server-key.pem - removing /etc/pki/CA/cacert.pem - removing /etc/pki/libvirt/clientcert.pem - removing /etc/pki/libvirt/private/clientkey.pem ? /etc/pki/ovirt-vmconsole/*.pem already missing - removing /var/cache/libvirt/qemu ? /var/run/ovirt-hosted-engine-ha/* already missing ? /var/tmp/localvm* already missing -=== Removing IP Rules ===- [root@medusa ~]# [root@medusa ~]# hosted-engine --deploy [ INFO ] Stage: Initializing [ INFO ] Stage: Environment setup During customization use CTRL-D to abort. Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine. The locally running engine will be used to configure a new storage domain and create a VM there. <snip out just errors as this forum does not allow attachments.. I pasted full output below to keep clutter down> 1) Error about firewall [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 'firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 'masked'' failed. 
The error was: error while evaluating conditional (firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 'masked'): 'dict object' has no attribute 'SubState'\n\nThe error appears to be in '/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml': line 8, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n register: firewalld_s\n - name: Enforce firewalld status\n ^ here\n"} ### Hmm.. that is dumb.. its disabled to avoid issues [root@medusa ~]# systemctl status firewalld ● firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:firewalld(1) 2) Error about ssh to host ovirte01.penguinpages.local [ ERROR ] fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host ovirte01.penguinpages.local port 22: No route to host", "skip_reason": "Host localhost is unreachable", "unreachable": true} ###.. Hmm.. well.. no kidding.. it is suppose to deploy the engine so IP should be offline till it does it. And as VMs to run DNS are down.. I am using hosts file to ignite the enviornment. Not sure what it expects [root@medusa ~]# cat /etc/hosts |grep ovir 172.16.100.31 ovirte01.penguinpages.local ovirte01 Did not go well. Attached is deployment details as well as logs. Maybe someone can point out what I am doing wrong. Last time I did this I did the HCI wizard.. but the hosted engine dashboard for "Virtualization" in cockpit https://172.16.100.101:9090/ovirt-dashboard#/he no longer offers a deployment UI option. ###################### Deployment attempt full details.. most are just <enter> for defaults ###### [root@medusa ~]# /usr/sbin/ovirt-hosted-engine-cleanup This will de-configure the host to run ovirt-hosted-engine-setup from scratch. Caution, this operation should be used with care. Are you sure you want to proceed? [y/n] y -=== Destroy hosted-engine VM ===- error: failed to get domain 'HostedEngine' -=== Stop HA services ===- -=== Shutdown sanlock ===- shutdown force 1 wait 0 shutdown done 0 -=== Disconnecting the hosted-engine storage domain ===- -=== De-configure VDSM networks ===- ovirtmgmt A previously configured management bridge has been found on the system, this will try to de-configure it. Under certain circumstances you can loose network connection. Caution, this operation should be used with care. Are you sure you want to proceed? [y/n] y -=== Stop other services ===- Warning: Stopping libvirtd.service, but it can still be activated by: libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket -=== De-configure external daemons ===- Removing database file /var/lib/vdsm/storage/managedvolume.db -=== Removing configuration files ===- ? /etc/init/libvirtd.conf already missing - removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml ? 
/etc/ovirt-hosted-engine/answers.conf already missing - removing /etc/ovirt-hosted-engine/hosted-engine.conf - removing /etc/vdsm/vdsm.conf - removing /etc/pki/vdsm/certs/cacert.pem - removing /etc/pki/vdsm/certs/vdsmcert.pem - removing /etc/pki/vdsm/keys/vdsmkey.pem - removing /etc/pki/vdsm/libvirt-migrate/ca-cert.pem - removing /etc/pki/vdsm/libvirt-migrate/server-cert.pem - removing /etc/pki/vdsm/libvirt-migrate/server-key.pem - removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem - removing /etc/pki/vdsm/libvirt-spice/server-cert.pem - removing /etc/pki/vdsm/libvirt-spice/server-key.pem - removing /etc/pki/vdsm/libvirt-vnc/ca-cert.pem - removing /etc/pki/vdsm/libvirt-vnc/server-cert.pem - removing /etc/pki/vdsm/libvirt-vnc/server-key.pem - removing /etc/pki/CA/cacert.pem - removing /etc/pki/libvirt/clientcert.pem - removing /etc/pki/libvirt/private/clientkey.pem ? /etc/pki/ovirt-vmconsole/*.pem already missing - removing /var/cache/libvirt/qemu ? /var/run/ovirt-hosted-engine-ha/* already missing ? /var/tmp/localvm* already missing -=== Removing IP Rules ===- [root@medusa ~]# rm -rf storage location/* [root@medusa ~]# screen [root@medusa ~]# hosted-engine --deploy [ INFO ] Stage: Initializing [ INFO ] Stage: Environment setup During customization use CTRL-D to abort. Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine. The locally running engine will be used to configure a new storage domain and create a VM there. At the end the disk of the local VM will be moved to the shared storage. Are you sure you want to continue? (Yes, No)[Yes]: yes Configuration files: Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20210118102601-42olbk.log Version: otopi-1.9.2 (otopi-1.9.2-1.el8) [ INFO ] Stage: Environment packages setup [ INFO ] Stage: Programs detection [ INFO ] Stage: Environment setup (late) [ INFO ] Stage: Environment customization --== STORAGE CONFIGURATION ==-- --== HOST NETWORK CONFIGURATION ==-- [ INFO ] Bridge ovirtmgmt already created Please indicate the gateway IP address [172.16.100.1]: [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Detecting interface on existing management bridge] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get all active network interfaces] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Filter bonds with bad naming] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Generate output list] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Collect interface types] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check for Team devices] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get list of Team devices] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Filter unsupported interface types] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Failed if only teaming devices are available] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Validate selected bridge interface if management bridge does not exist] [ INFO ] skipping: [localhost] Please indicate a nic to set ovirtmgmt bridge on: (enp7s0) [enp7s0]: Please specify which way the network connectivity should be checked (ping, dns, tcp, none) [dns]: --== VM 
CONFIGURATION ==-- Please enter the name of the datacenter where you want to deploy this hosted-engine host. [Default]: Please enter the name of the cluster where you want to deploy this hosted-engine host. [Default]: If you want to deploy with a custom engine appliance image, please specify the path to the OVA archive you would like to use (leave it empty to skip, the setup will use ovirt-engine-appliance rpm installing it if missing): Please specify the number of virtual CPUs for the VM (Defaults to appliance OVF value): [4]: Please specify the memory size of the VM in MB (Defaults to appliance OVF value): [16384]: [ INFO ] Detecting host timezone. Please provide the FQDN you would like to use for the engine. Note: This will be the FQDN of the engine VM you are now going to launch, it should not point to the base host or to any other existing machine. Engine VM FQDN: []: ovirte01.penguinpages.local Please provide the domain name you would like to use for the engine appliance. Engine VM domain: [penguinpages.local] Enter root password that will be used for the engine appliance: Confirm appliance root password: Enter ssh public key for the root user that will be used for the engine appliance (leave it empty to skip): [WARNING] Skipping appliance root ssh public key Do you want to enable ssh access for the root user (yes, no, without-password) [yes]: Do you want to apply a default OpenSCAP security profile (Yes, No) [No]: You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:75:bb:ad]: How should the engine VM network be configured (DHCP, Static)[DHCP]? static Please enter the IP address to be used for the engine VM []: 172.16.100.31 [ INFO ] The engine VM will be configured to use 172.16.100.31/24 Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM Engine VM DNS (leave it empty to skip) [172.16.100.40,8.8.8.8]: 8.8.8.8,172.16.100.40 Add lines for the appliance itself and for this host to /etc/hosts on the engine VM? 
Note: ensuring that this host could resolve the engine VM hostname is still up to you (Yes, No)[No] --== HOSTED ENGINE CONFIGURATION ==-- Please provide the name of the SMTP server through which we will send notifications [localhost]: Please provide the TCP port number of the SMTP server [25]: Please provide the email address from which notifications will be sent [root@localhost]: Please provide a comma-separated list of email addresses which will get notifications [root@localhost]: Enter engine admin password: Confirm engine admin password: [ INFO ] Stage: Setup validation Please provide the hostname of this host on the management network [medusa.penguinpages.local]: [WARNING] Failed to resolve medusa.penguinpages.local using DNS, it can be resolved only locally [ INFO ] Stage: Transaction setup [ INFO ] Stage: Misc configuration (early) [ INFO ] Stage: Package installation [ INFO ] Stage: Misc configuration [ INFO ] Stage: Transaction commit [ INFO ] Stage: Closing up [ INFO ] Cleaning previous attempts [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Install oVirt Hosted Engine packages] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : System configuration validations] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Detecting interface on existing management bridge] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get all active network interfaces] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Filter bonds with bad naming] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Generate output list] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Collect interface types] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check for Team devices] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get list of Team devices] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Filter unsupported interface types] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Failed if only teaming devices are available] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Validate selected bridge interface if management bridge does not exist] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if he_force_ip4 and he_force_ip6 are set at the same time] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Prepare getent key] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get full hostname] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set hostname variable if not defined] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Define host address variable if not defined] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if he_force_ip4 and he_force_ip6 are set at the same time] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Prepare getent key] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get host address resolution] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check address resolution] [ INFO ] skipping: [localhost] [ INFO ] TASK 
[ovirt.ovirt.hosted_engine_setup : Parse host address resolution] [ INFO ] ok: [localhost] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if host's ip is empty] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Avoid localhost] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Ensure host address resolves locally] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get target address from selected interface (IPv4)] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get target address from selected interface (IPv6)] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check the resolved address resolves on the selected interface] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check for alias] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Filter resolved address list] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Ensure the resolved address resolves only on the selected interface] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Avoid localhost] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get engine FQDN resolution] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check engine he_fqdn resolution] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Parse engine he_fqdn resolution] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Ensure engine he_fqdn doesn't resolve locally] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check http/https proxy] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get domain name] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set he_cloud_init_domain_name] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Define he_cloud_init_host_name] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get uuid] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set he_vm_uuid] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get uuid] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set he_nic_uuid] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get uuid] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set he_cdrom_uuid] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : get timezone] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set he_time_zone] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if Data Center name format is incorrect] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Validate Cluster name] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check firewalld status] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce firewalld status] [ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 'firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 'masked'' failed. 
The error was: error while evaluating conditional (firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 'masked'): 'dict object' has no attribute 'SubState'\n\nThe error appears to be in '/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml': line 8, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n register: firewalld_s\n - name: Enforce firewalld status\n ^ here\n"} [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook [ INFO ] Stage: Clean up [ INFO ] Cleaning temporary resources [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch the value of HOST_KEY_CHECKING] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get the username running the deploy] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Register the engine FQDN as a host] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine] [ ERROR ] fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host ovirte01.penguinpages.local port 22: No route to host", "skip_reason": "Host localhost is unreachable", "unreachable": true} [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch logs from the engine VM] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path] [ INFO ] skipping: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove local vm dir] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM] [ INFO ] changed: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Clean local storage pools] [ INFO ] ok: [localhost] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool {{ he_local_vm_dir | basename }}] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool {{ he_local_vm_dir | basename }}] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}] [ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}] [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20210118103412.conf' [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination [ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch. 
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20210118102601-42olbk.log [root@medusa ~]#
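(Given the 'Enforce firewalld status' failure above, the simplest workaround is probably to have firewalld running before retrying the deploy. A minimal sketch, assuming the stock firewalld package is still installed on the host:)

systemctl unmask firewalld        # only needed if the unit was masked
systemctl enable --now firewalld
hosted-engine --deploy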

After looking into logs.. I think issue is about storage where it should deploy. Wizard did not seem to focus on that.. I A$$umed it was aware of volume per previous detected deployment... but... #### 2021-01-18 10:34:07,917-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Clean local storage pools] 2021-01-18 10:34:08,418-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 ok: [localhost] 2021-01-18 10:34:08,919-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool {{ he_local_vm_dir | basename }}] 2021-01-18 10:34:09,320-0500 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 {'msg': 'Unexpected templating type error occurred on (virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} pool-destroy {{ he_local_vm_dir | basename }}): expected str, bytes or os.PathLike object, not NoneType', '_ansible_no_log': False} 2021-01-18 10:34:09,421-0500 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 ignored: [localhost]: FAILED! => {"msg": "Unexpected templating type error occurred on (virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} pool-destroy {{ he_local_vm_dir | basename }}): expected str, bytes or os.PathLike object, not NoneType"} 2021-01-18 10:34:09,821-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool {{ he_local_vm_dir | basename }}] 2021-01-18 10:34:10,223-0500 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 {'msg': 'Unexpected templating type error occurred on (virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} pool-undefine {{ he_local_vm_dir | basename }}): expected str, bytes or os.PathLike object, not NoneType', '_ansible_no_log': False} 2021-01-18 10:34:10,323-0500 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 ignored: [localhost]: FAILED! => {"msg": "Unexpected templating type error occurred on (virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} pool-undefine {{ he_local_vm_dir | basename }}): expected str, bytes or os.PathLike object, not NoneType"} 2021-01-18 10:34:10,724-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}] 2021-01-18 10:34:11,125-0500 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 {'msg': 'The task includes an option with an undefined variable. The error was: \'local_vm_disk_path\' is undefined\n\nThe error appears to be in \'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml\': line 16, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n changed_when: true\n - name: Destroy local storage-pool {{ local_vm_disk_path.split(\'/\')[5] }}\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. 
For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - "{{ foo }}"\n', '_ansible_no_log': False} 2021-01-18 10:34:11,226-0500 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 ignored: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'local_vm_disk_path' is undefined\n\nThe error appears to be in '/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml': line 16, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n changed_when: true\n - name: Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \"{{ foo }}\"\n"} 2021-01-18 10:34:11,626-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}] 2021-01-18 10:34:12,028-0500 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 {'msg': 'The task includes an option with an undefined variable. The error was: \'local_vm_disk_path\' is undefined\n\nThe error appears to be in \'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml\': line 22, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n changed_when: true\n - name: Undefine local storage-pool {{ local_vm_disk_path.split(\'/\')[5] }}\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - "{{ foo }}"\n', '_ansible_no_log': False} 2021-01-18 10:34:12,128-0500 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 ignored: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'local_vm_disk_path' is undefined\n\nThe error appears to be in '/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml': line 22, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n changed_when: true\n - name: Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. 
For instance:\n\n with_items:\n - {{ foo }}\n\nShould be written as:\n\n with_items:\n - \"{{ foo }}\"\n"} 2021-01-18 10:34:12,529-0500 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 PLAY RECAP [localhost] : ok: 22 changed: 4 unreachable: 1 skipped: 3 failed: 0 2021-01-18 10:34:12,630-0500 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:222 ansible-playbook rc: 0 ### [root@medusa ~]# mount |grep engine /dev/mapper/gluster_vg_sdb-gluster_lv_engine on /gluster_bricks/engine type xfs (rw,noatime,nodiratime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota,_netdev,x-systemd.requires=vdo.service) medusast.penguinpages.local:/engine on /media/engine type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072,_netdev) [root@medusa engine]# tree . ├── 3afc47ba-afb9-413f-8de5-8d9a2f45ecde │ ├── dom_md │ │ ├── ids │ │ ├── inbox │ │ ├── leases │ │ ├── metadata │ │ ├── outbox │ │ └── xleases │ ├── ha_agent │ │ ├── hosted-engine.lockspace -> /run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/6023f2b1-ea6e-485b-9ac2-8decd5f7820d/b38a5e37-fac4-4c23-a0c4-7359adff619c │ │ └── hosted-engine.metadata -> /run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/77082dd8-7cb5-41cc-a69f-0f4c0380db23/38d552c5-689d-47b7-9eea-adb308da8027 │ ├── images │ │ ├── 1dc69552-dcc6-484d-8149-86c93ff4b8cc │ │ │ ├── e4e26573-09a5-43fa-91ec-37d12de46480 │ │ │ ├── e4e26573-09a5-43fa-91ec-37d12de46480.lease │ │ │ └── e4e26573-09a5-43fa-91ec-37d12de46480.meta │ │ ├── 375d2483-ee83-4cad-b421-a5a70ec06ba6 │ │ │ ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a │ │ │ ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a.lease │ │ │ └── f936d4be-15e3-4983-8bf0-9ba5b97e638a.meta │ │ ├── 6023f2b1-ea6e-485b-9ac2-8decd5f7820d │ │ │ ├── b38a5e37-fac4-4c23-a0c4-7359adff619c │ │ │ ├── b38a5e37-fac4-4c23-a0c4-7359adff619c.lease │ │ │ └── b38a5e37-fac4-4c23-a0c4-7359adff619c.meta │ │ ├── 685309b1-1ae9-45f3-90c3-d719a594482d │ │ │ ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0 │ │ │ ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.lease │ │ │ └── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.meta │ │ ├── 74f1b2e7-2483-4e4d-8301-819bcd99129e │ │ │ ├── c1888b6a-c48e-46ce-9677-02e172ef07af │ │ │ ├── c1888b6a-c48e-46ce-9677-02e172ef07af.lease │ │ │ └── c1888b6a-c48e-46ce-9677-02e172ef07af.meta │ │ └── 77082dd8-7cb5-41cc-a69f-0f4c0380db23 │ │ ├── 38d552c5-689d-47b7-9eea-adb308da8027 │ │ ├── 38d552c5-689d-47b7-9eea-adb308da8027.lease │ │ └── 38d552c5-689d-47b7-9eea-adb308da8027.meta │ └── master │ ├── tasks │ │ ├── 150927c5-bae6-45e4-842c-a7ba229fc3ba │ │ │ └── 150927c5-bae6-45e4-842c-a7ba229fc3ba.job.0 │ │ ├── 21bba697-26e6-4fd8-ac7c-76f86b458368.temp │ │ ├── 26c580b8-cdb2-4d21-9bea-96e0788025e6.temp │ │ ├── 2e0e347c-fd01-404f-9459-ef175c82c354.backup │ │ │ └── 2e0e347c-fd01-404f-9459-ef175c82c354.task │ │ ├── 43f17022-e003-4e9f-81ec-4a01582223bd.backup │ │ │ └── 43f17022-e003-4e9f-81ec-4a01582223bd.task │ │ ├── 5055f61a-4cc8-459f-8fe5-19427b74a4f2.temp │ │ ├── 6826c8f5-b9df-498e-a576-af0c4e7fe69c │ │ │ └── 6826c8f5-b9df-498e-a576-af0c4e7fe69c.task │ │ ├── 78ed90b0-2a87-4c48-8204-03d4b0bd7694 │ │ │ └── 78ed90b0-2a87-4c48-8204-03d4b0bd7694.job.0 │ │ ├── 7c7799a5-d28e-4b42-86ee-84bb8822e82f.temp │ │ ├── 95d29b8c-23d9-4d1a-b995-2ba364970893 │ │ ├── 95d29b8c-23d9-4d1a-b995-2ba364970893.temp │ │ ├── a1fa934a-5ea7-4160-ab8c-7e3476dc2676.backup │ │ │ └── a1fa934a-5ea7-4160-ab8c-7e3476dc2676.task │ │ ├── bcee8725-efde-4848-a108-01c262625aaa │ │ │ └── bcee8725-efde-4848-a108-01c262625aaa.job.0 
│ │ ├── c0b5a032-c4a9-4648-b348-c2a5cf4d6cad.temp │ │ ├── ce7e2ebf-2c28-435d-b359-14d0da2e9011 │ │ └── ce7e2ebf-2c28-435d-b359-14d0da2e9011.temp │ └── vms ├── wget-log ├── wget-log.1 ├── wget-log.2 └── wget-log.3 29 directories, 37 files [root@medusa engine]# ## Question: 1) Why does engine not prompt for data deployment location?... looking at log of deployment .. "...The error was: \'local_vm_disk_path\' is undefined\..."
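(A likely explanation for that question: in the ansible-based flow the storage questions are only asked after the local engine VM is up and running, so a run that dies at the firewalld pre-check never gets that far, and the 'local_vm_disk_path is undefined' messages are just the cleanup tasks of the aborted run tripping over variables that were never set. If you want to pre-seed the storage answer instead of waiting for the prompt, the answer file looks roughly like this; the key names are taken from old files under /var/lib/ovirt-hosted-engine-setup/answers/, so treat the exact keys and values as assumptions, and the connection string below simply reuses the medusast.penguinpages.local:/engine volume shown in the mount output above:)

[environment:default]
OVEHOSTED_STORAGE/domainType=str:glusterfs
OVEHOSTED_STORAGE/storageDomainConnection=str:medusast.penguinpages.local:/engine

Such a file would then be fed to the deploy with 'hosted-engine --deploy --config-append=<that file>'.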

I think it's complaining about the firewall. Try to restore with firewalld running.
Best Regards,
Strahil Nikolov

On Monday, 18 January 2021 at 17:52:04 GMT+2, penguin pages <jeremey.wise@gmail.com> wrote:

Following this document to redeploy the engine... https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/htm...

#### From the host which had the engine listed in its inventory ###
[root@medusa ~]# /usr/sbin/ovirt-hosted-engine-cleanup
This will de-configure the host to run ovirt-hosted-engine-setup from scratch.
Caution, this operation should be used with care.
Are you sure you want to proceed? [y/n] y
-=== Destroy hosted-engine VM ===-
error: failed to get domain 'HostedEngine'
-=== Stop HA services ===-
-=== Shutdown sanlock ===-
shutdown force 1 wait 0
shutdown done 0
-=== Disconnecting the hosted-engine storage domain ===-
-=== De-configure VDSM networks ===-
ovirtmgmt
A previously configured management bridge has been found on the system, this will try to de-configure it. Under certain circumstances you can loose network connection.
Caution, this operation should be used with care.
Are you sure you want to proceed? [y/n] y
-=== Stop other services ===-
Warning: Stopping libvirtd.service, but it can still be activated by:
 libvirtd.socket
 libvirtd-ro.socket
 libvirtd-admin.socket
-=== De-configure external daemons ===-
Removing database file /var/lib/vdsm/storage/managedvolume.db
-=== Removing configuration files ===-
? /etc/init/libvirtd.conf already missing
- removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml
? /etc/ovirt-hosted-engine/answers.conf already missing
- removing /etc/ovirt-hosted-engine/hosted-engine.conf
- removing /etc/vdsm/vdsm.conf
- removing /etc/pki/vdsm/certs/cacert.pem
- removing /etc/pki/vdsm/certs/vdsmcert.pem
- removing /etc/pki/vdsm/keys/vdsmkey.pem
- removing /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-key.pem
- removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-key.pem
- removing /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-key.pem
- removing /etc/pki/CA/cacert.pem
- removing /etc/pki/libvirt/clientcert.pem
- removing /etc/pki/libvirt/private/clientkey.pem
? /etc/pki/ovirt-vmconsole/*.pem already missing
- removing /var/cache/libvirt/qemu
? /var/run/ovirt-hosted-engine-ha/* already missing
? /var/tmp/localvm* already missing
-=== Removing IP Rules ===-
[root@medusa ~]#
[root@medusa ~]# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
During customization use CTRL-D to abort.
Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine.
The locally running engine will be used to configure a new storage domain and create a VM there.
<snipped to just the errors, as this forum does not allow attachments; I pasted the full output below to keep clutter down>

1) Error about the firewall
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 'firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 'masked'' failed. The error was: error while evaluating conditional (firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 'masked'): 'dict object' has no attribute 'SubState'\n\nThe error appears to be in '/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml': line 8, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n register: firewalld_s\n - name: Enforce firewalld status\n ^ here\n"}

### Hmm.. that is dumb.. it's disabled to avoid issues
[root@medusa ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

2) Error about ssh to host ovirte01.penguinpages.local
[ ERROR ] fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host ovirte01.penguinpages.local port 22: No route to host", "skip_reason": "Host localhost is unreachable", "unreachable": true}

### Hmm.. well, no kidding. It is supposed to deploy the engine, so its IP should be offline until it does. And as the VMs that run DNS are down, I am using the hosts file to ignite the environment. Not sure what it expects.
[root@medusa ~]# cat /etc/hosts |grep ovir
172.16.100.31 ovirte01.penguinpages.local ovirte01

It did not go well. Attached are the deployment details as well as logs. Maybe someone can point out what I am doing wrong. Last time I did this I used the HCI wizard, but the hosted engine dashboard for "Virtualization" in cockpit https://172.16.100.101:9090/ovirt-dashboard#/he no longer offers a deployment UI option.
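On error 1, the failing pre-check in validate_firewalld.yml reads firewalld's systemd status, and with the service disabled and inactive there appears to be no SubState fact left to evaluate, which is why the conditional blows up with this message. The usual way past the check, in line with the advice above, is to have firewalld running before retrying the deploy. A minimal sketch, assuming it is acceptable to run firewalld on this host during deployment:

  # make sure firewalld is loaded and running before re-running hosted-engine --deploy
  systemctl unmask firewalld                       # only needed if it was ever masked
  systemctl enable --now firewalld
  systemctl show -p LoadState,SubState firewalld   # expect LoadState=loaded and SubState=running
  firewall-cmd --state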
###################### Deployment attempt full details.. most are just <enter> for defaults ######
[root@medusa ~]# /usr/sbin/ovirt-hosted-engine-cleanup
This will de-configure the host to run ovirt-hosted-engine-setup from scratch.
Caution, this operation should be used with care.
Are you sure you want to proceed? [y/n] y
-=== Destroy hosted-engine VM ===-
error: failed to get domain 'HostedEngine'
-=== Stop HA services ===-
-=== Shutdown sanlock ===-
shutdown force 1 wait 0
shutdown done 0
-=== Disconnecting the hosted-engine storage domain ===-
-=== De-configure VDSM networks ===-
ovirtmgmt
A previously configured management bridge has been found on the system, this will try to de-configure it. Under certain circumstances you can loose network connection.
Caution, this operation should be used with care.
Are you sure you want to proceed? [y/n] y
-=== Stop other services ===-
Warning: Stopping libvirtd.service, but it can still be activated by:
 libvirtd.socket
 libvirtd-ro.socket
 libvirtd-admin.socket
-=== De-configure external daemons ===-
Removing database file /var/lib/vdsm/storage/managedvolume.db
-=== Removing configuration files ===-
? /etc/init/libvirtd.conf already missing
- removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml
? /etc/ovirt-hosted-engine/answers.conf already missing
- removing /etc/ovirt-hosted-engine/hosted-engine.conf
- removing /etc/vdsm/vdsm.conf
- removing /etc/pki/vdsm/certs/cacert.pem
- removing /etc/pki/vdsm/certs/vdsmcert.pem
- removing /etc/pki/vdsm/keys/vdsmkey.pem
- removing /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-key.pem
- removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-key.pem
- removing /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-key.pem
- removing /etc/pki/CA/cacert.pem
- removing /etc/pki/libvirt/clientcert.pem
- removing /etc/pki/libvirt/private/clientkey.pem
? /etc/pki/ovirt-vmconsole/*.pem already missing
- removing /var/cache/libvirt/qemu
? /var/run/ovirt-hosted-engine-ha/* already missing
? /var/tmp/localvm* already missing
-=== Removing IP Rules ===-
[root@medusa ~]# rm -rf storage location/*
[root@medusa ~]# screen
[root@medusa ~]# hosted-engine --deploy
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
During customization use CTRL-D to abort.
Continuing will configure this host for serving as hypervisor and will create a local VM with a running engine.
The locally running engine will be used to configure a new storage domain and create a VM there.
At the end the disk of the local VM will be moved to the shared storage.
Are you sure you want to continue? (Yes, No)[Yes]: yes
Configuration files:
Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20210118102601-42olbk.log
Version: otopi-1.9.2 (otopi-1.9.2-1.el8)
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup (late)
[ INFO ] Stage: Environment customization

--== STORAGE CONFIGURATION ==--

--== HOST NETWORK CONFIGURATION ==--

[ INFO ] Bridge ovirtmgmt already created
Please indicate the gateway IP address [172.16.100.1]:
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Detecting interface on existing management bridge]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get all active network interfaces]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Filter bonds with bad naming]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Generate output list]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Collect interface types]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check for Team devices]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get list of Team devices]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Filter unsupported interface types]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Failed if only teaming devices are available]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Validate selected bridge interface if management bridge does not exist]
[ INFO ] skipping: [localhost]
Please indicate a nic to set ovirtmgmt bridge on: (enp7s0) [enp7s0]:
Please specify which way the network connectivity should be checked (ping, dns, tcp, none) [dns]:
--== VM CONFIGURATION ==--

Please enter the name of the datacenter where you want to deploy this hosted-engine host. [Default]:
Please enter the name of the cluster where you want to deploy this hosted-engine host. [Default]:
If you want to deploy with a custom engine appliance image, please specify the path to the OVA archive you would like to use (leave it empty to skip, the setup will use ovirt-engine-appliance rpm installing it if missing):
Please specify the number of virtual CPUs for the VM (Defaults to appliance OVF value): [4]:
Please specify the memory size of the VM in MB (Defaults to appliance OVF value): [16384]:
[ INFO ] Detecting host timezone.
Please provide the FQDN you would like to use for the engine.
Note: This will be the FQDN of the engine VM you are now going to launch, it should not point to the base host or to any other existing machine.
Engine VM FQDN: []: ovirte01.penguinpages.local
Please provide the domain name you would like to use for the engine appliance.
Engine VM domain: [penguinpages.local]
Enter root password that will be used for the engine appliance:
Confirm appliance root password:
Enter ssh public key for the root user that will be used for the engine appliance (leave it empty to skip):
[WARNING] Skipping appliance root ssh public key
Do you want to enable ssh access for the root user (yes, no, without-password) [yes]:
Do you want to apply a default OpenSCAP security profile (Yes, No) [No]:
You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:75:bb:ad]:
How should the engine VM network be configured (DHCP, Static)[DHCP]? static
Please enter the IP address to be used for the engine VM []: 172.16.100.31
[ INFO ] The engine VM will be configured to use 172.16.100.31/24
Please provide a comma-separated list (max 3) of IP addresses of domain name servers for the engine VM
Engine VM DNS (leave it empty to skip) [172.16.100.40,8.8.8.8]: 8.8.8.8,172.16.100.40
Add lines for the appliance itself and for this host to /etc/hosts on the engine VM?
Note: ensuring that this host could resolve the engine VM hostname is still up to you (Yes, No)[No]

--== HOSTED ENGINE CONFIGURATION ==--

Please provide the name of the SMTP server through which we will send notifications [localhost]:
Please provide the TCP port number of the SMTP server [25]:
Please provide the email address from which notifications will be sent [root@localhost]:
Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
Enter engine admin password:
Confirm engine admin password:
[ INFO ] Stage: Setup validation
Please provide the hostname of this host on the management network [medusa.penguinpages.local]:
[WARNING] Failed to resolve medusa.penguinpages.local using DNS, it can be resolved only locally
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration (early)
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
[ INFO ] Cleaning previous attempts
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Install oVirt Hosted Engine packages]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : System configuration validations]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Detecting interface on existing management bridge]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get all active network interfaces]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Filter bonds with bad naming]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Generate output list]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Collect interface types]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check for Team devices]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get list of Team devices]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Filter unsupported interface types]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Failed if only teaming devices are available]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Validate selected bridge interface if management bridge does not exist]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if he_force_ip4 and he_force_ip6 are set at the same time]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Prepare getent key]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get full hostname]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set hostname variable if not defined]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Define host address variable if not defined]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if he_force_ip4 and he_force_ip6 are set at the same time]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Prepare getent key]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get host address resolution]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check address resolution]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Parse host address resolution]
[ INFO ] ok: [localhost]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if host's ip is empty]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Avoid localhost]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Ensure host address resolves locally]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get target address from selected interface (IPv4)]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get target address from selected interface (IPv6)]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check the resolved address resolves on the selected interface]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check for alias]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Filter resolved address list]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Ensure the resolved address resolves only on the selected interface]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Avoid localhost]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get engine FQDN resolution]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check engine he_fqdn resolution]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Parse engine he_fqdn resolution]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Ensure engine he_fqdn doesn't resolve locally]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check http/https proxy]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get domain name]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set he_cloud_init_domain_name]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Define he_cloud_init_host_name]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get uuid]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set he_vm_uuid]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get uuid]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set he_nic_uuid]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get uuid]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set he_cdrom_uuid]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : get timezone]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set he_time_zone]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fail if Data Center name format is incorrect]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Validate Cluster name]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Check firewalld status]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Enforce firewalld status]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 'firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 'masked'' failed. The error was: error while evaluating conditional (firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 'masked'): 'dict object' has no attribute 'SubState'\n\nThe error appears to be in '/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml': line 8, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n register: firewalld_s\n - name: Enforce firewalld status\n ^ here\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook
[ INFO ] Stage: Clean up
[ INFO ] Cleaning temporary resources
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Execute just a specific set of steps]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Force facts gathering]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch the value of HOST_KEY_CHECKING]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Get the username running the deploy]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Register the engine FQDN as a host]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ ERROR ] fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host ovirte01.penguinpages.local port 22: No route to host", "skip_reason": "Host localhost is unreachable", "unreachable": true}
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Fetch logs from the engine VM]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove local vm dir]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Remove temporary entry in /etc/hosts for the local VM]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Clean local storage pools]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool {{ he_local_vm_dir | basename }}]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool {{ he_local_vm_dir | basename }}]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
[ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20210118103412.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch.
Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20210118102601-42olbk.log
[root@medusa ~]#
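On error 2, the transcript above shows the setup first creates a local bootstrap VM and only moves it to shared storage at the end, so before retrying it may be worth confirming that the engine FQDN resolves consistently (hosts file or DNS) on every host and that nothing else is already answering on that address. A minimal sketch of that sanity check, reusing the FQDN and IP from this thread:

  # confirm how the engine FQDN and the host's own name resolve on this host
  getent hosts ovirte01.penguinpages.local     # expected: 172.16.100.31 from /etc/hosts
  getent hosts medusa.penguinpages.local       # the wizard warned this only resolves locally
  # before a successful deployment nothing should answer on the engine IP;
  # if something replies, that address is already in use
  ping -c 2 172.16.100.31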
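One further hedged note on re-running the wizard: each attempt generates an answer file (the path is printed near the end of the transcript above), and hosted-engine can re-read it so the interactive prompts do not all have to be retyped. A rough sketch, assuming the answer file from this failed run is still appropriate to reuse:

  # re-run the deployment, pre-seeding the prompts from the previous attempt's answer file
  hosted-engine --deploy --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-20210118103412.conf

If any of the previous answers are suspect, it is probably safer to run the interactive wizard again from scratch after fixing firewalld and name resolution.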