Unable to remove host from ovirt engine
by Joseph Gelinas
Recently our host and oVirt engine certificates expired, and with some ideas from Strahil we were able to get two of the three oVirt hosts updated with usable certificates and move all of our VMs to those two nodes.
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/QCFPKQ3OKPOU...
Not having any luck with the last host, we figured we'd just try to remove it from the oVirt engine and re-add it. While `hosted-engine --vm-status` on one node no longer shows the removed host, the other good host and the web interface still show ovirt-1 in the mix. What is the best way to remove a NonResponsive host from oVirt and re-add it?
[root@ovirt-1 ~]# hosted-engine --vm-status
The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable.
[root@ovirt-2 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host ovirt-3.xxxxx.com (id: 2) status ==--
Host ID : 2
Host timestamp : 12515451
Score : 3274
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : ovirt-3.xxxxx.com
Local maintenance : False
stopped : False
crc32 : 9cf92792
conf_on_shared_storage : True
local_conf_timestamp : 12515451
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=12515451 (Mon Apr 25 14:08:51 2022)
host-id=2
score=3274
vm_conf_refresh_time=12515451 (Mon Apr 25 14:08:51 2022)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host ovirt-2.xxxxx.com (id: 3) status ==--
Host ID : 3
Host timestamp : 12513269
Score : 3400
Engine status : {"vm": "up", "health": "good", "detail": "Up"}
Hostname : ovirt-2.xxxxx.com
Local maintenance : False
stopped : False
crc32 : 4a89d706
conf_on_shared_storage : True
local_conf_timestamp : 12513269
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=12513269 (Mon Apr 25 14:09:00 2022)
host-id=3
score=3400
vm_conf_refresh_time=12513269 (Mon Apr 25 14:09:00 2022)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
[root@ovirt-3 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host ovirt-1.xxxxx.com (id: 1) status ==--
Host ID : 1
Host timestamp : 6750990
Score : 0
Engine status : unknown stale-data
Hostname : ovirt-1.xxxxx.com
Local maintenance : False
stopped : True
crc32 : 5290657b
conf_on_shared_storage : True
local_conf_timestamp : 6750950
Status up-to-date : False
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=6750990 (Thu Feb 17 22:17:53 2022)
host-id=1
score=0
vm_conf_refresh_time=6750950 (Thu Feb 17 22:17:12 2022)
conf_on_shared_storage=True
maintenance=False
state=AgentStopped
stopped=True
--== Host ovirt-3.xxxxx.com (id: 2) status ==--
Host ID : 2
Host timestamp : 12515501
Score : 3279
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Hostname : ovirt-3.xxxxx.com
Local maintenance : False
stopped : False
crc32 : 0845cd93
conf_on_shared_storage : True
local_conf_timestamp : 12515501
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=12515501 (Mon Apr 25 14:09:42 2022)
host-id=2
score=3279
vm_conf_refresh_time=12515501 (Mon Apr 25 14:09:42 2022)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
--== Host ovirt-2.xxxxx.com (id: 3) status ==--
Host ID : 3
Host timestamp : 12513309
Score : 3400
Engine status : {"vm": "up", "health": "good", "detail": "Up"}
Hostname : ovirt-2.xxxxx.com
Local maintenance : False
stopped : False
crc32 : 237726e0
conf_on_shared_storage : True
local_conf_timestamp : 12513309
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=12513309 (Mon Apr 25 14:09:39 2022)
host-id=3
score=3400
vm_conf_refresh_time=12513309 (Mon Apr 25 14:09:39 2022)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
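A commonly suggested approach for this situation (a sketch only; exact behavior depends on your oVirt version, and the host id below is an example taken from the output above, so verify it first) is to force the dead host into Maintenance and remove it in the Administration Portal, then clear its leftover hosted-engine metadata from one of the healthy nodes so it stops appearing in --vm-status:
hosted-engine --clean-metadata --host-id=1 --force-clean
After that, the host can be reinstalled and re-added through Compute > Hosts in the web interface.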
Issue upgrading 4.4 to 4.5 Gluster HCG
by Abe E
Hey All,
I am having an issue upgrading from 4.4 to 4.5.
My setup:
3 Node Gluster (Cluster 1) + 3 Node Cluster (Cluster 2)
If I recall correctly, this is the process I followed last week:
On all my Nodes:
dnf install -y centos-release-ovirt45 --enablerepo=extras
On Ovirt Engine:
dnf install -y centos-release-ovirt45
dnf update -y --nobest
engine-setup
Once the engine was upgraded successfully I ran the upgrade from the GUI on the Cluster 2 nodes one by one, although when they came back they complained of "Host failed to attach one of the Storage Domains attached to it," which refers to the "hosted_storage" and "data" (gluster) domains.
I thought maybe it's due to the fact that 4.5 brings an update to the glusterfs version, so I decided to upgrade Node 3 in my Gluster cluster, and it booted to emergency mode after the install "succeeded".
I feel like I did something wrong, aside from my bravery in upgrading so much before realizing something's not right.
Here are the VDSM logs from one of the nodes that fails to connect to storage (FYI I have two networks, one for management and one for storage, and both are up):
[root@ovirt-4 ~]# tail -f /var/log/vdsm/vdsm.log
2022-04-25 22:41:31,584-0600 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, task_id=8370855e-dea6-4168-870a-d6235d9044e9 (api:54)
2022-04-25 22:41:31,584-0600 INFO (jsonrpc/3) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, task_id=14eb199a-7fbf-4638-a6bf-a384dfbb9d2c (api:48)
2022-04-25 22:41:31,584-0600 INFO (jsonrpc/3) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, task_id=14eb199a-7fbf-4638-a6bf-a384dfbb9d2c (api:54)
2022-04-25 22:41:31,602-0600 INFO (periodic/1) [vdsm.api] START repoStats(domains=()) from=internal, task_id=08a5c00b-1f66-493f-a408-d4006ddaa959 (api:48)
2022-04-25 22:41:31,603-0600 INFO (periodic/1) [vdsm.api] FINISH repoStats return={} from=internal, task_id=08a5c00b-1f66-493f-a408-d4006ddaa959 (api:54)
2022-04-25 22:41:31,606-0600 INFO (jsonrpc/3) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:35,393-0600 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:35,393-0600 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:39,366-0600 INFO (jsonrpc/2) [api.host] START getAllVmStats() from=::1,53634 (api:48)
2022-04-25 22:41:39,366-0600 INFO (jsonrpc/2) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::1,53634 (api:54)
2022-04-25 22:41:46,530-0600 INFO (jsonrpc/1) [api.host] START getStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:46,568-0600 INFO (jsonrpc/1) [vdsm.api] START repoStats(domains=()) from=::ffff:172.17.117.80,38712, task_id=30404767-9761-4f8c-884a-5561dd0d82fe (api:48)
2022-04-25 22:41:46,568-0600 INFO (jsonrpc/1) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, task_id=30404767-9761-4f8c-884a-5561dd0d82fe (api:54)
2022-04-25 22:41:46,569-0600 INFO (jsonrpc/1) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, task_id=8dbfa47f-e1b7-408c-a060-8d45012f0b90 (api:48)
2022-04-25 22:41:46,569-0600 INFO (jsonrpc/1) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, task_id=8dbfa47f-e1b7-408c-a060-8d45012f0b90 (api:54)
2022-04-25 22:41:46,574-0600 INFO (jsonrpc/1) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:46,651-0600 INFO (periodic/0) [vdsm.api] START repoStats(domains=()) from=internal, task_id=92c69020-d0b1-4813-8610-3f3e1892c20b (api:48)
2022-04-25 22:41:46,652-0600 INFO (periodic/0) [vdsm.api] FINISH repoStats return={} from=internal, task_id=92c69020-d0b1-4813-8610-3f3e1892c20b (api:54)
2022-04-25 22:41:50,397-0600 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:50,397-0600 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:52,533-0600 INFO (jsonrpc/4) [api.host] START getCapabilities() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:54,382-0600 INFO (jsonrpc/0) [api.host] START getAllVmStats() from=::1,53634 (api:48)
2022-04-25 22:41:54,382-0600 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::1,53634 (api:54)
2022-04-25 22:41:55,037-0600 INFO (jsonrpc/4) [root] /usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=b'' (hooks:123)
2022-04-25 22:41:55,039-0600 INFO (jsonrpc/4) [api.host] FINISH getCapabilities return={'status': {'code': 0, 'message': 'Done'}, 'info': {'kvmEnabled': 'true', 'cpuCores': '6', 'cpuThreads': '12', 'cpuSockets': '1', 'onlineCpus': '0,1,2,3,4,5,6,7,8,9,10,11', 'cpuTopology': [{'cpu_id': 0, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 0}, {'cpu_id': 1, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 1}, {'cpu_id': 2, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 2}, {'cpu_id': 3, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 3}, {'cpu_id': 4, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 4}, {'cpu_id': 5, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 5}, {'cpu_id': 6, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 0}, {'cpu_id': 7, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 1}, {'cpu_id': 8, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 2}, {'cpu_id': 9, 'numa_cell_id'
: 0, 'socket_id': 0, 'die_id': 0, 'core_id': 3}, {'cpu_id': 10, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 4}, {'cpu_id': 11, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 5}], 'cpuSpeed': '2500.000', 'cpuModel': 'Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz', 'cpuFlags': 'pdcm,xsaveopt,dtes64,xtpr,clflush,de,ibpb,popcnt,cpuid,ida,monitor,amd-stibp,x2apic,lm,arat,pse36,tsc_deadline_timer,fxsr,ht,skip-l1dfl-vmentry,est,pcid,aperfmperf,nopl,apic,mce,xsave,ibrs,flush_l1d,dtherm,dts,flexpriority,pse,pdpe1gb,pni,sse2,pge,cx16,pschange-mc-no,bts,rdtscp,dca,avx,hypervisor,tsc,tsc_adjust,nx,mmx,pebs,ss,umip,xtopology,vnmi,arch-capabilities,pae,pclmulqdq,tm,aes,invtsc,md_clear,ssse3,amd-ssbd,ssbd,sse4_1,smx,rep_good,vmx,cx8,sse,arch_perfmon,msr,stibp,nonstop_tsc,pti,ds_cpl,mca,cmov,md-clear,fpu,lahf_lm,tm2,sep,tpr_shadow,constant_tsc,pbe,pat,syscall,sse4_2,pln,acpi,mtrr,pts,vme,ept,vpid,spec_ctrl,model_pentium,model_Nehalem,model_486,model_SandyBridge,model_pentium2,
model_Opteron_G1,model_Nehalem-IBRS,model_qemu32,model_kvm32,model_coreduo,model_Westmere,model_SandyBridge-IBRS,model_Westmere-IBRS,model_Penryn,model_pentium3,model_qemu64,model_Conroe,model_kvm64,model_core2duo,model_Opteron_G2', 'vdsmToCpusAffinity': [1], 'version_name': 'Snow Man', 'software_version': '4.50.0.13', 'software_revision': '1', 'supportedENGINEs': ['4.2', '4.3', '4.4', '4.5'], 'clusterLevels': ['4.2', '4.3', '4.4', '4.5', '4.6', '4.7'], 'networks': {'ovirtmgmt': {'ports': ['bond0'], 'stp': 'off', 'iface': 'ovirtmgmt', 'bridged': True, 'addr': '172.17.117.74', 'netmask': '255.255.255.0', 'ipv4addrs': ['172.17.117.74/24'], 'ipv6addrs': [], 'ipv6autoconf': True, 'gateway': '172.17.117.1', 'ipv6gateway': '::', 'ipv4defaultroute': True, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0', 'dhcpv4': False, 'dhcpv6': True}, 'LabNet-v106': {'ports': ['bond0.106'], 'stp': 'off', 'iface': 'LabNet-v106', 'bridged': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6ad
drs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.106', 'vlanid': 106, 'dhcpv4': False, 'dhcpv6': False}, 'PIP_V991': {'ports': ['bond0.991'], 'stp': 'off', 'iface': 'PIP_V991', 'bridged': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.991', 'vlanid': 991, 'dhcpv4': False, 'dhcpv6': False}, 'NetEng-V3101': {'ports': ['bond0.3101'], 'stp': 'off', 'iface': 'NetEng-V3101', 'bridged': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.3101', 'vlanid': 3101, 'dhcpv4': False, 'dhcpv6': False}, 'OVIRT-VMs': {'ports': ['bond0.177'], 'stp': 'off', 'iface': 'OVIRT-VMs', 'bridged
': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.177', 'vlanid': 177, 'dhcpv4': False, 'dhcpv6': False}, 'Gluster_Net': {'ports': ['bond1'], 'stp': 'off', 'iface': 'Gluster_Net', 'bridged': True, 'addr': '172.17.181.13', 'netmask': '255.255.255.0', 'ipv4addrs': ['172.17.181.13/24'], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '172.17.181.1', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond1', 'dhcpv4': False, 'dhcpv6': False}}, 'bondings': {'bond0': {'hwaddr': 'c8:1f:66:f6:e5:48', 'slaves': ['eno1', 'eno2'], 'active_slave': '', 'opts': {'mode': '4', 'xmit_hash_policy': '2'}, 'ad_aggregator_id': '1', 'ad_partner_mac': '4c:4e:35:26:2c:00', 'switch': 'legacy', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'm
tu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'speed': 2000}, 'bond1': {'hwaddr': 'c8:1f:66:f6:e5:4a', 'slaves': ['eno4', 'eno3'], 'active_slave': '', 'opts': {'mode': '4', 'xmit_hash_policy': '2'}, 'ad_aggregator_id': '1', 'ad_partner_mac': '4c:4e:35:26:2c:00', 'switch': 'legacy', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'speed': 2000}}, 'bridges': {'Gluster_Net': {'ports': ['bond1'], 'stp': 'off', 'addr': '172.17.181.13', 'ipv4addrs': ['172.17.181.13/24'], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '172.17.181.1', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '148', 'hello_time': '200', 'multicast_router': '1', 'nf_call_ip
tables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e54a', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e54a', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_
stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'ovirtmgmt': {'ports': ['bond0'], 'stp': 'off', 'addr': '172.17.117.74', 'ipv4addrs': ['172.17.117.74/24'], 'ipv6addrs': [], 'ipv6autoconf': True, 'gateway': '172.17.117.1', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4defaultroute': True, 'dhcpv4': False, 'dhcpv6': True, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '148', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_r
esponse_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'LabNet-v106': {'ports': ['bond0.106'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '1
8247', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_i
nterval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'NetEng-V3101': {'ports': ['bond0.3101'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '6208', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0'
, 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'PIP_V991': {'ports': ['bond0.991'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0
', 'gc_timer': '15686', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multi
cast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'OVIRT-VMs': {'ports': ['bond0.177'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '20761', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats
_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}}, 'nics': {'eno3': {'hwaddr': 'c8:1f:66:f6:e5:4a', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:4a', 'dhcpv4': False, 'dhcpv6':
False, 'speed': 1000}, 'eno4': {'hwaddr': 'c8:1f:66:f6:e5:4a', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:4b', 'dhcpv4': False, 'dhcpv6': False, 'speed': 1000}, 'eno1': {'hwaddr': 'c8:1f:66:f6:e5:48', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:48', 'dhcpv4': False, 'dhcpv6': False, 'speed': 1000}, 'eno2': {'hwaddr': 'c8:1f:66:f6:e5:48', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:49', 'dhcpv4': False, 'dhcpv6': False, 'speed': 1000}}, 'vlans': {'bond0.106': {'iface': 'bo
nd0', 'vlanid': 106, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}, 'bond0.991': {'iface': 'bond0', 'vlanid': 991, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}, 'bond0.177': {'iface': 'bond0', 'vlanid': 177, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}, 'bond0.3101': {'iface': 'bond0', 'vlanid': 3101, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}}, 'nameservers': ['8.8.8.8'], 'supportsIPv6':
True, 'ovnConfigured': False, 'hooks': {'before_vm_start': {'50_hostedengine': {'checksum': 'e5f5262cf22e06cd34e227afb27647e479351266876019a64210dbcbd2a43830'}}, 'after_get_caps': {'ovirt_provider_ovn_hook': {'checksum': 'a2bdefca38b96c8ddab39822cc8282bf3f67d875c4879003ffc9661826c92421'}}, 'before_device_create': {'10_ovirt_provider_ovn_hook': {'checksum': 'b9d67afb41bd86a521ada2121e3505297b3b6dfd478275ce0bd9501fcda6dafc'}}, 'before_nic_hotplug': {'10_ovirt_provider_ovn_hook': {'checksum': 'b9d67afb41bd86a521ada2121e3505297b3b6dfd478275ce0bd9501fcda6dafc'}}}, 'operatingSystem': {'release': '1.el8', 'version': '8.6.2203.0', 'name': 'RHEL', 'pretty_name': 'oVirt Node 4.5.0'}, 'uuid': '4c4c4544-0053-5810-8052-c8c04f563132', 'packages2': {'kernel': {'version': '4.18.0', 'release': '373.el8.x86_64'}, 'glusterfs-cli': {'version': '10.1', 'release': '1.el8s'}, 'librbd1': {'version': '16.2.7', 'release': '1.el8s'}, 'libvirt': {'version': '8.0.0', 'release': '2.module_el8.6.0+1087+b42c8331'}
, 'mom': {'version': '0.6.2', 'release': '1.el8'}, 'ovirt-hosted-engine-ha': {'version': '2.5.0', 'release': '1.el8'}, 'openvswitch': {'version': '2.15', 'release': '3.el8'}, 'nmstate': {'version': '1.2.1', 'release': '1.el8'}, 'qemu-img': {'version': '6.2.0', 'release': '5.module_el8.6.0+1087+b42c8331'}, 'qemu-kvm': {'version': '6.2.0', 'release': '5.module_el8.6.0+1087+b42c8331'}, 'spice-server': {'version': '0.14.3', 'release': '4.el8'}, 'vdsm': {'version': '4.50.0.13', 'release': '1.el8'}, 'glusterfs': {'version': '10.1', 'release': '1.el8s'}, 'glusterfs-fuse': {'version': '10.1', 'release': '1.el8s'}, 'glusterfs-geo-replication': {'version': '10.1', 'release': '1.el8s'}, 'glusterfs-server': {'version': '10.1', 'release': '1.el8s'}}, 'realtimeKernel': False, 'kernelArgs': 'BOOT_IMAGE=(hd0,msdos1)//ovirt-node-ng-4.5.0-0.20220420.0+1/vmlinuz-4.18.0-373.el8.x86_64 crashkernel=auto resume=/dev/mapper/onn-swap rd.lvm.lv=onn/ovirt-node-ng-4.5.0-0.20220420.0+1 rd.lvm.lv=onn/swap rhgb q
uiet kvm-intel.nested=1 root=/dev/onn/ovirt-node-ng-4.5.0-0.20220420.0+1 boot=UUID=adb2035d-5047-471d-8b51-206e0afb39f4 rootflags=discard img.bootid=ovirt-node-ng-4.5.0-0.20220420.0+1', 'nestedVirtualization': True, 'emulatedMachines': ['pc-q35-rhel8.6.0', 'pc-i440fx-rhel7.1.0', 'pc-q35-rhel8.2.0', 'pc-q35-rhel7.6.0', 'pc-i440fx-rhel7.3.0', 'pc-i440fx-rhel7.6.0', 'pc-q35-rhel8.5.0', 'pc-q35-rhel8.0.0', 'pc-i440fx-rhel7.2.0', 'pc', 'pc-q35-rhel7.3.0', 'pc-i440fx-rhel7.4.0', 'q35', 'pc-i440fx-2.11', 'pc-q35-rhel7.4.0', 'pc-i440fx-rhel7.5.0', 'pc-i440fx-rhel7.0.0', 'pc-q35-rhel7.5.0', 'pc-i440fx-4.2', 'pc-q35-rhel8.3.0', 'pc-q35-rhel8.1.0', 'pc-q35-rhel8.4.0'], 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:ea2a5da196cb', 'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:ea2a5da196cb'}], 'FC': []}, 'vmTypes': ['kvm'], 'memSize': '43996', 'reservedMem': '321', 'guestOverhead': '65', 'rngSources': ['random', 'hwrng'], 'numaNodes': {'0': {'totalMemory': '43996', 'hugepag
es': {'4': {'totalPages': '11263209'}, '2048': {'totalPages': '0'}, '1048576': {'totalPages': '0'}}, 'cpus': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]}}, 'numaNodeDistance': {'0': [10]}, 'autoNumaBalancing': 2, 'selinux': {'mode': '1'}, 'liveSnapshot': 'true', 'liveMerge': 'true', 'kdumpStatus': 0, 'deferred_preallocation': True, 'hostdevPassthrough': 'false', 'additionalFeatures': ['libgfapi_supported', 'GLUSTER_SNAPSHOT', 'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'hostedEngineDeployed': False, 'hugepages': [2048, 1048576], 'kernelFeatures': {'SPECTRE_V2': '(Mitigation: Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling)', 'ITLB_MULTIHIT': '(KVM: Mitigation: VMX disabled)', 'MDS': '(Mitigation: Clear CPU buffers; SMT vulnerable)', 'L1TF': '(Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable)', 'SPEC_STORE_BYPASS': '(Mitigation: Speculative Store Bypass disabled via prctl and seccomp)', 'TSX_ASYNC_ABORT': '(Not affec
ted)', 'SPECTRE_V1': '(Mitigation: usercopy/swapgs barriers and __user pointer sanitization)', 'SRBDS': '(Not affected)', 'MELTDOWN': '(Mitigation: PTI)'}, 'vncEncrypted': True, 'backupEnabled': True, 'coldBackupEnabled': True, 'clearBitmapsEnabled': True, 'fipsEnabled': False, 'boot_uuid': 'adb2035d-5047-471d-8b51-206e0afb39f4', 'tscFrequency': '1999999000', 'tscScaling': False, 'connector_info': {'platform': 'x86_64', 'os_type': 'linux', 'ip': None, 'host': 'ovirt-4.[removed].com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ea2a5da196cb', 'do_local_attach': False, 'uuid': '215601b1-e536-4258-ad35-d1f869afa0f8', 'system uuid': '4c4c4544-0053-5810-8052-c8c04f563132', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000', 'found_dsc': ''}, 'domain_versions': [0, 2, 3, 4, 5], 'supported_block_size': {'FCP': [512], 'GLUSTERFS': [0, 512, 4096], 'ISCSI': [512], 'LOCALFS': [0, 512, 4096], 'NFS': [512], 'POSIXFS': [512]}, 'cd_change_pdiv': True, 'refres
h_disk_supported': True, 'replicate_extend': True, 'measure_subchain': True, 'measure_active': True, 'mailbox_events': True, 'netConfigDirty': 'False', 'openstack_binding_host_ids': {'OVIRT_PROVIDER_OVN': 'eaa82268-bd08-453f-9953-b4aad4c4c307'}, 'lastClientIface': 'ovirtmgmt'}} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:55,046-0600 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities took more than 1.00 seconds to succeed: 2.51 (__init__:316)
2022-04-25 22:41:55,058-0600 INFO (jsonrpc/7) [api.host] START getHardwareInfo() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:55,058-0600 INFO (jsonrpc/7) [api.host] FINISH getHardwareInfo return={'status': {'code': 0, 'message': 'Done'}, 'info': {'systemManufacturer': ' ', 'systemProductName': ' ', 'systemVersion': '', 'systemSerialNumber': 'HSXRV12', 'systemUUID': '4C4C4544-0053-5810-8052-C8C04F563132', 'systemFamily': ''}} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:55,121-0600 INFO (jsonrpc/3) [api.host] START getStats() from=::ffff:172.17.117.80,38712, flow_id=610b2f2d (api:48)
2022-04-25 22:41:55,160-0600 INFO (jsonrpc/3) [vdsm.api] START repoStats(domains=()) from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=de30e54a-21c6-46ff-a669-230145f93ae2 (api:48)
2022-04-25 22:41:55,160-0600 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=de30e54a-21c6-46ff-a669-230145f93ae2 (api:54)
2022-04-25 22:41:55,161-0600 INFO (jsonrpc/3) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=b751b6d0-bc99-457b-9620-4ffaaaa37c91 (api:48)
2022-04-25 22:41:55,161-0600 INFO (jsonrpc/3) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=b751b6d0-bc99-457b-9620-4ffaaaa37c91 (api:54)
2022-04-25 22:41:55,166-0600 INFO (jsonrpc/3) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712, flow_id=610b2f2d (api:54)
2022-04-25 22:41:55,346-0600 INFO (jsonrpc/5) [vdsm.api] START connectStorageServer(domType=7, spUUID='3115b136-9981-11ec-90ae-00163e6f31f1', conList=[{'password': '********', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 'backup-volfile-servers=gluster-2.[removed].com:gluster-3.[removed].com', 'iqn': '', 'connection': 'gluster-1.[removed].com:/engine', 'ipv6_enabled': 'false', 'id': '98ea63c0-e8c4-4857-8ef7-3cb256d45997', 'user': '', 'tpgt': '1'}]) from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=6a175be8-fdd0-4d2a-8b79-1866aff21159 (api:48)
2022-04-25 22:41:55,478-0600 ERROR (jsonrpc/5) [storage.storageServer] Could not connect to storage server (storageServer:92)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 90, in connect_all
con.connect()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 233, in connect
self.validate()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 365, in validate
if not self.volinfo:
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 352, in volinfo
self._volinfo = self._get_gluster_volinfo()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 405, in _get_gluster_volinfo
self._volfileserver)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeInfo
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
raise convert_to_error(kind, result)
vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() err=[b'<cliOutput>\n <opRet>0</opRet>\n <opErrno>0</opErrno>\n <opErrstr />\n <volInfo>\n <volumes>\n <volume>\n <name>engine</name>\n <id>51bb4ddb-dfbc-4376-85cd-d7070e287946</id>\n <status>1</status>\n <statusStr>Started</statusStr>\n <snapshotCount>0</snapshotCount>\n <brickCount>3</brickCount>\n <distCount>1</distCount>\n <replicaCount>3</replicaCount>\n <arbiterCount>0</arbiterCount>\n <disperseCount>0</disperseCount>\n <redundancyCount>0</redundancyCount>\n <type>2</type>\n <typeStr>Replicate</typeStr>\n <transport>0</transport>\n <bricks>\n <brick uuid="1679e76e-938c-4da4-b7f7-5161c5badcd3">gluster-1.[removed].com:/gluster_bricks/engine/engine<name>gluster-1.[removed].com:/gluster_bricks/engine/engine</name><hostUuid>1679e76e-938c-4da4-b7f7-5161c5badcd3</hostUuid><isArbiter>0</isArb
iter></brick>\n <brick uuid="530faa98-b564-45d5-8495-8e4006a7628c">gluster-2.[removed].com:/gluster_bricks/engine/engine<name>gluster-2.[removed].com:/gluster_bricks/engine/engine</name><hostUuid>530faa98-b564-45d5-8495-8e4006a7628c</hostUuid><isArbiter>0</isArbiter></brick>\n <brick uuid="06c45945-8a79-476b-9a02-483113191e69">gluster-3.[removed].com:/gluster_bricks/engine/engine<name>gluster-3.[removed].com:/gluster_bricks/engine/engine</name><hostUuid>06c45945-8a79-476b-9a02-483113191e69</hostUuid><isArbiter>0</isArbiter></brick>\n </bricks>\n <optCount>31</optCount>\n <options>\n <option>\n <name>cluster.granular-entry-heal</name>\n <value>enable</value>\n </option>\n <option>\n <name>storage.owner-gid</name>\n <value>36</value>\n </option>\n <option>\n <name>storage.owner-uid</name>\n <value>36</value>\n </option>\n
<option>\n <name>cluster.lookup-optimize</name>\n <value>off</value>\n </option>\n <option>\n <name>server.keepalive-count</name>\n <value>5</value>\n </option>\n <option>\n <name>server.keepalive-interval</name>\n <value>2</value>\n </option>\n <option>\n <name>server.keepalive-time</name>\n <value>10</value>\n </option>\n <option>\n <name>server.tcp-user-timeout</name>\n <value>20</value>\n </option>\n <option>\n <name>network.ping-timeout</name>\n <value>30</value>\n </option>\n <option>\n <name>server.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>client.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>clu
ster.choose-local</name>\n <value>off</value>\n </option>\n <option>\n <name>user.cifs</name>\n <value>off</value>\n </option>\n <option>\n <name>features.shard</name>\n <value>on</value>\n </option>\n <option>\n <name>cluster.shd-wait-qlength</name>\n <value>10000</value>\n </option>\n <option>\n <name>cluster.shd-max-threads</name>\n <value>8</value>\n </option>\n <option>\n <name>cluster.locking-scheme</name>\n <value>granular</value>\n </option>\n <option>\n <name>cluster.data-self-heal-algorithm</name>\n <value>full</value>\n </option>\n <option>\n <name>cluster.server-quorum-type</name>\n <value>server</value>\n </option>\n <option>\n <name>cluster.quorum-type</n
ame>\n <value>auto</value>\n </option>\n <option>\n <name>cluster.eager-lock</name>\n <value>enable</value>\n </option>\n <option>\n <name>performance.strict-o-direct</name>\n <value>on</value>\n </option>\n <option>\n <name>network.remote-dio</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.low-prio-threads</name>\n <value>32</value>\n </option>\n <option>\n <name>performance.io-cache</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.read-ahead</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.quick-read</name>\n <value>off</value>\n </option>\n <option>\n <name>storage.fips-mode-rchecksum</name>\n
<value>on</value>\n </option>\n <option>\n <name>transport.address-family</name>\n <value>inet</value>\n </option>\n <option>\n <name>nfs.disable</name>\n <value>on</value>\n </option>\n <option>\n <name>performance.client-io-threads</name>\n <value>on</value>\n </option>\n </options>\n </volume>\n <count>1</count>\n </volumes>\n </volInfo>\n</cliOutput>']
2022-04-25 22:41:55,478-0600 INFO (jsonrpc/5) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-25 22:41:55,478-0600 INFO (jsonrpc/5) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '98ea63c0-e8c4-4857-8ef7-3cb256d45997', 'status': 4106}]} from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=6a175be8-fdd0-4d2a-8b79-1866aff21159 (api:54)
2022-04-25 22:41:55,515-0600 INFO (jsonrpc/2) [vdsm.api] START connectStorageServer(domType=7, spUUID='3115b136-9981-11ec-90ae-00163e6f31f1', conList=[{'password': '********', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 'backup-volfile-servers=gluster-2.[removed].com:gluster-3.[removed].com', 'iqn': '', 'connection': 'gluster-1.[removed].com:/data', 'ipv6_enabled': 'false', 'id': '170e1dda-af02-4ff3-808b-16dc814e750a', 'user': '', 'tpgt': '1'}]) from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=9d0ded6d-e19e-41f8-85b4-52b18bd3380d (api:48)
2022-04-25 22:41:55,647-0600 ERROR (jsonrpc/2) [storage.storageServer] Could not connect to storage server (storageServer:92)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 90, in connect_all
con.connect()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 233, in connect
self.validate()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 365, in validate
if not self.volinfo:
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 352, in volinfo
self._volinfo = self._get_gluster_volinfo()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 405, in _get_gluster_volinfo
self._volfileserver)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeInfo
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
raise convert_to_error(kind, result)
vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() err=[b'<cliOutput>\n <opRet>0</opRet>\n <opErrno>0</opErrno>\n <opErrstr />\n <volInfo>\n <volumes>\n <volume>\n <name>data</name>\n <id>06ce0d34-b4b4-472c-9cec-24ffe934ed05</id>\n <status>1</status>\n <statusStr>Started</statusStr>\n <snapshotCount>0</snapshotCount>\n <brickCount>3</brickCount>\n <distCount>1</distCount>\n <replicaCount>3</replicaCount>\n <arbiterCount>0</arbiterCount>\n <disperseCount>0</disperseCount>\n <redundancyCount>0</redundancyCount>\n <type>2</type>\n <typeStr>Replicate</typeStr>\n <transport>0</transport>\n <bricks>\n <brick uuid="1679e76e-938c-4da4-b7f7-5161c5badcd3">gluster-1.[removed].com:/gluster_bricks/data/data<name>gluster-1.[removed].com:/gluster_bricks/data/data</name><hostUuid>1679e76e-938c-4da4-b7f7-5161c5badcd3</hostUuid><isArbiter>0</isArbiter></bri
ck>\n <brick uuid="530faa98-b564-45d5-8495-8e4006a7628c">gluster-2.[removed].com:/gluster_bricks/data/data<name>gluster-2.[removed].com:/gluster_bricks/data/data</name><hostUuid>530faa98-b564-45d5-8495-8e4006a7628c</hostUuid><isArbiter>0</isArbiter></brick>\n <brick uuid="06c45945-8a79-476b-9a02-483113191e69">gluster-3.[removed].com:/gluster_bricks/data/data<name>gluster-3.[removed].com:/gluster_bricks/data/data</name><hostUuid>06c45945-8a79-476b-9a02-483113191e69</hostUuid><isArbiter>0</isArbiter></brick>\n </bricks>\n <optCount>31</optCount>\n <options>\n <option>\n <name>cluster.granular-entry-heal</name>\n <value>enable</value>\n </option>\n <option>\n <name>storage.owner-gid</name>\n <value>36</value>\n </option>\n <option>\n <name>storage.owner-uid</name>\n <value>36</value>\n </option>\n <option>\n <
name>cluster.lookup-optimize</name>\n <value>off</value>\n </option>\n <option>\n <name>server.keepalive-count</name>\n <value>5</value>\n </option>\n <option>\n <name>server.keepalive-interval</name>\n <value>2</value>\n </option>\n <option>\n <name>server.keepalive-time</name>\n <value>10</value>\n </option>\n <option>\n <name>server.tcp-user-timeout</name>\n <value>20</value>\n </option>\n <option>\n <name>network.ping-timeout</name>\n <value>30</value>\n </option>\n <option>\n <name>server.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>client.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>cluster.choose-local</name>\n
<value>off</value>\n </option>\n <option>\n <name>user.cifs</name>\n <value>off</value>\n </option>\n <option>\n <name>features.shard</name>\n <value>on</value>\n </option>\n <option>\n <name>cluster.shd-wait-qlength</name>\n <value>10000</value>\n </option>\n <option>\n <name>cluster.shd-max-threads</name>\n <value>8</value>\n </option>\n <option>\n <name>cluster.locking-scheme</name>\n <value>granular</value>\n </option>\n <option>\n <name>cluster.data-self-heal-algorithm</name>\n <value>full</value>\n </option>\n <option>\n <name>cluster.server-quorum-type</name>\n <value>server</value>\n </option>\n <option>\n <name>cluster.quorum-type</name>\n <value>a
uto</value>\n </option>\n <option>\n <name>cluster.eager-lock</name>\n <value>enable</value>\n </option>\n <option>\n <name>performance.strict-o-direct</name>\n <value>on</value>\n </option>\n <option>\n <name>network.remote-dio</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.low-prio-threads</name>\n <value>32</value>\n </option>\n <option>\n <name>performance.io-cache</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.read-ahead</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.quick-read</name>\n <value>off</value>\n </option>\n <option>\n <name>storage.fips-mode-rchecksum</name>\n <value>on</value>\n
</option>\n <option>\n <name>transport.address-family</name>\n <value>inet</value>\n </option>\n <option>\n <name>nfs.disable</name>\n <value>on</value>\n </option>\n <option>\n <name>performance.client-io-threads</name>\n <value>on</value>\n </option>\n </options>\n </volume>\n <count>1</count>\n </volumes>\n </volInfo>\n</cliOutput>']
2022-04-25 22:41:55,647-0600 INFO (jsonrpc/2) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-25 22:41:55,647-0600 INFO (jsonrpc/2) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '170e1dda-af02-4ff3-808b-16dc814e750a', 'status': 4106}]} from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=9d0ded6d-e19e-41f8-85b4-52b18bd3380d (api:54)
2022-04-25 22:41:55,682-0600 INFO (jsonrpc/1) [vdsm.api] START connectStorageServer(domType=1, spUUID='3115b136-9981-11ec-90ae-00163e6f31f1', conList=[{'password': '********', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 'ovirt-2.[removed].com:/nfs2/data', 'ipv6_enabled': 'false', 'id': '311165ae-bfbf-4f51-994f-051aef56d94f', 'user': '', 'tpgt': '1'}, {'password': '********', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 'ovirt-1.[removed].com:/nfs1/data', 'ipv6_enabled': 'false', 'id': 'ec52f74c-a041-4e3a-9aae-5f1c6629d77f', 'user': '', 'tpgt': '1'}]) from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=bc135c71-2b43-4627-b3d3-2eb0a4d25227 (api:48)
2022-04-25 22:41:55,688-0600 INFO (jsonrpc/1) [storage.storagedomaincache] Removing domain ddeb67aa-9ec8-488b-9632-5cc19a244815 from storage domain cache (sdc:211)
2022-04-25 22:41:55,689-0600 INFO (jsonrpc/1) [storage.storagedomaincache] Removing domain e1ae9b1a-7aa4-4072-b92e-5e967f5a2ee7 from storage domain cache (sdc:211)
2022-04-25 22:41:55,689-0600 INFO (jsonrpc/1) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-25 22:41:55,689-0600 INFO (jsonrpc/1) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '311165ae-bfbf-4f51-994f-051aef56d94f', 'status': 0}, {'id': 'ec52f74c-a041-4e3a-9aae-5f1c6629d77f', 'status': 0}]} from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=bc135c71-2b43-4627-b3d3-2eb0a4d25227 (api:54)
2022-04-25 22:42:01,696-0600 INFO (periodic/0) [vdsm.api] START repoStats(domains=()) from=internal, task_id=9b42900c-cfd0-4e56-b6f8-07a340497993 (api:48)
2022-04-25 22:42:01,697-0600 INFO (periodic/0) [vdsm.api] FINISH repoStats return={} from=internal, task_id=9b42900c-cfd0-4e56-b6f8-07a340497993 (api:54)
2022-04-25 22:42:04,539-0600 INFO (jsonrpc/6) [api.host] START getStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:42:04,577-0600 INFO (jsonrpc/6) [vdsm.api] START repoStats(domains=()) from=::ffff:172.17.117.80,38712, task_id=a97abe37-ce3b-4745-8cfc-89d0d01bfc74 (api:48)
2022-04-25 22:42:04,577-0600 INFO (jsonrpc/6) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, task_id=a97abe37-ce3b-4745-8cfc-89d0d01bfc74 (api:54)
2022-04-25 22:42:04,578-0600 INFO (jsonrpc/6) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, task_id=b054a35e-e4ea-4402-963b-5b0416a30692 (api:48)
2022-04-25 22:42:04,578-0600 INFO (jsonrpc/6) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, task_id=b054a35e-e4ea-4402-963b-5b0416a30692 (api:54)
2022-04-25 22:42:04,583-0600 INFO (jsonrpc/6) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
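The relevant failure above is the GlusterXmlErrorException: supervdsm invokes the gluster CLI with --xml and then fails while parsing the returned volume info, even though the CLI exits with rc=0 and the XML payload is present. One way to reproduce what vdsm sees, outside of vdsm (a diagnostic sketch; the server and volume names are placeholders for the redacted ones above):
gluster --remote-host=gluster-1.example.com volume info engine --xml
Comparing the glusterfs client version on the upgraded node with the server version on the not-yet-upgraded Gluster cluster (rpm -q glusterfs-cli glusterfs-fuse on each side) would show whether a client/server version mismatch is producing XML output the parser chokes on.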
Re: 4.5.0.1 ovirt hosted-engine deployment fails
by Strahil Nikolov
Obviously ens192 is not the right interface. Check where the IP is actually assigned and use that address.
Best Regards,
Strahil Nikolov
On Fri, Apr 29, 2022 at 16:48, Mohamed Roushdy <mohamedroushdy(a)peopleintouch.com> wrote:
Hello,
I've researched this problem a bit, but none of the proposed solutions fixed it. I'm trying to deploy oVirt 4.5.0.1 in my lab, and the installation fails with the following error:
[error screenshot not preserved in the archive]
I’ve even tried to delete the default network bridge (as suggested in some articles), but this didn’t help either. The node has 3 network interfaces, and the hosts file points only to the management interface.
Thank you,
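To illustrate the suggestion (a generic sketch; the gateway address is a placeholder, not taken from the original report), the interface that actually carries the management IP can be identified on the host with:
ip -4 addr show
ip route get 192.168.1.1   # replace with your gateway; shows the egress interface
The hosted-engine deployment should then be pointed at that interface and address instead of ens192.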
Webadmin and API down with error 500 after updating from 4.4 to 4.5
by lists@pequod.io
Hello,
Today I updated from 4.4 to 4.5 and I am no longer able to access my oVirt cluster. Accessing the oVirt web interface fails with "500 - Internal Server Error". The API is also dead; my backup software and Foreman are no longer able to talk to oVirt.
I rebooted the host and ran engine-setup again; it completed without issues but the engine is still dead. If I run it again, it now tells me that my cluster is not in global maintenance mode, but "hosted-engine --vm-status" tells me it is still in maintenance mode.
One suspect thing I found in server.log is this:
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:324)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory.createMetaDataProvider(CallMetaDataProviderFactory.java:70)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataContext.initializeMetaData(CallMetaDataContext.java:252)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.simple.AbstractJdbcCall.compileInternal(AbstractJdbcCall.java:313)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.compileInternal(PostgresDbEngineDialect.java:106)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.simple.AbstractJdbcCall.compile(AbstractJdbcCall.java:296)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.getCall(SimpleJdbcCallsHandler.java:157)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:134)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:105)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dao.TagDaoImpl.getAllForParent(TagDaoImpl.java:82)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.addChildren(TagsDirector.java:116)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.init(TagsDirector.java:75)
... 64 more
2022-04-26 12:32:32,129+02 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 60) WFLYUT0021: Registered web context: '/ovirt-engine/sso' for server 'default-server'
2022-04-26 12:32:32,137+02 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "engine.ear")]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.deployment.subunit.\"engine.ear\".\"bll.jar\".component.Backend.START" => "java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
Caused by: javax.ejb.EJBException: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@21c5d21d
Caused by: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@21c5d21d
Caused by: java.lang.reflect.InvocationTargetException
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Unable to determine the correct call signature - no procedure/function/signature for 'gettagsbyparent_id'"}}
2022-04-26 12:32:32,158+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "ovirt-web-ui.war" (runtime-name : "ovirt-web-ui.war")
2022-04-26 12:32:32,159+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "apidoc.war" (runtime-name : "apidoc.war")
2022-04-26 12:32:32,159+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "restapi.war" (runtime-name : "restapi.war")
2022-04-26 12:32:32,159+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "engine.ear" (runtime-name : "engine.ear")
2022-04-26 12:32:32,167+02 INFO [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0183: Service status report
WFLYCTL0186: Services which failed to start: service jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
WFLYCTL0448: 2 additional services are down due to their dependencies being missing or failed
2022-04-26 12:32:32,211+02 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
2022-04-26 12:32:32,226+02 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: WildFly Full 24.0.1.Final (WildFly Core 16.0.1.Final) started (with errors) in 18363ms - Started 1670 of 1890 services (6 services failed or missing dependencies, 393 services are lazy, passive or on-demand)
2022-04-26 12:32:32,230+02 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:8706/management
2022-04-26 12:32:32,231+02 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:8706
Anyone got an idea what the reason may be? I am a bit lost here.
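The root cause in the stack trace is the missing database procedure 'gettagsbyparent_id', which suggests the engine database schema was not fully upgraded to the 4.5 level. A first check (a sketch; it assumes the default local PostgreSQL setup and the default database name 'engine') is whether the procedure exists at all:
su - postgres -c "psql engine -c '\df *tagsbyparent*'"
If it is missing, re-running engine-setup and inspecting /var/log/ovirt-engine/setup/ for schema-upgrade errors would be the next step.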
understanding if real engine offline deployment is possible
by vk@itiviti.com
Hi Everyone,
In my company, we are trying to deploy the engine on 2 RHEL8 hosts we have already installed.
We don't have direct internet access, so the RHEL8 hosts have been set up using some internal EL repos (using Red Hat Satellite).
We have also duplicated the necessary ovirt repositories internally, so all ovirt packages can be installed.
Now the blocking part is the deployment of the engine. Is it really possible to deploy an engine without having an internet connection?
We tried several times but never succeeded.
I tried with the ansible extra var "he_offline_deployment=true", naively thinking it would download the necessary packages for the engine through the repositories already configured on the physical hosts (as if the physical host acted as a proxy).
I also tried by specifying the ova file with he_appliance_ova=/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.4-20211020135049.1.el8.ova
both options have also been tried together (--ansible-extra-vars=he_appliance_ova=/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.4-20211020135049.1.el8.ova --ansible-extra-vars=he_offline_deployment=true)
But in the end, it seems the engine deployment process requires the engine itself to reach the ovirt internet repositories, as it always fails with:
2022-04-24 17:39:53,268+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:110 fatal: [localhost -> 192.168.1.154]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'ovirt-4.4-centos-ceph-pacific': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "rc": 1, "results": []}
FYI, the pacific repo works fine when we download packages on the physical hosts.
Another thing to know: before being able to use our internal repos on the Red Hat Satellite, a system needs to install the Satellite certificate and register to the Satellite.
It would be great if we could achieve a fully offline engine deployment (meaning no internet access at all, including from the engine itself), but we are running out of clues as to whether it is really possible.
Here are all the ovirt packages installed on the physical hosts:
$ rpm -qa | grep ovirt
ovirt-ansible-collection-1.6.5-1.el8.noarch
ovirt-imageio-daemon-2.3.0-1.el8.x86_64
ovirt-host-4.4.9-2.el8.x86_64
ovirt-engine-appliance-4.4-20211020135049.1.el8.x86_64
ovirt-imageio-common-2.3.0-1.el8.x86_64
python3-ovirt-engine-sdk4-4.4.15-1.el8.x86_64
ovirt-host-dependencies-4.4.9-2.el8.x86_64
ovirt-hosted-engine-setup-2.5.4-2.el8.noarch
ovirt-imageio-client-2.3.0-1.el8.x86_64
ovirt-vmconsole-host-1.0.9-1.el8.noarch
ovirt-provider-ovn-driver-1.2.34-1.el8.noarch
cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
python3-ovirt-setup-lib-1.3.2-1.el8.noarch
ovirt-hosted-engine-ha-2.4.9-1.el8.noarch
ovirt-vmconsole-1.0.9-1.el8.noarch
Thanks a lot in advance.
2 years, 6 months
ERROR! couldn't resolve module/action 'vdo'
by yp414@163.com
I newly installed ovirt-node-ng-installer-4.5.0-2022042607.el8.
Deploying via the single-node Gluster wizard fails with the following error:
Apr 30 20:03:12 pm-1 cockpit-ws[6288]: ERROR! couldn't resolve module/action 'vdo'. This often indicates a misspelling, missing collection, or incorrect module path.
Apr 30 20:03:12 pm-1 cockpit-ws[6288]: The error appears to be in '/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml': line 53, column 3, but may
Apr 30 20:03:12 pm-1 cockpit-ws[6288]: be elsewhere in the file depending on the exact syntax problem.
Apr 30 20:03:12 pm-1 cockpit-ws[6288]: The offending line appears to be:
Apr 30 20:03:12 pm-1 cockpit-ws[6288]: - name: Create VDO with specified size
Apr 30 20:03:12 pm-1 cockpit-ws[6288]: ^ here
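(Editor's note: a possible check, assuming the 'vdo' module is expected to come from the community.general collection; the collection name is an assumption, not confirmed by the wizard's docs:
    $ ansible-doc -t module community.general.vdo          # does the module resolve at all?
    $ ansible-galaxy collection install community.general  # install the collection if it is missing
)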
2 years, 6 months
hosted engine ovirt-engine-appliance-4.5-20220419162115.1.el8.x86_64 doesn't deploy
by diego.ercolani@ssis.sm
I have a full installation of oVirt hosted-engine, but the deployment always stops with:
[ ERROR ] fatal: [localhost -> 192.168.222.15]: FAILED! => {"attempts": 30, "changed": false, "connection": "close", "content_encoding": "identity", "content_length": "86", "content_type": "text/html; charset=UTF-8", "date": "Sat, 30 Apr 2022 06:56:58 GMT", "elapsed": 0, "msg": "Status code was 500 and not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "server": "Apache/2.4.37 (centos) OpenSSL/1.1.1k mod_auth_gssapi/1.6.1", "status": 500, "url": "http://localhost/ovirt-engine/services/health"}
and it's true: I can access the engine via the temporary address (192.168.1.69 or 192.168.222.69), and issuing a curl:
[root@ovirt-engine ~]# curl http://localhost/ovirt-engine/services/health
<html><head><title>Error</title></head><body>500 - Internal Server Error</body></html>[root@ovirt-engine ~]#
it gives back an Internal Server Error.
I cannot complete the deployment; how is it possible to fix this?
Currently the engine is running. Should I try something from inside the engine?
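(Editor's note: a possible first step from inside the engine VM, assuming the standard engine log locations; the paths are assumptions, not taken from the failure output:
    [root@ovirt-engine ~]# tail -n 100 /var/log/ovirt-engine/server.log   # WildFly deployment errors usually end up here
    [root@ovirt-engine ~]# tail -n 100 /var/log/ovirt-engine/engine.log   # application-level errors
    [root@ovirt-engine ~]# systemctl status ovirt-engine                  # is the service itself up?
)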
Thank you
2 years, 6 months
HTTP Error 500: Internal Server Error
by yp414@163.com
Deploying ovirt-engine-appliance-4.5-20220419162115.1.el8.x86_64. The error is reported as follows:
[ INFO ] TASK [ovirt.ovirt.engine_setup : Make sure `ovirt-engine` service is running]
[ INFO ] ok: [localhost -> 192.168.222.237]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Run engine-config]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Restart engine after engine-config]
[ INFO ] skipping: [localhost]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Check if Engine health page is up]
[ ERROR ] fatal: [localhost -> 192.168.222.237]: FAILED! => {"attempts": 30, "changed": false, "connection": "close", "content_encoding": "identity", "content_length": "86", "content_type": "text/html; charset=UTF-8", "date": "Sat, 30 Apr 2022 05:26:11 GMT", "elapsed": 0, "msg": "Status code was 500 and not [200]: HTTP Error 500: Internal Server Error", "redirected": false, "server": "Apache/2.4.37 (centos) OpenSSL/1.1.1k mod_auth_gssapi/1.6.1", "status": 500, "url": "http://localhost/ovirt-engine/services/health"}
[ INFO ] TASK [ovirt.ovirt.engine_setup : Clean temporary files]
[ INFO ] changed: [localhost -> 192.168.222.237]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Sync on engine machine]
[ INFO ] changed: [localhost -> 192.168.222.237]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set destination directory path]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create destination directory]
[ INFO ] changed: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Find the local appliance image]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Set local_vm_disk_path]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Give the vm time to flush dirty buffers]
[ INFO ] ok: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Copy engine logs]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Change ownership of copied engine logs]
[ INFO ] changed: [localhost -> localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Notify the user about a failure]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "There was a failure deploying the engine on the local engine VM. The system may not be provisioned according to the playbook results: please check the logs for the issue, fix accordingly or re-deploy from scratch.\n"}
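(Editor's note: when the deploy fails at this point, the "Copy engine logs" task above has already pulled the engine VM's logs back to the host. A place to look, assuming the default hosted-engine-setup log directory; the exact engine-logs-* layout is an assumption:
    ls -R /var/log/ovirt-hosted-engine-setup/engine-logs-*/
    grep -ri 'ERROR' /var/log/ovirt-hosted-engine-setup/engine-logs-*/ | tail
)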
2 years, 6 months
Cannot start ovirt-engine java server - 500 Internal Server Error - after update to 4.4.10
by Jan Burian
Hi there,
I updated the hosted oVirt engine from within the VM: first I updated the OS (yum update) and rebooted the VM. After the VM booted again, I found out I could not access the GUI, and then learned that engine-setup has to be run even when updating between minor releases. So I put the cluster in global maintenance mode and ran engine-setup, which finished successfully.
But the GUI is still not accessible; I get a 500 Internal Server Error. Even after a reboot, or after running engine-setup again, the errors are the same. There is no error in engine.log, but there are errors in server.log.
I updated from ovirt-engine-4.4.9.5-1.el8.noarch to ovirt-engine-4.4.10.7-1.el8.noarch.
I tried to find a solution, but had no luck. The part of the error about engine.ear failing to start is common, but the Java "Caused by:" error looks different from anything I was able to find.
Is anyone able to tell where the problem lies, or offer any tips for a possible solution?
Thanks.
Regards,
Honza
The errors from server.log:
----------------------------------------------------------------------
2022-04-29 17:17:44,924+02 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 53) WFLYUT0021: Registered web context: '/ovirt-engine/webadmin' for server 'default-server'
2022-04-29 17:17:45,110+02 INFO [org.springframework.jdbc.support.SQLErrorCodesFactory] (ServerService Thread Pool -- 54) SQLErrorCodes loaded: [DB2, Derby, H2, HSQL, Informix, MS-SQL, MySQL, Oracle, PostgreSQL, Sybase, Hana]
2022-04-29 17:17:45,187+02 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 54) MSC000001: Failed to start service jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: org.jboss.msc.service.StartException in service jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
    at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.ComponentStartService$1.run(ComponentStartService.java:57)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at org.jboss.threads@2.4.0.Final//org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
    at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1990)
    at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1486)
    at org.jboss.threads@2.4.0.Final//org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
    at java.base/java.lang.Thread.run(Thread.java:829)
    at org.jboss.threads@2.4.0.Final//org.jboss.threads.JBossThread.run(JBossThread.java:513)
Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
    at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:163)
    at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:134)
    at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:88)
    at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.singleton.SingletonComponent.getComponentInstance(SingletonComponent.java:127)
    at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.singleton.SingletonComponent.start(SingletonComponent.java:141)
    at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.ComponentStartService$1.run(ComponentStartService.java:54)
    ... 8 more
Caused by: javax.ejb.EJBException: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@3d922f2d
    at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:239)
    at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:446)
    at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.LifecycleCMTTxInterceptor.processInvocation(LifecycleCMTTxInterceptor.java:70)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.as.weld@23.0.2.Final//org.jboss.as.weld.injection.WeldInjectionContextInterceptor.processInvocation(WeldInjectionContextInterceptor.java:43)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:60)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.component.singleton.StartupCountDownInterceptor.processInvocation(StartupCountDownInterceptor.java:25)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
    at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.BasicComponent.constructComponentInstance(BasicComponent.java:161)
    ... 13 more
Caused by: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@3d922f2d
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:85)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.postConstruct(DefaultLifecycleCallbackInvoker.java:66)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.BasicInjectionTarget.postConstruct(BasicInjectionTarget.java:122)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.bean.ManagedBean.create(ManagedBean.java:174)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.contexts.AbstractContext.get(AbstractContext.java:96)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.bean.ContextualInstanceStrategy$DefaultContextualInstanceStrategy.get(ContextualInstanceStrategy.java:100)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.bean.ContextualInstanceStrategy$ApplicationScopedContextualInstanceStrategy.get(ContextualInstanceStrategy.java:140)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.bean.ContextualInstance.get(ContextualInstance.java:50)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.manager.BeanManagerImpl.getReference(BeanManagerImpl.java:694)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.manager.BeanManagerImpl.getInjectableReference(BeanManagerImpl.java:794)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.FieldInjectionPoint.inject(FieldInjectionPoint.java:92)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.util.Beans.injectBoundFields(Beans.java:336)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.util.Beans.injectFieldsAndInitializers(Beans.java:347)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.DefaultInjector$1.proceed(DefaultInjector.java:71)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.InjectionContextImpl.run(InjectionContextImpl.java:48)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.DefaultInjector.inject(DefaultInjector.java:73)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.module.ejb.DynamicInjectionPointInjector.inject(DynamicInjectionPointInjector.java:61)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.module.ejb.SessionBeanInjectionTarget.inject(SessionBeanInjectionTarget.java:138)
    at org.jboss.as.weld@23.0.2.Final//org.jboss.as.weld.injection.WeldInjectionContext.inject(WeldInjectionContext.java:39)
    at org.jboss.as.weld@23.0.2.Final//org.jboss.as.weld.injection.WeldInjectionInterceptor.processInvocation(WeldInjectionInterceptor.java:51)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.AroundConstructInterceptorFactory$1.processInvocation(AroundConstructInterceptorFactory.java:28)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.as.weld@23.0.2.Final//org.jboss.as.weld.injection.WeldInterceptorInjectionInterceptor.processInvocation(WeldInterceptorInjectionInterceptor.java:56)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.ComponentInstantiatorInterceptor.processInvocation(ComponentInstantiatorInterceptor.java:74)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.as.weld@23.0.2.Final//org.jboss.as.weld.interceptors.Jsr299BindingsCreateInterceptor.processInvocation(Jsr299BindingsCreateInterceptor.java:111)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.as.ee@23.0.2.Final//org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
    at org.jboss.invocation@1.6.0.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
    at org.jboss.as.ejb3@23.0.2.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:232)
    ... 28 more
Caused by: java.lang.reflect.InvocationTargetException
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.jboss.weld.core@3.1.6.Final//org.jboss.weld.injection.producer.DefaultLifecycleCallbackInvoker.invokeMethods(DefaultLifecycleCallbackInvoker.java:83)
    ... 59 more
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Unable to determine the correct call signature - no procedure/function/signature for 'gettagsbyparent_id'
    at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.GenericCallMetaDataProvider.processProcedureColumns(GenericCallMetaDataProvider.java:362)
    at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.GenericCallMetaDataProvider.initializeWithProcedureColumnMetaData(GenericCallMetaDataProvider.java:114)
    at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory.lambda$createMetaDataProvider$0(CallMetaDataProviderFactory.java:127)
    at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:324)
    at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory.createMetaDataProvider(CallMetaDataProviderFactory.java:70)
    at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataContext.initializeMetaData(CallMetaDataContext.java:252)
    at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.simple.AbstractJdbcCall.compileInternal(AbstractJdbcCall.java:313)
    at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.compileInternal(PostgresDbEngineDialect.java:106)
    at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.simple.AbstractJdbcCall.compile(AbstractJdbcCall.java:296)
    at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.getCall(SimpleJdbcCallsHandler.java:157)
    at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:134)
    at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:105)
    at org.ovirt.engine.core.dal//org.ovirt.engine.core.dao.TagDaoImpl.getAllForParent(TagDaoImpl.java:82)
    at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.addChildren(TagsDirector.java:116)
    at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.init(TagsDirector.java:75)
    ... 64 more
2022-04-29 17:17:47,922+02 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 56) WFLYUT0021: Registered web context: '/ovirt-engine/sso' for server 'default-server'
2022-04-29 17:17:47,929+02 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "engine.ear")]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.deployment.subunit.\"engine.ear\".\"bll.jar\".component.Backend.START" => "java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
Caused by: javax.ejb.EJBException: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@3d922f2d
Caused by: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@3d922f2d
Caused by: java.lang.reflect.InvocationTargetException
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Unable to determine the correct call signature - no procedure/function/signature for 'gettagsbyparent_id'"}}
2022-04-29 17:17:47,948+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 25) WFLYSRV0010: Deployed "ovirt-web-ui.war" (runtime-name : "ovirt-web-ui.war")
2022-04-29 17:17:48,020+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 25) WFLYSRV0010: Deployed "apidoc.war" (runtime-name : "apidoc.war")
2022-04-29 17:17:48,020+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 25) WFLYSRV0010: Deployed "restapi.war" (runtime-name : "restapi.war")
2022-04-29 17:17:48,020+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 25) WFLYSRV0010: Deployed "engine.ear" (runtime-name : "engine.ear")
2022-04-29 17:17:48,031+02 INFO [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0183: Service status report
WFLYCTL0186: Services which failed to start: service jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
WFLYCTL0448: 2 additional services are down due to their dependencies being missing or failed
2022-04-29 17:17:48,076+02 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
2022-04-29 17:17:48,081+02 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: WildFly Full 23.0.2.Final (WildFly Core 15.0.1.Final) started (with errors) in 15315ms - Started 1668 of 1888 services (6 services failed or missing dependencies, 393 services are lazy, passive or on-demand)
2022-04-29 17:17:48,083+02 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:8706/management
2022-04-29 17:17:48,083+02 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:8706
----------------------------------------------------------------------
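(Editor's note: one way to confirm whether the procedure is really missing from the database, assuming the default database name "engine" and that psql is run as the postgres user; both are assumptions:
    su - postgres -c "psql engine -c '\df gettagsbyparent_id'"
If the function does not show up, that points to an incomplete schema upgrade rather than a WildFly problem.)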
2 years, 6 months
Remove ovirt 4.4 repos after upgrade on the engine?
by David White
Hello,
After an update from 4.4 to 4.5 on the Engine, I noticed that the 4.4 repos still exist:
ovirt-4.4                                                          Latest oVirt 4.4 Release
ovirt-4.4-centos-advanced-virtualization                           CentOS-8 - Advanced Virtualization
ovirt-4.4-centos-ceph-pacific                                      CentOS-8-stream - Ceph Pacific
ovirt-4.4-centos-gluster8                                          CentOS-8-stream - Gluster 8
ovirt-4.4-centos-nfv-openvswitch                                   CentOS-8 - NFV OpenvSwitch
ovirt-4.4-centos-openstack-victoria                                CentOS-8 - OpenStack victoria
ovirt-4.4-centos-opstools                                          CentOS-8 - OpsTools - collectd
ovirt-4.4-centos-opstools-vault                                    CentOS-8 - OpsTools - collectd - vault
ovirt-4.4-centos-ovirt44                                           CentOS-8 - oVirt 4.4
ovirt-4.4-copr:copr.fedorainfracloud.org:sac:gluster-ansible       Copr repo for gluster-ansible owned by sac
ovirt-4.4-copr:copr.fedorainfracloud.org:sbonazzo:EL8_collection   Copr repo for EL8_collection owned by sbonazzo
ovirt-4.4-epel                                                     Extra Packages for Enterprise Linux 8 - x86_64
ovirt-4.4-virtio-win-latest                                        virtio-win builds roughly matching what will be shipped in upcoming RHEL
Is it safe to run "yum autoremove" on the Engine, followed by removing all of the 4.4 repositories from /etc/yum.repos.d/ ?
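(Editor's note: a sketch of one possible cleanup, assuming the 4.4 repo files were installed by an ovirt-release44 package; the package and file names are assumptions, so check with rpm first:
    rpm -qf /etc/yum.repos.d/ovirt-4.4.repo   # which package owns the repo file?
    dnf remove ovirt-release44                # remove it only if the query above confirms the owner
    dnf repolist                              # verify that only 4.5 repos remain
)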
2 years, 6 months