Issue upgrading 4.4 to 4.5 Gluster HCG
by Abe E
Hey All,
I am having an issue upgrading from 4.4 to 4.5.
My setup:
3 Node Gluster (Cluster 1) + 3 Node Cluster (Cluster 2)
If I recall correctly, this is the process I followed last week:
On all my Nodes:
dnf install -y centos-release-ovirt45 --enablerepo=extras
On Ovirt Engine:
dnf install -y centos-release-ovirt45
dnf update -y --nobest
engine-setup
Once the engine was upgraded successfully, I ran the upgrade from the GUI on the Cluster 2 nodes one by one, but when they came back they complained of "Host failed to attach one of the Storage Domains attached to it." The domains in question are "hosted_storage" and "data" (gluster).
I thought maybe it's due to the fact that 4.5 brings an update to the glusterfs version, so I decided to upgrade Node 3 in my Gluster cluster, and it booted into emergency mode even though the install "succeeded".
I feel like I did something wrong, aside from my bravery in upgrading this much before realizing something's not right.
My VDSM logs from one of the nodes that fails to connect to storage (FYI, I have two networks, one for mgmt and one for storage, and both are up):
[root@ovirt-4 ~]# tail -f /var/log/vdsm/vdsm.log
2022-04-25 22:41:31,584-0600 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, task_id=8370855e-dea6-4168-870a-d6235d9044e9 (api:54)
2022-04-25 22:41:31,584-0600 INFO (jsonrpc/3) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, task_id=14eb199a-7fbf-4638-a6bf-a384dfbb9d2c (api:48)
2022-04-25 22:41:31,584-0600 INFO (jsonrpc/3) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, task_id=14eb199a-7fbf-4638-a6bf-a384dfbb9d2c (api:54)
2022-04-25 22:41:31,602-0600 INFO (periodic/1) [vdsm.api] START repoStats(domains=()) from=internal, task_id=08a5c00b-1f66-493f-a408-d4006ddaa959 (api:48)
2022-04-25 22:41:31,603-0600 INFO (periodic/1) [vdsm.api] FINISH repoStats return={} from=internal, task_id=08a5c00b-1f66-493f-a408-d4006ddaa959 (api:54)
2022-04-25 22:41:31,606-0600 INFO (jsonrpc/3) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:35,393-0600 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:35,393-0600 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:39,366-0600 INFO (jsonrpc/2) [api.host] START getAllVmStats() from=::1,53634 (api:48)
2022-04-25 22:41:39,366-0600 INFO (jsonrpc/2) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::1,53634 (api:54)
2022-04-25 22:41:46,530-0600 INFO (jsonrpc/1) [api.host] START getStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:46,568-0600 INFO (jsonrpc/1) [vdsm.api] START repoStats(domains=()) from=::ffff:172.17.117.80,38712, task_id=30404767-9761-4f8c-884a-5561dd0d82fe (api:48)
2022-04-25 22:41:46,568-0600 INFO (jsonrpc/1) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, task_id=30404767-9761-4f8c-884a-5561dd0d82fe (api:54)
2022-04-25 22:41:46,569-0600 INFO (jsonrpc/1) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, task_id=8dbfa47f-e1b7-408c-a060-8d45012f0b90 (api:48)
2022-04-25 22:41:46,569-0600 INFO (jsonrpc/1) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, task_id=8dbfa47f-e1b7-408c-a060-8d45012f0b90 (api:54)
2022-04-25 22:41:46,574-0600 INFO (jsonrpc/1) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:46,651-0600 INFO (periodic/0) [vdsm.api] START repoStats(domains=()) from=internal, task_id=92c69020-d0b1-4813-8610-3f3e1892c20b (api:48)
2022-04-25 22:41:46,652-0600 INFO (periodic/0) [vdsm.api] FINISH repoStats return={} from=internal, task_id=92c69020-d0b1-4813-8610-3f3e1892c20b (api:54)
2022-04-25 22:41:50,397-0600 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:50,397-0600 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:52,533-0600 INFO (jsonrpc/4) [api.host] START getCapabilities() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:54,382-0600 INFO (jsonrpc/0) [api.host] START getAllVmStats() from=::1,53634 (api:48)
2022-04-25 22:41:54,382-0600 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::1,53634 (api:54)
2022-04-25 22:41:55,037-0600 INFO (jsonrpc/4) [root] /usr/libexec/vdsm/hooks/after_get_caps/ovirt_provider_ovn_hook: rc=0 err=b'' (hooks:123)
2022-04-25 22:41:55,039-0600 INFO (jsonrpc/4) [api.host] FINISH getCapabilities return={'status': {'code': 0, 'message': 'Done'}, 'info': {'kvmEnabled': 'true', 'cpuCores': '6', 'cpuThreads': '12', 'cpuSockets': '1', 'onlineCpus': '0,1,2,3,4,5,6,7,8,9,10,11', 'cpuTopology': [{'cpu_id': 0, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 0}, {'cpu_id': 1, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 1}, {'cpu_id': 2, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 2}, {'cpu_id': 3, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 3}, {'cpu_id': 4, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 4}, {'cpu_id': 5, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 5}, {'cpu_id': 6, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 0}, {'cpu_id': 7, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 1}, {'cpu_id': 8, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 2}, {'cpu_id': 9, 'numa_cell_id'
: 0, 'socket_id': 0, 'die_id': 0, 'core_id': 3}, {'cpu_id': 10, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 4}, {'cpu_id': 11, 'numa_cell_id': 0, 'socket_id': 0, 'die_id': 0, 'core_id': 5}], 'cpuSpeed': '2500.000', 'cpuModel': 'Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz', 'cpuFlags': 'pdcm,xsaveopt,dtes64,xtpr,clflush,de,ibpb,popcnt,cpuid,ida,monitor,amd-stibp,x2apic,lm,arat,pse36,tsc_deadline_timer,fxsr,ht,skip-l1dfl-vmentry,est,pcid,aperfmperf,nopl,apic,mce,xsave,ibrs,flush_l1d,dtherm,dts,flexpriority,pse,pdpe1gb,pni,sse2,pge,cx16,pschange-mc-no,bts,rdtscp,dca,avx,hypervisor,tsc,tsc_adjust,nx,mmx,pebs,ss,umip,xtopology,vnmi,arch-capabilities,pae,pclmulqdq,tm,aes,invtsc,md_clear,ssse3,amd-ssbd,ssbd,sse4_1,smx,rep_good,vmx,cx8,sse,arch_perfmon,msr,stibp,nonstop_tsc,pti,ds_cpl,mca,cmov,md-clear,fpu,lahf_lm,tm2,sep,tpr_shadow,constant_tsc,pbe,pat,syscall,sse4_2,pln,acpi,mtrr,pts,vme,ept,vpid,spec_ctrl,model_pentium,model_Nehalem,model_486,model_SandyBridge,model_pentium2,
model_Opteron_G1,model_Nehalem-IBRS,model_qemu32,model_kvm32,model_coreduo,model_Westmere,model_SandyBridge-IBRS,model_Westmere-IBRS,model_Penryn,model_pentium3,model_qemu64,model_Conroe,model_kvm64,model_core2duo,model_Opteron_G2', 'vdsmToCpusAffinity': [1], 'version_name': 'Snow Man', 'software_version': '4.50.0.13', 'software_revision': '1', 'supportedENGINEs': ['4.2', '4.3', '4.4', '4.5'], 'clusterLevels': ['4.2', '4.3', '4.4', '4.5', '4.6', '4.7'], 'networks': {'ovirtmgmt': {'ports': ['bond0'], 'stp': 'off', 'iface': 'ovirtmgmt', 'bridged': True, 'addr': '172.17.117.74', 'netmask': '255.255.255.0', 'ipv4addrs': ['172.17.117.74/24'], 'ipv6addrs': [], 'ipv6autoconf': True, 'gateway': '172.17.117.1', 'ipv6gateway': '::', 'ipv4defaultroute': True, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0', 'dhcpv4': False, 'dhcpv6': True}, 'LabNet-v106': {'ports': ['bond0.106'], 'stp': 'off', 'iface': 'LabNet-v106', 'bridged': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6ad
drs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.106', 'vlanid': 106, 'dhcpv4': False, 'dhcpv6': False}, 'PIP_V991': {'ports': ['bond0.991'], 'stp': 'off', 'iface': 'PIP_V991', 'bridged': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.991', 'vlanid': 991, 'dhcpv4': False, 'dhcpv6': False}, 'NetEng-V3101': {'ports': ['bond0.3101'], 'stp': 'off', 'iface': 'NetEng-V3101', 'bridged': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.3101', 'vlanid': 3101, 'dhcpv4': False, 'dhcpv6': False}, 'OVIRT-VMs': {'ports': ['bond0.177'], 'stp': 'off', 'iface': 'OVIRT-VMs', 'bridged
': True, 'addr': '', 'netmask': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond0.177', 'vlanid': 177, 'dhcpv4': False, 'dhcpv6': False}, 'Gluster_Net': {'ports': ['bond1'], 'stp': 'off', 'iface': 'Gluster_Net', 'bridged': True, 'addr': '172.17.181.13', 'netmask': '255.255.255.0', 'ipv4addrs': ['172.17.181.13/24'], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '172.17.181.1', 'ipv6gateway': '::', 'ipv4defaultroute': False, 'mtu': '1500', 'switch': 'legacy', 'southbound': 'bond1', 'dhcpv4': False, 'dhcpv6': False}}, 'bondings': {'bond0': {'hwaddr': 'c8:1f:66:f6:e5:48', 'slaves': ['eno1', 'eno2'], 'active_slave': '', 'opts': {'mode': '4', 'xmit_hash_policy': '2'}, 'ad_aggregator_id': '1', 'ad_partner_mac': '4c:4e:35:26:2c:00', 'switch': 'legacy', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'm
tu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'speed': 2000}, 'bond1': {'hwaddr': 'c8:1f:66:f6:e5:4a', 'slaves': ['eno4', 'eno3'], 'active_slave': '', 'opts': {'mode': '4', 'xmit_hash_policy': '2'}, 'ad_aggregator_id': '1', 'ad_partner_mac': '4c:4e:35:26:2c:00', 'switch': 'legacy', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'speed': 2000}}, 'bridges': {'Gluster_Net': {'ports': ['bond1'], 'stp': 'off', 'addr': '172.17.181.13', 'ipv4addrs': ['172.17.181.13/24'], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '172.17.181.1', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '148', 'hello_time': '200', 'multicast_router': '1', 'nf_call_ip
tables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e54a', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e54a', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_
stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'ovirtmgmt': {'ports': ['bond0'], 'stp': 'off', 'addr': '172.17.117.74', 'ipv4addrs': ['172.17.117.74/24'], 'ipv6addrs': [], 'ipv6autoconf': True, 'gateway': '172.17.117.1', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4defaultroute': True, 'dhcpv4': False, 'dhcpv6': True, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '148', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_r
esponse_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'LabNet-v106': {'ports': ['bond0.106'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '1
8247', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_i
nterval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'NetEng-V3101': {'ports': ['bond0.3101'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '6208', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0'
, 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'PIP_V991': {'ports': ['bond0.991'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0
', 'gc_timer': '15686', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multi
cast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}, 'OVIRT-VMs': {'ports': ['bond0.177'], 'stp': 'off', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False, 'opts': {'ageing_time': '30000', 'multicast_query_use_ifaddr': '0', 'gc_timer': '20761', 'hello_time': '200', 'multicast_router': '1', 'nf_call_iptables': '0', 'group_addr': '01:80:c2:00:00:00', 'group_fwd_mask': '0x0', 'multicast_querier': '0', 'hash_max': '4096', 'tcn_timer': '0', 'vlan_protocol': '0x8100', 'vlan_stats_per_port': '0', 'root_port': '0', 'vlan_filtering': '0', 'multicast_query_interval': '12500', 'bridge_id': '8000.c81f66f6e548', 'max_age': '2000', 'nf_call_arptables': '0', 'multicast_startup_query_interval': '3125', 'multicast_stats
_enabled': '0', 'multicast_query_response_interval': '1000', 'topology_change': '0', 'priority': '32768', 'multicast_mld_version': '1', 'hash_elasticity': '16', 'hello_timer': '0', 'default_pvid': '1', 'root_path_cost': '0', 'multicast_igmp_version': '2', 'stp_state': '0', 'multicast_startup_query_count': '2', 'topology_change_detected': '0', 'multicast_last_member_interval': '100', 'topology_change_timer': '0', 'root_id': '8000.c81f66f6e548', 'no_linklocal_learn': '0', 'forward_delay': '1500', 'multicast_membership_interval': '26000', 'multicast_querier_interval': '25500', 'vlan_stats_enabled': '0', 'multicast_snooping': '1', 'nf_call_ip6tables': '0', 'multicast_last_member_count': '2'}}}, 'nics': {'eno3': {'hwaddr': 'c8:1f:66:f6:e5:4a', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:4a', 'dhcpv4': False, 'dhcpv6':
False, 'speed': 1000}, 'eno4': {'hwaddr': 'c8:1f:66:f6:e5:4a', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:4b', 'dhcpv4': False, 'dhcpv6': False, 'speed': 1000}, 'eno1': {'hwaddr': 'c8:1f:66:f6:e5:48', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:48', 'dhcpv4': False, 'dhcpv6': False, 'speed': 1000}, 'eno2': {'hwaddr': 'c8:1f:66:f6:e5:48', 'ad_aggregator_id': '1', 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'permhwaddr': 'c8:1f:66:f6:e5:49', 'dhcpv4': False, 'dhcpv6': False, 'speed': 1000}}, 'vlans': {'bond0.106': {'iface': 'bo
nd0', 'vlanid': 106, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}, 'bond0.991': {'iface': 'bond0', 'vlanid': 991, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}, 'bond0.177': {'iface': 'bond0', 'vlanid': 177, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}, 'bond0.3101': {'iface': 'bond0', 'vlanid': 3101, 'addr': '', 'ipv4addrs': [], 'ipv6addrs': [], 'ipv6autoconf': False, 'gateway': '', 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ipv4defaultroute': False, 'dhcpv4': False, 'dhcpv6': False}}, 'nameservers': ['8.8.8.8'], 'supportsIPv6':
True, 'ovnConfigured': False, 'hooks': {'before_vm_start': {'50_hostedengine': {'checksum': 'e5f5262cf22e06cd34e227afb27647e479351266876019a64210dbcbd2a43830'}}, 'after_get_caps': {'ovirt_provider_ovn_hook': {'checksum': 'a2bdefca38b96c8ddab39822cc8282bf3f67d875c4879003ffc9661826c92421'}}, 'before_device_create': {'10_ovirt_provider_ovn_hook': {'checksum': 'b9d67afb41bd86a521ada2121e3505297b3b6dfd478275ce0bd9501fcda6dafc'}}, 'before_nic_hotplug': {'10_ovirt_provider_ovn_hook': {'checksum': 'b9d67afb41bd86a521ada2121e3505297b3b6dfd478275ce0bd9501fcda6dafc'}}}, 'operatingSystem': {'release': '1.el8', 'version': '8.6.2203.0', 'name': 'RHEL', 'pretty_name': 'oVirt Node 4.5.0'}, 'uuid': '4c4c4544-0053-5810-8052-c8c04f563132', 'packages2': {'kernel': {'version': '4.18.0', 'release': '373.el8.x86_64'}, 'glusterfs-cli': {'version': '10.1', 'release': '1.el8s'}, 'librbd1': {'version': '16.2.7', 'release': '1.el8s'}, 'libvirt': {'version': '8.0.0', 'release': '2.module_el8.6.0+1087+b42c8331'}
, 'mom': {'version': '0.6.2', 'release': '1.el8'}, 'ovirt-hosted-engine-ha': {'version': '2.5.0', 'release': '1.el8'}, 'openvswitch': {'version': '2.15', 'release': '3.el8'}, 'nmstate': {'version': '1.2.1', 'release': '1.el8'}, 'qemu-img': {'version': '6.2.0', 'release': '5.module_el8.6.0+1087+b42c8331'}, 'qemu-kvm': {'version': '6.2.0', 'release': '5.module_el8.6.0+1087+b42c8331'}, 'spice-server': {'version': '0.14.3', 'release': '4.el8'}, 'vdsm': {'version': '4.50.0.13', 'release': '1.el8'}, 'glusterfs': {'version': '10.1', 'release': '1.el8s'}, 'glusterfs-fuse': {'version': '10.1', 'release': '1.el8s'}, 'glusterfs-geo-replication': {'version': '10.1', 'release': '1.el8s'}, 'glusterfs-server': {'version': '10.1', 'release': '1.el8s'}}, 'realtimeKernel': False, 'kernelArgs': 'BOOT_IMAGE=(hd0,msdos1)//ovirt-node-ng-4.5.0-0.20220420.0+1/vmlinuz-4.18.0-373.el8.x86_64 crashkernel=auto resume=/dev/mapper/onn-swap rd.lvm.lv=onn/ovirt-node-ng-4.5.0-0.20220420.0+1 rd.lvm.lv=onn/swap rhgb q
uiet kvm-intel.nested=1 root=/dev/onn/ovirt-node-ng-4.5.0-0.20220420.0+1 boot=UUID=adb2035d-5047-471d-8b51-206e0afb39f4 rootflags=discard img.bootid=ovirt-node-ng-4.5.0-0.20220420.0+1', 'nestedVirtualization': True, 'emulatedMachines': ['pc-q35-rhel8.6.0', 'pc-i440fx-rhel7.1.0', 'pc-q35-rhel8.2.0', 'pc-q35-rhel7.6.0', 'pc-i440fx-rhel7.3.0', 'pc-i440fx-rhel7.6.0', 'pc-q35-rhel8.5.0', 'pc-q35-rhel8.0.0', 'pc-i440fx-rhel7.2.0', 'pc', 'pc-q35-rhel7.3.0', 'pc-i440fx-rhel7.4.0', 'q35', 'pc-i440fx-2.11', 'pc-q35-rhel7.4.0', 'pc-i440fx-rhel7.5.0', 'pc-i440fx-rhel7.0.0', 'pc-q35-rhel7.5.0', 'pc-i440fx-4.2', 'pc-q35-rhel8.3.0', 'pc-q35-rhel8.1.0', 'pc-q35-rhel8.4.0'], 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:ea2a5da196cb', 'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:ea2a5da196cb'}], 'FC': []}, 'vmTypes': ['kvm'], 'memSize': '43996', 'reservedMem': '321', 'guestOverhead': '65', 'rngSources': ['random', 'hwrng'], 'numaNodes': {'0': {'totalMemory': '43996', 'hugepag
es': {'4': {'totalPages': '11263209'}, '2048': {'totalPages': '0'}, '1048576': {'totalPages': '0'}}, 'cpus': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]}}, 'numaNodeDistance': {'0': [10]}, 'autoNumaBalancing': 2, 'selinux': {'mode': '1'}, 'liveSnapshot': 'true', 'liveMerge': 'true', 'kdumpStatus': 0, 'deferred_preallocation': True, 'hostdevPassthrough': 'false', 'additionalFeatures': ['libgfapi_supported', 'GLUSTER_SNAPSHOT', 'GLUSTER_GEO_REPLICATION', 'GLUSTER_BRICK_MANAGEMENT'], 'hostedEngineDeployed': False, 'hugepages': [2048, 1048576], 'kernelFeatures': {'SPECTRE_V2': '(Mitigation: Full generic retpoline, IBPB: conditional, IBRS_FW, STIBP: conditional, RSB filling)', 'ITLB_MULTIHIT': '(KVM: Mitigation: VMX disabled)', 'MDS': '(Mitigation: Clear CPU buffers; SMT vulnerable)', 'L1TF': '(Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable)', 'SPEC_STORE_BYPASS': '(Mitigation: Speculative Store Bypass disabled via prctl and seccomp)', 'TSX_ASYNC_ABORT': '(Not affec
ted)', 'SPECTRE_V1': '(Mitigation: usercopy/swapgs barriers and __user pointer sanitization)', 'SRBDS': '(Not affected)', 'MELTDOWN': '(Mitigation: PTI)'}, 'vncEncrypted': True, 'backupEnabled': True, 'coldBackupEnabled': True, 'clearBitmapsEnabled': True, 'fipsEnabled': False, 'boot_uuid': 'adb2035d-5047-471d-8b51-206e0afb39f4', 'tscFrequency': '1999999000', 'tscScaling': False, 'connector_info': {'platform': 'x86_64', 'os_type': 'linux', 'ip': None, 'host': 'ovirt-4.[removed].com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ea2a5da196cb', 'do_local_attach': False, 'uuid': '215601b1-e536-4258-ad35-d1f869afa0f8', 'system uuid': '4c4c4544-0053-5810-8052-c8c04f563132', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:00000000-0000-0000-0000-000000000000', 'found_dsc': ''}, 'domain_versions': [0, 2, 3, 4, 5], 'supported_block_size': {'FCP': [512], 'GLUSTERFS': [0, 512, 4096], 'ISCSI': [512], 'LOCALFS': [0, 512, 4096], 'NFS': [512], 'POSIXFS': [512]}, 'cd_change_pdiv': True, 'refres
h_disk_supported': True, 'replicate_extend': True, 'measure_subchain': True, 'measure_active': True, 'mailbox_events': True, 'netConfigDirty': 'False', 'openstack_binding_host_ids': {'OVIRT_PROVIDER_OVN': 'eaa82268-bd08-453f-9953-b4aad4c4c307'}, 'lastClientIface': 'ovirtmgmt'}} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:55,046-0600 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getCapabilities took more than 1.00 seconds to succeed: 2.51 (__init__:316)
2022-04-25 22:41:55,058-0600 INFO (jsonrpc/7) [api.host] START getHardwareInfo() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:41:55,058-0600 INFO (jsonrpc/7) [api.host] FINISH getHardwareInfo return={'status': {'code': 0, 'message': 'Done'}, 'info': {'systemManufacturer': ' ', 'systemProductName': ' ', 'systemVersion': '', 'systemSerialNumber': 'HSXRV12', 'systemUUID': '4C4C4544-0053-5810-8052-C8C04F563132', 'systemFamily': ''}} from=::ffff:172.17.117.80,38712 (api:54)
2022-04-25 22:41:55,121-0600 INFO (jsonrpc/3) [api.host] START getStats() from=::ffff:172.17.117.80,38712, flow_id=610b2f2d (api:48)
2022-04-25 22:41:55,160-0600 INFO (jsonrpc/3) [vdsm.api] START repoStats(domains=()) from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=de30e54a-21c6-46ff-a669-230145f93ae2 (api:48)
2022-04-25 22:41:55,160-0600 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=de30e54a-21c6-46ff-a669-230145f93ae2 (api:54)
2022-04-25 22:41:55,161-0600 INFO (jsonrpc/3) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=b751b6d0-bc99-457b-9620-4ffaaaa37c91 (api:48)
2022-04-25 22:41:55,161-0600 INFO (jsonrpc/3) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, flow_id=610b2f2d, task_id=b751b6d0-bc99-457b-9620-4ffaaaa37c91 (api:54)
2022-04-25 22:41:55,166-0600 INFO (jsonrpc/3) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712, flow_id=610b2f2d (api:54)
2022-04-25 22:41:55,346-0600 INFO (jsonrpc/5) [vdsm.api] START connectStorageServer(domType=7, spUUID='3115b136-9981-11ec-90ae-00163e6f31f1', conList=[{'password': '********', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 'backup-volfile-servers=gluster-2.[removed].com:gluster-3.[removed].com', 'iqn': '', 'connection': 'gluster-1.[removed].com:/engine', 'ipv6_enabled': 'false', 'id': '98ea63c0-e8c4-4857-8ef7-3cb256d45997', 'user': '', 'tpgt': '1'}]) from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=6a175be8-fdd0-4d2a-8b79-1866aff21159 (api:48)
2022-04-25 22:41:55,478-0600 ERROR (jsonrpc/5) [storage.storageServer] Could not connect to storage server (storageServer:92)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 90, in connect_all
    con.connect()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 233, in connect
    self.validate()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 365, in validate
    if not self.volinfo:
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 352, in volinfo
    self._volinfo = self._get_gluster_volinfo()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 405, in _get_gluster_volinfo
    self._volfileserver)
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
    return callMethod()
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeInfo
  File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
    raise convert_to_error(kind, result)
vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() err=[b'<cliOutput>\n <opRet>0</opRet>\n <opErrno>0</opErrno>\n <opErrstr />\n <volInfo>\n <volumes>\n <volume>\n <name>engine</name>\n <id>51bb4ddb-dfbc-4376-85cd-d7070e287946</id>\n <status>1</status>\n <statusStr>Started</statusStr>\n <snapshotCount>0</snapshotCount>\n <brickCount>3</brickCount>\n <distCount>1</distCount>\n <replicaCount>3</replicaCount>\n <arbiterCount>0</arbiterCount>\n <disperseCount>0</disperseCount>\n <redundancyCount>0</redundancyCount>\n <type>2</type>\n <typeStr>Replicate</typeStr>\n <transport>0</transport>\n <bricks>\n <brick uuid="1679e76e-938c-4da4-b7f7-5161c5badcd3">gluster-1.[removed].com:/gluster_bricks/engine/engine<name>gluster-1.[removed].com:/gluster_bricks/engine/engine</name><hostUuid>1679e76e-938c-4da4-b7f7-5161c5badcd3</hostUuid><isArbiter>0</isArb
iter></brick>\n <brick uuid="530faa98-b564-45d5-8495-8e4006a7628c">gluster-2.[removed].com:/gluster_bricks/engine/engine<name>gluster-2.[removed].com:/gluster_bricks/engine/engine</name><hostUuid>530faa98-b564-45d5-8495-8e4006a7628c</hostUuid><isArbiter>0</isArbiter></brick>\n <brick uuid="06c45945-8a79-476b-9a02-483113191e69">gluster-3.[removed].com:/gluster_bricks/engine/engine<name>gluster-3.[removed].com:/gluster_bricks/engine/engine</name><hostUuid>06c45945-8a79-476b-9a02-483113191e69</hostUuid><isArbiter>0</isArbiter></brick>\n </bricks>\n <optCount>31</optCount>\n <options>\n <option>\n <name>cluster.granular-entry-heal</name>\n <value>enable</value>\n </option>\n <option>\n <name>storage.owner-gid</name>\n <value>36</value>\n </option>\n <option>\n <name>storage.owner-uid</name>\n <value>36</value>\n </option>\n
<option>\n <name>cluster.lookup-optimize</name>\n <value>off</value>\n </option>\n <option>\n <name>server.keepalive-count</name>\n <value>5</value>\n </option>\n <option>\n <name>server.keepalive-interval</name>\n <value>2</value>\n </option>\n <option>\n <name>server.keepalive-time</name>\n <value>10</value>\n </option>\n <option>\n <name>server.tcp-user-timeout</name>\n <value>20</value>\n </option>\n <option>\n <name>network.ping-timeout</name>\n <value>30</value>\n </option>\n <option>\n <name>server.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>client.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>clu
ster.choose-local</name>\n <value>off</value>\n </option>\n <option>\n <name>user.cifs</name>\n <value>off</value>\n </option>\n <option>\n <name>features.shard</name>\n <value>on</value>\n </option>\n <option>\n <name>cluster.shd-wait-qlength</name>\n <value>10000</value>\n </option>\n <option>\n <name>cluster.shd-max-threads</name>\n <value>8</value>\n </option>\n <option>\n <name>cluster.locking-scheme</name>\n <value>granular</value>\n </option>\n <option>\n <name>cluster.data-self-heal-algorithm</name>\n <value>full</value>\n </option>\n <option>\n <name>cluster.server-quorum-type</name>\n <value>server</value>\n </option>\n <option>\n <name>cluster.quorum-type</n
ame>\n <value>auto</value>\n </option>\n <option>\n <name>cluster.eager-lock</name>\n <value>enable</value>\n </option>\n <option>\n <name>performance.strict-o-direct</name>\n <value>on</value>\n </option>\n <option>\n <name>network.remote-dio</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.low-prio-threads</name>\n <value>32</value>\n </option>\n <option>\n <name>performance.io-cache</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.read-ahead</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.quick-read</name>\n <value>off</value>\n </option>\n <option>\n <name>storage.fips-mode-rchecksum</name>\n
<value>on</value>\n </option>\n <option>\n <name>transport.address-family</name>\n <value>inet</value>\n </option>\n <option>\n <name>nfs.disable</name>\n <value>on</value>\n </option>\n <option>\n <name>performance.client-io-threads</name>\n <value>on</value>\n </option>\n </options>\n </volume>\n <count>1</count>\n </volumes>\n </volInfo>\n</cliOutput>']
2022-04-25 22:41:55,478-0600 INFO (jsonrpc/5) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-25 22:41:55,478-0600 INFO (jsonrpc/5) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '98ea63c0-e8c4-4857-8ef7-3cb256d45997', 'status': 4106}]} from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=6a175be8-fdd0-4d2a-8b79-1866aff21159 (api:54)
2022-04-25 22:41:55,515-0600 INFO (jsonrpc/2) [vdsm.api] START connectStorageServer(domType=7, spUUID='3115b136-9981-11ec-90ae-00163e6f31f1', conList=[{'password': '********', 'vfs_type': 'glusterfs', 'port': '', 'mnt_options': 'backup-volfile-servers=gluster-2.[removed].com:gluster-3.[removed].com', 'iqn': '', 'connection': 'gluster-1.[removed].com:/data', 'ipv6_enabled': 'false', 'id': '170e1dda-af02-4ff3-808b-16dc814e750a', 'user': '', 'tpgt': '1'}]) from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=9d0ded6d-e19e-41f8-85b4-52b18bd3380d (api:48)
2022-04-25 22:41:55,647-0600 ERROR (jsonrpc/2) [storage.storageServer] Could not connect to storage server (storageServer:92)
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 90, in connect_all
con.connect()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 233, in connect
self.validate()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 365, in validate
if not self.volinfo:
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 352, in volinfo
self._volinfo = self._get_gluster_volinfo()
File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 405, in _get_gluster_volinfo
self._volfileserver)
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
return callMethod()
File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeInfo
File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
raise convert_to_error(kind, result)
vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() err=[b'<cliOutput>\n <opRet>0</opRet>\n <opErrno>0</opErrno>\n <opErrstr />\n <volInfo>\n <volumes>\n <volume>\n <name>data</name>\n <id>06ce0d34-b4b4-472c-9cec-24ffe934ed05</id>\n <status>1</status>\n <statusStr>Started</statusStr>\n <snapshotCount>0</snapshotCount>\n <brickCount>3</brickCount>\n <distCount>1</distCount>\n <replicaCount>3</replicaCount>\n <arbiterCount>0</arbiterCount>\n <disperseCount>0</disperseCount>\n <redundancyCount>0</redundancyCount>\n <type>2</type>\n <typeStr>Replicate</typeStr>\n <transport>0</transport>\n <bricks>\n <brick uuid="1679e76e-938c-4da4-b7f7-5161c5badcd3">gluster-1.[removed].com:/gluster_bricks/data/data<name>gluster-1.[removed].com:/gluster_bricks/data/data</name><hostUuid>1679e76e-938c-4da4-b7f7-5161c5badcd3</hostUuid><isArbiter>0</isArbiter></bri
ck>\n <brick uuid="530faa98-b564-45d5-8495-8e4006a7628c">gluster-2.[removed].com:/gluster_bricks/data/data<name>gluster-2.[removed].com:/gluster_bricks/data/data</name><hostUuid>530faa98-b564-45d5-8495-8e4006a7628c</hostUuid><isArbiter>0</isArbiter></brick>\n <brick uuid="06c45945-8a79-476b-9a02-483113191e69">gluster-3.[removed].com:/gluster_bricks/data/data<name>gluster-3.[removed].com:/gluster_bricks/data/data</name><hostUuid>06c45945-8a79-476b-9a02-483113191e69</hostUuid><isArbiter>0</isArbiter></brick>\n </bricks>\n <optCount>31</optCount>\n <options>\n <option>\n <name>cluster.granular-entry-heal</name>\n <value>enable</value>\n </option>\n <option>\n <name>storage.owner-gid</name>\n <value>36</value>\n </option>\n <option>\n <name>storage.owner-uid</name>\n <value>36</value>\n </option>\n <option>\n <
name>cluster.lookup-optimize</name>\n <value>off</value>\n </option>\n <option>\n <name>server.keepalive-count</name>\n <value>5</value>\n </option>\n <option>\n <name>server.keepalive-interval</name>\n <value>2</value>\n </option>\n <option>\n <name>server.keepalive-time</name>\n <value>10</value>\n </option>\n <option>\n <name>server.tcp-user-timeout</name>\n <value>20</value>\n </option>\n <option>\n <name>network.ping-timeout</name>\n <value>30</value>\n </option>\n <option>\n <name>server.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>client.event-threads</name>\n <value>4</value>\n </option>\n <option>\n <name>cluster.choose-local</name>\n
<value>off</value>\n </option>\n <option>\n <name>user.cifs</name>\n <value>off</value>\n </option>\n <option>\n <name>features.shard</name>\n <value>on</value>\n </option>\n <option>\n <name>cluster.shd-wait-qlength</name>\n <value>10000</value>\n </option>\n <option>\n <name>cluster.shd-max-threads</name>\n <value>8</value>\n </option>\n <option>\n <name>cluster.locking-scheme</name>\n <value>granular</value>\n </option>\n <option>\n <name>cluster.data-self-heal-algorithm</name>\n <value>full</value>\n </option>\n <option>\n <name>cluster.server-quorum-type</name>\n <value>server</value>\n </option>\n <option>\n <name>cluster.quorum-type</name>\n <value>a
uto</value>\n </option>\n <option>\n <name>cluster.eager-lock</name>\n <value>enable</value>\n </option>\n <option>\n <name>performance.strict-o-direct</name>\n <value>on</value>\n </option>\n <option>\n <name>network.remote-dio</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.low-prio-threads</name>\n <value>32</value>\n </option>\n <option>\n <name>performance.io-cache</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.read-ahead</name>\n <value>off</value>\n </option>\n <option>\n <name>performance.quick-read</name>\n <value>off</value>\n </option>\n <option>\n <name>storage.fips-mode-rchecksum</name>\n <value>on</value>\n
</option>\n <option>\n <name>transport.address-family</name>\n <value>inet</value>\n </option>\n <option>\n <name>nfs.disable</name>\n <value>on</value>\n </option>\n <option>\n <name>performance.client-io-threads</name>\n <value>on</value>\n </option>\n </options>\n </volume>\n <count>1</count>\n </volumes>\n </volInfo>\n</cliOutput>']
2022-04-25 22:41:55,647-0600 INFO (jsonrpc/2) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-25 22:41:55,647-0600 INFO (jsonrpc/2) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '170e1dda-af02-4ff3-808b-16dc814e750a', 'status': 4106}]} from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=9d0ded6d-e19e-41f8-85b4-52b18bd3380d (api:54)
2022-04-25 22:41:55,682-0600 INFO (jsonrpc/1) [vdsm.api] START connectStorageServer(domType=1, spUUID='3115b136-9981-11ec-90ae-00163e6f31f1', conList=[{'password': '********', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 'ovirt-2.[removed].com:/nfs2/data', 'ipv6_enabled': 'false', 'id': '311165ae-bfbf-4f51-994f-051aef56d94f', 'user': '', 'tpgt': '1'}, {'password': '********', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 'ovirt-1.[removed].com:/nfs1/data', 'ipv6_enabled': 'false', 'id': 'ec52f74c-a041-4e3a-9aae-5f1c6629d77f', 'user': '', 'tpgt': '1'}]) from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=bc135c71-2b43-4627-b3d3-2eb0a4d25227 (api:48)
2022-04-25 22:41:55,688-0600 INFO (jsonrpc/1) [storage.storagedomaincache] Removing domain ddeb67aa-9ec8-488b-9632-5cc19a244815 from storage domain cache (sdc:211)
2022-04-25 22:41:55,689-0600 INFO (jsonrpc/1) [storage.storagedomaincache] Removing domain e1ae9b1a-7aa4-4072-b92e-5e967f5a2ee7 from storage domain cache (sdc:211)
2022-04-25 22:41:55,689-0600 INFO (jsonrpc/1) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-25 22:41:55,689-0600 INFO (jsonrpc/1) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': '311165ae-bfbf-4f51-994f-051aef56d94f', 'status': 0}, {'id': 'ec52f74c-a041-4e3a-9aae-5f1c6629d77f', 'status': 0}]} from=::ffff:172.17.117.80,38712, flow_id=77e22c1a, task_id=bc135c71-2b43-4627-b3d3-2eb0a4d25227 (api:54)
2022-04-25 22:42:01,696-0600 INFO (periodic/0) [vdsm.api] START repoStats(domains=()) from=internal, task_id=9b42900c-cfd0-4e56-b6f8-07a340497993 (api:48)
2022-04-25 22:42:01,697-0600 INFO (periodic/0) [vdsm.api] FINISH repoStats return={} from=internal, task_id=9b42900c-cfd0-4e56-b6f8-07a340497993 (api:54)
2022-04-25 22:42:04,539-0600 INFO (jsonrpc/6) [api.host] START getStats() from=::ffff:172.17.117.80,38712 (api:48)
2022-04-25 22:42:04,577-0600 INFO (jsonrpc/6) [vdsm.api] START repoStats(domains=()) from=::ffff:172.17.117.80,38712, task_id=a97abe37-ce3b-4745-8cfc-89d0d01bfc74 (api:48)
2022-04-25 22:42:04,577-0600 INFO (jsonrpc/6) [vdsm.api] FINISH repoStats return={} from=::ffff:172.17.117.80,38712, task_id=a97abe37-ce3b-4745-8cfc-89d0d01bfc74 (api:54)
2022-04-25 22:42:04,578-0600 INFO (jsonrpc/6) [vdsm.api] START multipath_health() from=::ffff:172.17.117.80,38712, task_id=b054a35e-e4ea-4402-963b-5b0416a30692 (api:48)
2022-04-25 22:42:04,578-0600 INFO (jsonrpc/6) [vdsm.api] FINISH multipath_health return={} from=::ffff:172.17.117.80,38712, task_id=b054a35e-e4ea-4402-963b-5b0416a30692 (api:54)
2022-04-25 22:42:04,583-0600 INFO (jsonrpc/6) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:172.17.117.80,38712 (api:54)
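For what it's worth, the GlusterXmlErrorException above is odd: rc=0, out=() and the entire volume-info XML arriving via err. One possible explanation (I can't confirm your gluster version, so treat this as a guess) is that the newer gluster CLI shipped with 4.5 emits its --xml output on stderr, which vdsm treats as a parse failure because stdout is empty. A minimal Python sketch (hypothetical, not vdsm's actual code) shows the stderr payload itself is well-formed XML:

```python
import xml.etree.ElementTree as ET

# Hypothetical reproduction: the traceback reports rc=0, out=() and the full
# <cliOutput> document in err. Parsing that stderr payload succeeds, so the
# "XML error" stems from WHERE the output appears, not from its contents.
err = [b"<cliOutput>\n  <opRet>0</opRet>\n  <opErrno>0</opErrno>\n"
       b"  <volInfo><volumes><volume><name>engine</name>"
       b"<statusStr>Started</statusStr></volume>"
       b"<count>1</count></volumes></volInfo>\n</cliOutput>"]

root = ET.fromstring(b"".join(err))
print(root.find("./volInfo/volumes/volume/name").text)       # engine
print(root.find("./volInfo/volumes/volume/statusStr").text)  # Started
```

If running `gluster volume info engine --xml` on the host shows the XML on stderr instead of stdout, that would point at the CLI/vdsm combination rather than at the volume configuration itself.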
2 years, 6 months
recovery from expired engine.cer certificate
by Jiří Sléžka
Hello,
I am stuck in this situation...
It looks like the engine certificate (engine.cer) expired a few days ago
[root@ovirt ~]# openssl x509 -in /etc/pki/ovirt-engine/certs/engine.cer
-noout -dates
notBefore=Mar 23 21:34:19 2021 GMT
notAfter=Apr 26 21:34:19 2022 GMT
CA and other certs are still valid
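A quick way to survey which of the PKI files have lapsed — a sketch assuming the default /etc/pki/ovirt-engine layout shown above:

```shell
# List notAfter for every engine certificate so the expired one(s) stand out.
# -checkend 0 makes openssl exit non-zero for an already-expired cert.
for cert in /etc/pki/ovirt-engine/certs/*.cer; do
    [ -f "$cert" ] || continue
    printf '%s: ' "$cert"
    openssl x509 -in "$cert" -noout -enddate
    openssl x509 -in "$cert" -noout -checkend 0 >/dev/null || echo "  ^-- EXPIRED"
done
```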
Yesterday I had a host outage and the HE restarted on another host. But it
cannot communicate with the hosts due to certificate expiration
lnav /var/log/ovirt-engine/engine.log
...
2022-05-02 11:02:29,127+02 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-43)
[] Unable to RefreshCapabilities: VDSNetworkException:
VDSGenericException: VDSNetworkException: Received fatal alert:
certificate_expired
...
There are VMs still running on the hosts.
Is there a way to (manually?) renew the engine cert and recover from this
situation?
I have tried run engine-setup (and select renew certificate during install)
[root@ovirt ~]# engine-setup --offline
but it fails with
[ ERROR ] It seems that you are running your engine inside of the
hosted-engine VM and are not in "Global Maintenance" mode.
In that case you should put the system into the "Global
Maintenance" mode before running engine-setup, or the hosted-engine HA
agent might kill the machine, which might corrupt your data.
[ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine
setup detected, but Global Maintenance is not set.
But global maintenance is enabled on the host...
[root@ovirt06 ~]# hosted-engine --vm-status
!! Cluster is in GLOBAL MAINTENANCE mode !!
--== Host ovirt05.net.slu.cz (id: 1) status ==--
Host ID : 1
Host timestamp : 38627
Score : 3400
Engine status : {"vm": "down_unexpected", "health":
"bad", "detail": "Down", "reason": "bad vm status"}
Hostname : ovirt05.net.slu.cz
Local maintenance : False
stopped : False
crc32 : b719664d
conf_on_shared_storage : True
local_conf_timestamp : 38627
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=38627 (Mon May 2 10:55:43 2022)
host-id=1
score=3400
vm_conf_refresh_time=38627 (Mon May 2 10:55:43 2022)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False
--== Host ovirt06.net.slu.cz (id: 2) status ==--
Host ID : 2
Host timestamp : 8858161
Score : 3400
Engine status : {"vm": "up", "health": "good",
"detail": "Up"}
Hostname : ovirt06.net.slu.cz
Local maintenance : False
stopped : False
crc32 : 414a980b
conf_on_shared_storage : True
local_conf_timestamp : 8858161
Status up-to-date : True
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=8858161 (Mon May 2 10:55:48 2022)
host-id=2
score=3400
vm_conf_refresh_time=8858161 (Mon May 2 10:55:48 2022)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False
!! Cluster is in GLOBAL MAINTENANCE mode !!
relevant lines from ovirt-engine-setup log are
...
2022-05-02 11:08:02,194+0200 DEBUG
otopi.ovirt_engine_setup.engine_common.database database.execute:239
Creating own connection
2022-05-02 11:08:02,233+0200 DEBUG
otopi.ovirt_engine_setup.engine_common.database database.execute:284
Result: [{'vm_guid': '96a6b6a7-75a9-472a-9d4f-1502b415470a',
'run_on_vds': 'e24f0dcc-51f3-4d1a-acf5-2833a9dc584a'}]
2022-05-02 11:08:02,234+0200 DEBUG
otopi.ovirt_engine_setup.engine_common.database database.execute:234
Database: 'None', Statement: '
SELECT vds_id, ha_global_maintenance
FROM vds_statistics
WHERE vds_id = %(VdsId)s;
', args: {'VdsId':
'e24f0dcc-51f3-4d1a-acf5-2833a9dc584a'}
2022-05-02 11:08:02,234+0200 DEBUG
otopi.ovirt_engine_setup.engine_common.database database.execute:239
Creating own connection
2022-05-02 11:08:02,250+0200 DEBUG
otopi.ovirt_engine_setup.engine_common.database database.execute:284
Result: [{'vds_id': 'e24f0dcc-51f3-4d1a-acf5-2833a9dc584a',
'ha_global_maintenance': False}]
2022-05-02 11:08:02,250+0200 ERROR
otopi.plugins.ovirt_engine_common.ovirt_engine.system.he
he._validate:114 It seems that you are running your engine inside of the
hosted-engine VM and are not in "Global Maintenance" mode.
In that case you should put the system into the "Global Maintenance"
mode before running engine-setup, or the hosted-engine HA agent might
kill the machine, which might corrupt your data.
...
Thanks in advance for any advice,
Jiri
Migrating disk image from 4.1 to 4.4 or 4.5
by Mohamed Roushdy
Hello,
Could you kindly provide some "how-to" steps for moving/migrating a disk image from oVirt 4.1 to a later release such as 4.4 or 4.5? I need to test how to migrate existing VMs to a new environment.
Thank you,
Re: 4.5.0.1 ovirt hosted-engine deployment fails
by Strahil Nikolov
Obviously ens192 is not the right interface. Check which interface the IP is actually assigned to and use that address.
Best Regards,Strahil Nikolov
On Fri, Apr 29, 2022 at 16:48, Mohamed Roushdy <mohamedroushdy(a)peopleintouch.com> wrote:
Hello,
I’ve researched a bit about this problem, but none of the proposed solutions fixed it. I’m trying to deploy Ovirt 4.5.0.1 in my lab, and the installation fails with the following error:
I’ve even tried to delete the default network bridge (as suggested in some articles), but this didn’t help either. The node has 3 network interfaces, and the hosts file points only to the management interface.
Thank you,
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/N4TY77OAA7Q...
Webadmin and API down with error 500 after updating from 4.4 to 4.5
by lists@pequod.io
Hello,
today I updated from 4.4 to 4.5 and I am no longer able to access my oVirt cluster. Accessing the oVirt web interface fails with "500 - Internal Server Error". The API is also dead; my backup software and Foreman are no longer able to talk to oVirt.
I rebooted the host and ran engine-setup again; it completed without issues, but the engine is still dead. If I run it again, it now tells me that my cluster is not in global maintenance mode, but "hosted-engine --vm-status" tells me it is still in maintenance mode.
One suspicious thing I found in server.log is this:
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:324)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataProviderFactory.createMetaDataProvider(CallMetaDataProviderFactory.java:70)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.metadata.CallMetaDataContext.initializeMetaData(CallMetaDataContext.java:252)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.simple.AbstractJdbcCall.compileInternal(AbstractJdbcCall.java:313)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.PostgresDbEngineDialect$PostgresSimpleJdbcCall.compileInternal(PostgresDbEngineDialect.java:106)
at org.springframework@5.0.4.RELEASE//org.springframework.jdbc.core.simple.AbstractJdbcCall.compile(AbstractJdbcCall.java:296)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.getCall(SimpleJdbcCallsHandler.java:157)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:134)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeReadList(SimpleJdbcCallsHandler.java:105)
at org.ovirt.engine.core.dal//org.ovirt.engine.core.dao.TagDaoImpl.getAllForParent(TagDaoImpl.java:82)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.addChildren(TagsDirector.java:116)
at deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.TagsDirector.init(TagsDirector.java:75)
... 64 more
2022-04-26 12:32:32,129+02 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 60) WFLYUT0021: Registered web context: '/ovirt-engine/sso' for server 'default-server'
2022-04-26 12:32:32,137+02 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("deploy") failed - address: ([("deployment" => "engine.ear")]) - failure description: {"WFLYCTL0080: Failed services" => {"jboss.deployment.subunit.\"engine.ear\".\"bll.jar\".component.Backend.START" => "java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
Caused by: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
Caused by: javax.ejb.EJBException: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@21c5d21d
Caused by: org.jboss.weld.exceptions.WeldException: WELD-000049: Unable to invoke protected void org.ovirt.engine.core.bll.TagsDirector.init() on org.ovirt.engine.core.bll.TagsDirector@21c5d21d
Caused by: java.lang.reflect.InvocationTargetException
Caused by: org.springframework.dao.InvalidDataAccessApiUsageException: Unable to determine the correct call signature - no procedure/function/signature for 'gettagsbyparent_id'"}}
2022-04-26 12:32:32,158+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "ovirt-web-ui.war" (runtime-name : "ovirt-web-ui.war")
2022-04-26 12:32:32,159+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "apidoc.war" (runtime-name : "apidoc.war")
2022-04-26 12:32:32,159+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "restapi.war" (runtime-name : "restapi.war")
2022-04-26 12:32:32,159+02 INFO [org.jboss.as.server] (ServerService Thread Pool -- 27) WFLYSRV0010: Deployed "engine.ear" (runtime-name : "engine.ear")
2022-04-26 12:32:32,167+02 INFO [org.jboss.as.controller] (Controller Boot Thread) WFLYCTL0183: Service status report
WFLYCTL0186: Services which failed to start: service jboss.deployment.subunit."engine.ear"."bll.jar".component.Backend.START: java.lang.IllegalStateException: WFLYEE0042: Failed to construct component instance
WFLYCTL0448: 2 additional services are down due to their dependencies being missing or failed
2022-04-26 12:32:32,211+02 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212: Resuming server
2022-04-26 12:32:32,226+02 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: WildFly Full 24.0.1.Final (WildFly Core 16.0.1.Final) started (with errors) in 18363ms - Started 1670 of 1890 services (6 services failed or missing dependencies, 393 services are lazy, passive or on-demand)
2022-04-26 12:32:32,230+02 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http management interface listening on http://127.0.0.1:8706/management
2022-04-26 12:32:32,231+02 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console listening on http://127.0.0.1:8706
Does anyone have an idea what the reason might be? I am a bit lost here.
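The root cause looks like the last Caused-by line: no procedure/function/signature for 'gettagsbyparent_id'. That reads as if the 4.5 database schema upgrade did not fully apply. As a first check — a sketch, assuming you can reach the engine database with psql (e.g. via the engine's dbscripts helper; the exact path is an assumption) — you could verify whether the stored procedure exists at all:

```sql
-- Query the PostgreSQL system catalog for the procedure the engine wants.
-- An empty result would confirm the schema upgrade is incomplete.
SELECT proname
FROM pg_proc
WHERE proname = 'gettagsbyparent_id';
```

If it is missing, re-running the schema upgrade part of engine-setup (rather than just engine-setup completing "without issues") would be the next thing to investigate.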
Ovirt engine service fail to start after upgrading from 4.4.9 to 4.4.10
by gantonjo-ovirt@yahoo.com
Good morning.
I have a hosted engine in my VM cluster that fails to start after upgrading from 4.4.9 to 4.4.10.
Looking at the ovirt-engine boot.log, I can see the following:
06:19:46,860 ERROR [org.jboss.as.controller.management-operation] WFLYCTL0013: Operation ("add") failed - address: ([("subsystem" => "logging")]): java.lang.IllegalArgumentException: Failed to instantiate class "org.ovirt.engine.core.logger.LoggerHandler" for handler "ovirt-logger"
at org.jboss.logmanager@2.1.18.Final//org.jboss.logmanager.config.AbstractPropertyConfiguration$ConstructAction.lambda$validate$2(AbstractPropertyConfiguration.java:122)
at org.jboss.logmanager@2.1.18.Final//org.jboss.logmanager.config.WrappedAction.execute(WrappedAction.java:47)
at org.jboss.logmanager@2.1.18.Final//org.jboss.logmanager.config.AbstractPropertyConfiguration$ConstructAction.validate(AbstractPropertyConfiguration.java:118)
at org.jboss.logmanager@2.1.18.Final//org.jboss.logmanager.config.LogContextConfigurationImpl.doPrepare(LogContextConfigurationImpl.java:336)
at org.jboss.logmanager@2.1.18.Final//org.jboss.logmanager.config.LogContextConfigurationImpl.prepare(LogContextConfigurationImpl.java:289)
at org.jboss.as.logging@16.0.1.Final//org.jboss.as.logging.logmanager.ConfigurationPersistence.prepare(ConfigurationPersistence.java:299)
at org.jboss.as.logging@16.0.1.Final//org.jboss.as.logging.LoggingOperations$CommitOperationStepHandler.lambda$execute$0(LoggingOperations.java:131)
at org.jboss.as.controller@16.0.1.Final//org.jboss.as.controller.AbstractOperationContext.executeStep(AbstractOperationContext.java:1047)
at org.jboss.as.controller@16.0.1.Final//org.jboss.as.controller.AbstractOperationContext.processStages(AbstractOperationContext.java:779)
at org.jboss.as.controller@16.0.1.Final//org.jboss.as.controller.AbstractOperationContext.executeOperation(AbstractOperationContext.java:468)
at org.jboss.as.controller@16.0.1.Final//org.jboss.as.controller.OperationContextImpl.executeOperation(OperationContextImpl.java:1428)
at org.jboss.as.controller@16.0.1.Final//org.jboss.as.controller.ModelControllerImpl.boot(ModelControllerImpl.java:546)
at org.jboss.as.controller@16.0.1.Final//org.jboss.as.controller.AbstractControllerService.boot(AbstractControllerService.java:515)
at org.jboss.as.controller@16.0.1.Final//org.jboss.as.controller.AbstractControllerService.boot(AbstractControllerService.java:477)
at org.jboss.as.server@16.0.1.Final//org.jboss.as.server.ServerService.boot(ServerService.java:459)
at org.jboss.as.server@16.0.1.Final//org.jboss.as.server.ServerService.boot(ServerService.java:412)
at org.jboss.as.controller@16.0.1.Final//org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:416)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.jboss.logmanager@2.1.18.Final//org.jboss.logmanager.config.AbstractPropertyConfiguration$ConstructAction.lambda$validate$2(AbstractPropertyConfiguration.java:120)
... 17 more
Caused by: org.jboss.modules.ModuleLoadError: org.codehaus.jackson.jackson-core-asl
at org.jboss.modules.Module.addPaths(Module.java:1288)
at org.jboss.modules.Module.link(Module.java:1644)
at org.jboss.modules.Module.getPaths(Module.java:1605)
at org.jboss.modules.Module.getPathsUnchecked(Module.java:1628)
at org.jboss.modules.Module.loadModuleClass(Module.java:748)
at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:247)
at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:410)
at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:398)
at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:116)
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1017)
at org.jboss.modules.ModuleClassLoader.doDefineOrLoadClass(ModuleClassLoader.java:423)
at org.jboss.modules.ModuleClassLoader.defineClass(ModuleClassLoader.java:555)
at org.jboss.modules.ModuleClassLoader.loadClassLocal(ModuleClassLoader.java:339)
at org.jboss.modules.ModuleClassLoader$1.loadClassLocal(ModuleClassLoader.java:126)
at org.jboss.modules.Module.loadModuleClass(Module.java:753)
at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:247)
at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:410)
at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:398)
at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:116)
at org.ovirt.engine.core.logger//org.ovirt.engine.core.logger.LoggerHandler.<init>(LoggerHandler.java:20)
... 22 more
Can anyone help me get past this error?
Thanks in advance.
understanding if real engine offline deployment is possible
by vk@itiviti.com
Hi Everyone,
At my company, we are trying to deploy the engine on 2 RHEL8 hosts we have already installed.
We don't have direct internet access, so the RHEL8 hosts have been set up using an internal EL repo (via Red Hat Satellite).
We have also duplicated the necessary oVirt repositories internally, so all oVirt packages can be installed.
Now the blocking part is the deployment of the engine. Is it really possible to deploy an engine without an internet connection?
We have tried several times but never succeeded.
I tried with the ansible extra var "he_offline_deployment=true", naively thinking it would download the necessary packages for the engine through the repositories already configured on the physical hosts (as if the physical host acted as a proxy).
I also tried by specifying the ova file with he_appliance_ova=/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.4-20211020135049.1.el8.ova
both options have also been tried together (--ansible-extra-vars=he_appliance_ova=/usr/share/ovirt-engine-appliance/ovirt-engine-appliance-4.4-20211020135049.1.el8.ova --ansible-extra-vars=he_offline_deployment=true)
But in the end, it seems the engine deployment process still requires the engine to reach the oVirt internet repositories, as it always fails with:
2022-04-24 17:39:53,268+0100 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:110 fatal: [localhost -> 192.168.1.154]: FAILED! => {"changed": false, "msg": "Failed to download metadata for repo 'ovirt-4.4-centos-ceph-pacific': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried", "rc": 1, "results": []}
FYI the pacific repo works fine when we download packages on physical hosts.
Another thing to know: before being able to use our internal repos on our Red Hat Satellite, a system needs to install the Satellite certificate and register with Satellite.
It would be great if we could achieve a fully offline engine deployment (meaning no internet access at all, including from the engine itself), but we are starting to run out of clues as to whether it's really possible.
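One avenue worth trying — a sketch under assumptions, since I don't know your mirror layout: the failing repo id in the error is ovirt-4.4-centos-ceph-pacific, so an override .repo file that points that exact id at your internal mirror, injected into the appliance however you stage files there, might let the deployment proceed. The baseurl and hostname below are placeholders, not real paths:

```ini
# Hypothetical override dropped into /etc/yum.repos.d/ inside the appliance;
# the repo id must match the one in the "Failed to download metadata" error.
[ovirt-4.4-centos-ceph-pacific]
name=CentOS Ceph Pacific (internal mirror)
baseurl=https://satellite.example.com/pulp/repos/ceph-pacific/
enabled=1
gpgcheck=0
```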
Here are all the ovirt packages installed on the physical hosts:
$rpm -qa | grep ovirt
ovirt-ansible-collection-1.6.5-1.el8.noarch
ovirt-imageio-daemon-2.3.0-1.el8.x86_64
ovirt-host-4.4.9-2.el8.x86_64
ovirt-engine-appliance-4.4-20211020135049.1.el8.x86_64
ovirt-imageio-common-2.3.0-1.el8.x86_64
python3-ovirt-engine-sdk4-4.4.15-1.el8.x86_64
ovirt-host-dependencies-4.4.9-2.el8.x86_64
ovirt-hosted-engine-setup-2.5.4-2.el8.noarch
ovirt-imageio-client-2.3.0-1.el8.x86_64
ovirt-vmconsole-host-1.0.9-1.el8.noarch
ovirt-provider-ovn-driver-1.2.34-1.el8.noarch
cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
python3-ovirt-setup-lib-1.3.2-1.el8.noarch
ovirt-hosted-engine-ha-2.4.9-1.el8.noarch
ovirt-vmconsole-1.0.9-1.el8.noarch
thanks a lot in advance