Problem patching & upgrading a RHEL oVirt host
by David White
Hello, I followed some instructions I found in https://www.ovirt.org/documentation/upgrade_guide/ and https://www.ovi... and did the following:
subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms
subscription-manager repos --enable rhel-8-for-x86_64-appstream-rpms
subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
rpm -i --justdb --nodeps --force "http://mirror.centos.org/centos/8-stream/BaseOS/$(rpm --eval '%_arch')/os/Packages/centos-stream-release-8.6-1.el8.noarch.rpm"
cat >/etc/yum.repos.d/CentOS-Stream-Extras.repo <<'EOF'
[cs8-extras]
name=CentOS Stream $releasever - Extras
mirrorlist=http://mirrorlist.centos.org/?release=8-stream&arch=$basearch&repo=extras&infra=$infra
#baseurl=http://mirror.centos.org/$contentdir/8-stream/extras/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-Official
EOF
cat >/etc/yum.repos.d/CentOS-Stream-Extras-common.repo <<'EOF'
[cs8-extras-common]
name=CentOS Stream $releasever - Extras common packages
mirrorlist=http://mirrorlist.centos.org/?release=8-stream&arch=$basearch&repo=extras-extras-common
#baseurl=http://mirror.centos.org/$contentdir/8-stream/extras/$basearch/extras-common/
gpgcheck=1
enabled=1
gpgkey=https://www.centos.org/keys/RPM-GPG-KEY-CentOS-SIG-Extras
EOF
echo "8-stream" > /etc/yum/vars/stream
dnf distro-sync --nobest
reboot
dnf install centos-release-ovirt45
dnf install centos-release-ovirt45 --enablerepo=extras
But now yum update isn't working, because it's trying to install centos-stream-release-8.6-1.el8 over redhat-release-8.6.
Surely I shouldn't install the CentOS Stream release package over the RHEL one, should I?
See below:
[root@phys1 dwhite]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.5 (Ootpa)
[root@phys1 dwhite]# yum update
Updating Subscription Management repositories.
Last metadata expiration check: 0:02:01 ago on Thu 12 May 2022 05:59:38 AM EDT.
Error:
 Problem: installed package centos-stream-release-8.6-1.el8.noarch obsoletes redhat-release < 9 provided by redhat-release-8.6-0.1.el8.x86_64
  - cannot install the best update candidate for package redhat-release-8.5-0.8.el8.x86_64
  - problem with installed package centos-stream-release-8.6-1.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
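A possible way to back out of this state, sketched under the assumption that centos-stream-release was only injected into the RPM database with --justdb (as in the history above) and that the host should stay on RHEL content:

# Remove the Stream release package from the RPM database only, mirroring
# the --justdb trick that installed it (a sketch, not a verified procedure):
rpm -e --justdb --nodeps centos-stream-release
# Disable the Stream repos added earlier and drop the $stream variable:
dnf config-manager --set-disabled cs8-extras cs8-extras-common
rm -f /etc/yum/vars/stream
# Reinstall the RHEL release package and re-sync against the RHEL repos:
dnf reinstall redhat-release
dnf distro-sync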
Host cannot connect to storage domains
by suporte@logicworks.pt
After the upgrade to 4.5, the host cannot be activated because it cannot connect to the data domain.
I have a data domain on NFS (the master) and one on GlusterFS. It complains about the Gluster domain:
The error message for connection node1-teste.acloud.pt:/data1 returned by VDSM was: XML error
# rpm -qa|grep glusterfs*
glusterfs-10.1-1.el8s.x86_64
glusterfs-selinux-2.0.1-1.el8s.noarch
glusterfs-client-xlators-10.1-1.el8s.x86_64
glusterfs-events-10.1-1.el8s.x86_64
libglusterfs0-10.1-1.el8s.x86_64
glusterfs-fuse-10.1-1.el8s.x86_64
glusterfs-server-10.1-1.el8s.x86_64
glusterfs-cli-10.1-1.el8s.x86_64
glusterfs-geo-replication-10.1-1.el8s.x86_64
engine log:
2022-04-27 13:35:16,118+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-66) [ebe79c6] EVENT_ID: VDS_STORAGES_CONNECTION_FAILED(188), Failed to connect Host NODE1 to the Storage Domains DATA1.
2022-04-27 13:35:16,169+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-66) [ebe79c6] EVENT_ID: STORAGE_DOMAIN_ERROR(996), The error message for connection node1-teste.acloud.pt:/data1 returned by VDSM was: XML error
2022-04-27 13:35:16,170+01 ERROR [org.ovirt.engine.core.bll.storage.connection.FileStorageHelper] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-66) [ebe79c6] The connection with details 'node1-teste.acloud.pt:/data1' failed because of error code '4106' and error message is: xml error
vdsm log:
2022-04-27 13:40:07,125+0100 ERROR (jsonrpc/4) [storage.storageServer] Could not connect to storage server (storageServer:92)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 90, in connect_all
    con.connect()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 233, in connect
    self.validate()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 365, in validate
    if not self.volinfo:
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 352, in volinfo
    self._volinfo = self._get_gluster_volinfo()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/storageServer.py", line 405, in _get_gluster_volinfo
    self._volfileserver)
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 56, in __call__
    return callMethod()
  File "/usr/lib/python3.6/site-packages/vdsm/common/supervdsm.py", line 54, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeInfo
  File "/usr/lib64/python3.6/multiprocessing/managers.py", line 772, in _callmethod
    raise convert_to_error(kind, result)
vdsm.gluster.exception.GlusterXmlErrorException: XML error: rc=0 out=() err=
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr />
  <volInfo>
    <volumes>
      <volume>
        <name>data1</name>
        <id>d7eb2c38-2707-4774-9873-a7303d024669</id>
        <status>1</status>
        <statusStr>Started</statusStr>
        <snapshotCount>0</snapshotCount>
        <brickCount>2</brickCount>
        <distCount>2</distCount>
        <replicaCount>1</replicaCount>
        <arbiterCount>0</arbiterCount>
        <disperseCount>0</disperseCount>
        <redundancyCount>0</redundancyCount>
        <type>0</type>
        <typeStr>Distribute</typeStr>
        <transport>0</transport>
        <bricks>
          <brick uuid="08c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b">node1-teste.acloud.pt:/home/brick1<name>node1-teste.acloud.pt:/home/brick1</name><hostUuid>08c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b</hostUuid><isArbiter>0</isArbiter></brick>
          <brick uuid="08c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b">node1-teste.acloud.pt:/brick2<name>node1-teste.acloud.pt:/brick2</name><hostUuid>08c7ba5f-9aca-49c5-abfd-8a3e42dd8c0b</hostUuid><isArbiter>0</isArbiter></brick>
        </bricks>
        <optCount>23</optCount>
        <options>
          <option><name>nfs.disable</name><value>on</value></option>
          <option><name>transport.address-family</name><value>inet</value></option>
          <option><name>storage.fips-mode-rchecksum</name><value>on</value></option>
          <option><name>storage.owner-uid</name><value>36</value></option>
          <option><name>storage.owner-gid</name><value>36</value></option>
          <option><name>cluster.min-free-disk</name><value>5%</value></option>
          <option><name>performance.quick-read</name><value>off</value></option>
          <option><name>performance.read-ahead</name><value>off</value></option>
          <option><name>performance.io-cache</name><value>off</value></option>
          <option><name>performance.low-prio-threads</name><value>32</value></option>
          <option><name>network.remote-dio</name><value>enable</value></option>
          <option><name>cluster.eager-lock</name><value>enable</value></option>
          <option><name>cluster.quorum-type</name><value>auto</value></option>
          <option><name>cluster.server-quorum-type</name><value>server</value></option>
          <option><name>cluster.data-self-heal-algorithm</name><value>full</value></option>
          <option><name>cluster.locking-scheme</name><value>granular</value></option>
          <option><name>cluster.shd-wait-qlength</name><value>10000</value></option>
          <option><name>features.shard</name><value>off</value></option>
          <option><name>user.cifs</name><value>off</value></option>
          <option><name>cluster.choose-local</name><value>off</value></option>
          <option><name>client.event-threads</name><value>4</value></option>
          <option><name>server.event-threads</name><value>4</value></option>
          <option><name>performance.client-io-threads</name><value>on</value></option>
        </options>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>
2022-04-27 13:40:07,125+0100 INFO (jsonrpc/4) [storage.storagedomaincache] Invalidating storage domain cache (sdc:74)
2022-04-27 13:40:07,125+0100 INFO (jsonrpc/4) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'id': 'dede3145-651a-4b01-b8d2-82bff8670696', 'status': 4106}]} from=::ffff:192.168.5.165,42132, flow_id=4c170005, task_id=cec6f36f-46a4-462c-9d0a-feb8d814b465 (api:54)
2022-04-27 13:40:07,410+0100 INFO (jsonrpc/5) [api.host] START getAllVmStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:07,411+0100 INFO (jsonrpc/5) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:192.168.5.165,42132 (api:54)
2022-04-27 13:40:07,785+0100 INFO (jsonrpc/7) [api.host] START getStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.5.165,42132, task_id=4fa4e8c4-7c65-499a-827e-8ae153aa875e (api:48)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] FINISH repoStats return={} from=::ffff:192.168.5.165,42132, task_id=4fa4e8c4-7c65-499a-827e-8ae153aa875e (api:54)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] START multipath_health() from=::ffff:192.168.5.165,42132, task_id=c6390f2a-845b-420b-a833-475605a24078 (api:48)
2022-04-27 13:40:07,797+0100 INFO (jsonrpc/7) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.5.165,42132, task_id=c6390f2a-845b-420b-a833-475605a24078 (api:54)
2022-04-27 13:40:07,802+0100 INFO (jsonrpc/7) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:192.168.5.165,42132 (api:54)
2022-04-27 13:40:11,980+0100 INFO (jsonrpc/6) [api.host] START getAllVmStats() from=::1,37040 (api:48)
2022-04-27 13:40:11,980+0100 INFO (jsonrpc/6) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::1,37040 (api:54)
2022-04-27 13:40:12,365+0100 INFO (periodic/2) [vdsm.api] START repoStats(domains=()) from=internal, task_id=f5084096-e5c5-4ca8-9c47-a92fa5790484 (api:48)
2022-04-27 13:40:12,365+0100 INFO (periodic/2) [vdsm.api] FINISH repoStats return={} from=internal, task_id=f5084096-e5c5-4ca8-9c47-a92fa5790484 (api:54)
2022-04-27 13:40:22,417+0100 INFO (jsonrpc/0) [api.host] START getAllVmStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:22,417+0100 INFO (jsonrpc/0) [api.host] FINISH getAllVmStats return={'status': {'code': 0, 'message': 'Done'}, 'statsList': (suppressed)} from=::ffff:192.168.5.165,42132 (api:54)
2022-04-27 13:40:22,805+0100 INFO (jsonrpc/1) [api.host] START getStats() from=::ffff:192.168.5.165,42132 (api:48)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] START repoStats(domains=()) from=::ffff:192.168.5.165,42132, task_id=a9fb939c-ea1a-4116-a22f-d14a99e6eada (api:48)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] FINISH repoStats return={} from=::ffff:192.168.5.165,42132, task_id=a9fb939c-ea1a-4116-a22f-d14a99e6eada (api:54)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] START multipath_health() from=::ffff:192.168.5.165,42132, task_id=5eee2f63-2631-446a-98dd-4947f9499f8f (api:48)
2022-04-27 13:40:22,816+0100 INFO (jsonrpc/1) [vdsm.api] FINISH multipath_health return={} from=::ffff:192.168.5.165,42132, task_id=5eee2f63-2631-446a-98dd-4947f9499f8f (api:54)
2022-04-27 13:40:22,822+0100 INFO (jsonrpc/1) [api.host] FINISH getStats return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} from=::ffff:192.168.5.165,42132 (api:54)
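The XML payload in the exception above is well-formed and shows the volume as Started, so my guess (an assumption, not a confirmed diagnosis) is that vdsm is tripping over the shape of GlusterFS 10's CLI output rather than hitting a real storage failure. One way to capture exactly what vdsm has to parse is to run the same query by hand on the host:

# Ask the gluster CLI for the volume info XML the way vdsm's supervdsm does
# ("data1" is the volume named in the log above):
gluster --mode=script volume info data1 --xml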
--
Jose Ferradeira
http://www.logicworks.pt
How can I set the host to maintenance mode?
by yp414@163.com
I am trying to set maintenance mode on the host so I can add a GPU, but it is the only host in the cluster, and I get:
Error performing operation: unable to switch the host to maintenance mode.
There is no host that can run the engine virtual machine.
How can I set the host to maintenance mode?
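If this is a hosted-engine deployment, the message means no other host can take over the engine VM, so the engine refuses to evacuate its only host. A possible sequence for doing the hardware work anyway (a sketch using the standard hosted-engine tool; it assumes a hosted-engine setup, which the error message suggests):

# On the host: stop the HA agents from restarting the engine VM,
hosted-engine --set-maintenance --mode=global
# shut the engine VM down cleanly,
hosted-engine --vm-shutdown
# ...power off, install the GPU, boot the host again, then:
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none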
grafana @ ovirt 4.5.0.6 "origin not allowed"
by diego.ercolani@ssis.sm
Hello,
in my new installation (ovirt-engine-4.5.0.6-1.el8.noarch) I have a problem setting up the Grafana monitoring portal.
I suspect there is some problem with the connection to the oVirt DWH database.
I tested the credentials stored in /etc/grafana/conf/provisioning/datasources/ovirt-dwh.yaml on the engine with psql; they seem to work, and the DWH is being populated by the engine.
But when I connect to Configuration -> Data Sources and try to reconfigure the oVirt DWH datasource, clicking "save & test" brings up a popup saying "origin not allowed".
I cannot find anything in the grafana log or the httpd log.
Can you help?
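"origin not allowed" is Grafana's CSRF origin check (tightened in recent 7.5.x security releases) rejecting a POST whose Origin header does not match the host Grafana believes it is served on, so this may not be a DWH connection problem at all; that is my assumption, not a confirmed diagnosis. Comparing the hostname used in the browser against the [server] section of /etc/grafana/grafana.ini might confirm it. The values below are illustrative, not the shipped oVirt defaults:

# /etc/grafana/grafana.ini (illustrative, assuming the engine FQDN is engine.example.com)
[server]
domain = engine.example.com
root_url = %(protocol)s://%(domain)s/ovirt-engine-grafana/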
Async release for ovirt-engine-appliance is now available
by Sandro Bonazzola
The oVirt community has just released a new version of the ovirt-engine-appliance (4.5-20220511122240), including ovirt-engine-4.5.0.8 and ovirt-dependencies-4.5.2.
Full list of changes:
NetworkManager 1.39.0-1.el8 -> 1.39.2-2.el8
NetworkManager-libnm 1.39.0-1.el8 -> 1.39.2-2.el8
NetworkManager-team 1.39.0-1.el8 -> 1.39.2-2.el8
NetworkManager-tui 1.39.0-1.el8 -> 1.39.2-2.el8
binutils 2.30-113.el8 -> 2.30-114.el8
centos-release-ovirt45 8.6-4.el8s -> 8.6-5.el8s
cloud-init 21.1-15.el8 -> 22.1-1.el8
fapolicyd 1.1-4.el8 -> 1.1-6.el8
fapolicyd-selinux 1.1-4.el8 -> 1.1-6.el8
fribidi 1.0.4-8.el8 -> 1.0.4-9.el8
gdisk 1.0.3-9.el8 -> 1.0.3-11.el8
glib2 2.56.4-158.el8 -> 2.56.4-159.el8
glibc 2.28-197.el8 -> 2.28-199.el8
glibc-common 2.28-197.el8 -> 2.28-199.el8
glibc-gconv-extra 2.28-197.el8 -> 2.28-199.el8
glibc-langpack-en 2.28-197.el8 -> 2.28-199.el8
grafana 7.5.11-2.el8 -> 7.5.15-1.el8
gzip 1.9-12.el8 -> 1.9-13.el8
kernel 4.18.0-373.el8 -> 4.18.0-383.el8
kernel-core 4.18.0-373.el8 -> 4.18.0-383.el8
kernel-modules 4.18.0-373.el8 -> 4.18.0-383.el8
kernel-tools 4.18.0-373.el8 -> 4.18.0-383.el8
kernel-tools-libs 4.18.0-373.el8 -> 4.18.0-383.el8
krb5-libs 1.18.2-14.el8 -> 1.18.2-17.el8
libgcc 8.5.0-12.el8 -> 8.5.0-13.el8
libgfortran 8.5.0-12.el8 -> 8.5.0-13.el8
libgomp 8.5.0-12.el8 -> 8.5.0-13.el8
libquadmath 8.5.0-12.el8 -> 8.5.0-13.el8
libstdc++ 8.5.0-12.el8 -> 8.5.0-13.el8
mod_auth_openidc 2.3.7-11.module_el8.6.0+1083+4025e8c5 -> 2.4.9.4-1.module_el8.7.0+1136+d8f380b8
openvswitch2.15 2.15.0-81.el8s -> 2.15.0-88.el8s
ovirt-dependencies 4.5.1-1.el8 -> 4.5.2-1.el8
ovirt-engine 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-backend 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-dbscripts 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-restapi 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-setup 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-setup-base 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-setup-plugin-cinderlib 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-setup-plugin-imageio 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-setup-plugin-ovirt-engine 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-setup-plugin-ovirt-engine-common 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-setup-plugin-vmconsole-proxy-helper 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-setup-plugin-websocket-proxy 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-tools 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-tools-backup 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-ui-extensions 1.3.2-1.el8 -> 1.3.3-1.el8
ovirt-engine-vmconsole-proxy-helper 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-webadmin-portal 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
ovirt-engine-websocket-proxy 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
postgresql-jdbc 42.2.3-3.el8_2 -> 42.2.14-1.el8
python3-dns 1.15.0-10.el8 -> 1.15.0-11.el8
python3-openvswitch2.15 2.15.0-81.el8s -> 2.15.0-88.el8s
python3-ovirt-engine-lib 4.5.0.4-1.el8 -> 4.5.0.8-1.el8
python3-perf 4.18.0-373.el8 -> 4.18.0-383.el8
python3-slip 0.6.4-11.el8 -> 0.6.4-13.el8
python3-slip-dbus 0.6.4-11.el8 -> 0.6.4-13.el8
qemu-guest-agent 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qemu-img 6.2.0-5.module_el8.6.0+1087+b42c8331 -> 6.2.0-12.module_el8.7.0+1140+ff0772f9
qt5-srpm-macros 5.15.2-1.el8 -> 5.15.3-1.el8
rsyslog 8.2102.0-8.el8 -> 8.2102.0-9.el8
rsyslog-elasticsearch 8.2102.0-8.el8 -> 8.2102.0-9.el8
rsyslog-gnutls 8.2102.0-8.el8 -> 8.2102.0-9.el8
rsyslog-mmjsonparse 8.2102.0-8.el8 -> 8.2102.0-9.el8
rsyslog-mmnormalize 8.2102.0-8.el8 -> 8.2102.0-9.el8
selinux-policy 3.14.3-96.el8 -> 3.14.3-97.el8
selinux-policy-targeted 3.14.3-96.el8 -> 3.14.3-97.el8
virt-what 1.18-13.el8 -> 1.18-14.el8
yajl 2.1.0-10.el8 -> 2.1.0-11.el8
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
Console error
by Andrea Chierici
Hi,
yesterday I migrated from 4.4.10.7-1 to 4.5.0.7-1. The installation went fine and no errors at all were reported.
However, a problem is now hitting my installation: if I try to open a VM console, I get the error:
Failed to complete handshake Error in the pull function
It's not a problem with my browser/OS, since consoles work flawlessly against another 4.4.x oVirt installation. No modifications at all were made to the default console settings.
Any suggestions on how to solve this issue? I haven't seen any previous reports of this.
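"Error in the pull function" is GnuTLS wording, which suggests the console client is failing the TLS handshake against the host rather than anything browser-side. One check worth trying (a sketch; the certificate paths are from a standard vdsm install, adjust if yours differ) is whether the console certificates on the host are still valid after the upgrade:

# Inspect the certificates vdsm serves for VNC and SPICE consoles:
openssl x509 -in /etc/pki/vdsm/libvirt-vnc/server-cert.pem -noout -subject -enddate
openssl x509 -in /etc/pki/vdsm/libvirt-spice/server-cert.pem -noout -subject -enddate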
Thanks,
Andrea
oVirt and cluster level 4.7
by Gianluca Cecchi
Hello,
I've not had the chance to try 4.5 yet, but in a thread I saw a reference to a supposed cluster level 4.7.
I didn't find anything in the release notes about it: its features, or the matrix of compatibility levels usable in 4.5 (only 4.7 and up, or what?).
I found this Bugzilla, now closed, where at some point a documentation follow-up was requested:
https://bugzilla.redhat.com/show_bug.cgi?id=2021545
Any further information?
BTW: what are the versions of libvirt and qemu-kvm used in latest 4.5.x?
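One way to answer the version question directly, for anyone with a 4.5 host already running (a minimal sketch):

# On an oVirt 4.5 host, query the installed virtualization stack:
rpm -q libvirt-daemon qemu-kvm vdsm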
Thanks,
Gianluca
OVIRT Engine Restore from old Backup
by Abe E
Hey all, I really don't like spamming the forums, but the issues I am having keep changing entirely.
I am at the point where I can get much closer to rebuilding my engine, although my current hurdle is restoring.
When I restore using my most recent backup, it is problematic.
Currently I am restoring by:
clean engine install
engine-cleanup
engine-backup --mode=restore --file=ovirt-engine-backup-20220504011012.backup --log=restore5.log
engine-setup
This comes up with a lot of annoyances, as the IDs of the old hosted_storage domain and the new one are mismatched in the GUI.
On top of that, it is not allowing me to deploy the engine to the 2nd Gluster node (the 3rd one keeps deleting its Gluster volumes on reboot since the 4.3 upgrade).
Host 1, which runs the hosted engine, is not showing as active; rather, it is unresponsive. When I tried to enroll certificates it finally accepted, but failed because it said the host was missing networks; once those were added, the engine crashed and the HA agent wouldn't come up. Now that I think of it, I could have removed the extra networks from being required, although that still doesn't help me here, as I am unable to deploy or reinstall the 2nd Gluster node, and the first one will not show its correct state as a host.
Should I just be restoring the DB, or is the full file backup fine? My issue here is that it takes a long time to redeploy, so the trial and error of failing takes me hours, since my backup is from a 4.5 engine even though my nodes are on 4.4 (4.5 keeps causing a catastrophic failure of my whole cluster).
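On the DB-versus-files question: engine-backup can restore selectively with --scope, so a db-only restore into a freshly provisioned database is possible (a sketch; whether it sidesteps the hosted_storage ID mismatch described above is untested):

# Restore only the engine database from the full backup file,
# provisioning a fresh local DB first (options per engine-backup --help):
engine-backup --mode=restore --scope=db \
  --file=ovirt-engine-backup-20220504011012.backup \
  --log=restore-db.log --provision-db
engine-setup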