basic suite 4.3 failed in live_storage_migration
by Yedidyah Bar David
Hi all,
Please see [1][2].
lago.log [3]:
2020-01-20 11:55:16,647::utils.py::_ret_via_queue::63::lago.utils::DEBUG::Error while running thread Thread-72
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in _ret_via_queue
    queue.put({'return': func()})
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 60, in wrapper
    return func(get_test_prefix(), *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 79, in wrapper
    prefix.virt_env.engine_vm().get_api(api_ver=4), *args, **kwargs
  File "/home/jenkins/agent/workspace/ovirt-system-tests_standard-check-patch/ovirt-system-tests/basic-suite-4.3/test-scenarios/004_basic_sanity.py", line 514, in live_storage_migration
    lambda: api.follow_link(disk_service.get().storage_domains[0]).name == SD_ISCSI_NAME
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 286, in assert_true_within_long
    assert_equals_within_long(func, True, allowed_exceptions)
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 273, in assert_equals_within_long
    func, value, LONG_TIMEOUT, allowed_exceptions=allowed_exceptions
  File "/usr/lib/python2.7/site-packages/ovirtlago/testlib.py", line 252, in assert_equals_within
    '%s != %s after %s seconds' % (res, value, timeout)
AssertionError: False != True after 600 seconds
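For context, the wait that fails here is just a polling loop; a rough sketch of what ovirtlago's assert_equals_within does (reconstructed from the traceback above, not the actual code):

import time

def assert_equals_within(func, value, timeout, allowed_exceptions=()):
    # Keep calling func() until it returns the expected value or we run out of time.
    start = time.time()
    while True:
        try:
            res = func()
        except allowed_exceptions:
            res = None
        if res == value:
            return
        if time.time() - start > timeout:
            raise AssertionError('%s != %s after %s seconds' % (res, value, timeout))
        time.sleep(3)  # the real poll interval is a guess here

# The test then waits up to LONG_TIMEOUT (600s here) for:
#   api.follow_link(disk_service.get().storage_domains[0]).name == SD_ISCSI_NAME
# i.e. for the disk to report the iSCSI storage domain as its location.

So the assertion itself only tells us the disk never showed up on SD_ISCSI_NAME within 600 seconds; the interesting part is why.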
Not sure, but this might be related:
engine.log [4]:
2020-01-20 06:45:13,991-05 DEBUG
[org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-92)
[ae1529f8-3e05-4bc9-b3bf-d058f45dfb2b] Exception:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to
VmReplicateDiskFinishVDS, error = General Exception: ('Timed out
during operation: cannot acquire state change lock (held by
monitor=remoteDispatchDomainGetBlockInfo)',), code = 100
vdsm.log [5]:
2020-01-20 06:45:13,940-0500 ERROR (jsonrpc/1) [api] FINISH
diskReplicateFinish error=Timed out during operation: cannot acquire
state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
(api:134)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 124, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 580, in diskReplicateFinish
    return self.vm.diskReplicateFinish(srcDisk, dstDisk)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4650, in diskReplicateFinish
    blkJobInfo = self._dom.blockJobInfo(drive.name, 0)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 108, in f
    raise toe
TimeoutError: Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainGetBlockInfo)
I looked a bit, but couldn't find the root cause.
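In case it helps someone reproduce this on the host, here is a hedged probe of the same libvirt call vdsm makes (diskReplicateFinish wraps virDomain.blockJobInfo); the VM and drive names are hypothetical, and it assumes read-only libvirt access works there, as 'virsh -r' usually does:

import time
import libvirt

conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByName('vm0')  # hypothetical VM name, replace with the real one
for attempt in range(10):
    try:
        # Same query vdsm issues in diskReplicateFinish; 'sda' is a guess for drive.name.
        print(attempt, dom.blockJobInfo('sda', 0))
        break
    except libvirt.libvirtError as e:
        # If the state change lock stays held by the GetBlockInfo monitor call,
        # this should keep failing with the same timeout message as above.
        print(attempt, 'still blocked:', e)
        time.sleep(5)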
Thanks and best regards,
[1] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/7684/
[2] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/768...
[3] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/768...
[4] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/768...
[5] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/768...
--
Didi
migration on migration network failed
by Dominik Holler
Hello,
is live migration via a migration network currently expected to work on
ovirt-master?
Currently, it fails in OST network-suite-master with
2020-01-21 22:49:20,991-0500 INFO (jsonrpc/2) [api.virt] START
migrate(params={'abortOnError': 'true', 'autoConverge': 'true', 'dst': '
192.168.201.3:54321', 'method': 'online', 'vmId':
'736dea3b-64be-427f-9ebf-d1e758b6f68e', 'src': '192.168.201.4', 'dstqemu':
'192.0.3.1', 'convergenceSchedule': {'init': [{'name': 'setDowntime',
'params': ['100']}], 'stalling': [{'limit': 1, 'action': {'name':
'setDowntime', 'params': ['150']}}, {'limit': 2, 'action': {'name':
'setDowntime', 'params': ['200']}}, {'limit': 3, 'action': {'name':
'setDowntime', 'params': ['300']}}, {'limit': 4, 'action': {'name':
'setDowntime', 'params': ['400']}}, {'limit': 6, 'action': {'name':
'setDowntime', 'params': ['500']}}, {'limit': -1, 'action': {'name':
'abort', 'params': []}}]}, 'outgoingLimit': 2, 'enableGuestEvents': True,
'tunneled': 'false', 'encrypted': False, 'compressed': 'false',
'incomingLimit': 2}) from=::ffff:192.168.201.2,43782,
flow_id=5c0e0e0a-8d5f-4b66-bda3-acca1e626a41,
vmId=736dea3b-64be-427f-9ebf-d1e758b6f68e (api:48)
2020-01-21 22:49:20,997-0500 INFO (jsonrpc/2) [api.virt] FINISH migrate
return={'status': {'code': 0, 'message': 'Migration in progress'},
'progress': 0} from=::ffff:192.168.201.2,43782,
flow_id=5c0e0e0a-8d5f-4b66-bda3-acca1e626a41,
vmId=736dea3b-64be-427f-9ebf-d1e758b6f68e (api:54)
2020-01-21 22:49:20,997-0500 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
call VM.migrate succeeded in 0.01 seconds (__init__:312)
2020-01-21 22:49:21,099-0500 INFO (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Migration semaphore:
acquiring (migration:405)
2020-01-21 22:49:21,099-0500 INFO (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Migration semaphore: acquired
(migration:407)
2020-01-21 22:49:21,837-0500 INFO (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Creation of destination VM
took: 0 seconds (migration:459)
2020-01-21 22:49:21,838-0500 INFO (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') starting migration to
qemu+tls://192.168.201.3/system with miguri tcp://192.0.3.1 (migration:525)
2020-01-21 22:49:21,870-0500 ERROR (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') operation failed: Failed to
connect to remote libvirt URI qemu+tls://192.168.201.3/system: unable to
connect to server at '192.168.201.3:16514': Connection refused
(migration:278)
2020-01-21 22:49:22,816-0500 ERROR (migsrc/736dea3b) [virt.vm]
(vmId='736dea3b-64be-427f-9ebf-d1e758b6f68e') Failed to migrate
(migration:441)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 422, in _regular_run
    time.time(), machineParams
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 528, in _startUnderlyingMigration
    self._perform_with_conv_schedule(duri, muri)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 609, in _perform_with_conv_schedule
    self._perform_migration(duri, muri)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/migration.py", line 545, in _perform_migration
    self._migration_flags)
  File "/usr/lib/python3.6/site-packages/vdsm/virt/virdomain.py", line 101, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1838, in migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
libvirt.libvirtError: operation failed: Failed to connect to remote libvirt URI qemu+tls://192.168.201.3/system: unable to connect to server at '192.168.201.3:16514': Connection refused
2020-01-21 22:49:22,989-0500 INFO (jsonrpc/7) [api.host] START
getAllVmStats() from=::ffff:192.168.201.2,43782 (api:48)
2020-01-21 22:49:22,992-0500 INFO (jsonrpc/7) [throttled] Current
getAllVmStats: {'736dea3b-64be-427f-9ebf-d1e758b6f68e': 'Up'}
(throttledlog:104)
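Note that the failure is on the libvirt control connection (the destination's TLS listener on 16514, reached via the management address 192.168.201.3), not on the qemu migration data path (dstqemu 192.0.3.1). A quick hedged probe, run from the source host, to see which listener actually refuses (addresses and ports are taken from the log above):

import socket

def probe(host, port):
    # Try a plain TCP connect and report whether anything is listening.
    try:
        with socket.create_connection((host, port), timeout=5):
            print(host, port, 'open')
    except OSError as e:
        print(host, port, 'closed/unreachable:', e)

probe('192.168.201.3', 16514)  # destination libvirtd TLS listener (the one refusing above)
probe('192.168.201.3', 54321)  # destination vdsm, from the 'dst' parameter above

If 16514 is closed there, the question becomes whether libvirtd is configured to listen for TLS on the destination at all, or only on a different address.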
Please find details in
https://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-master/1245/
.
Dominik
Promotion of Eyal Shenitzky to RHV Storage team lead
by Tal Nisan
Hello,
Today I have the pleasure of announcing Eyal Shenitzky's promotion to team
lead in the RHV Storage team.
Eyal joined our team in August 2017 and has since become an integral part
of the team, playing a key role in features such as DR, Cinderlib,
Incremental Backup, and LSM, among others.
For the last few months Eyal has also been the team's scrum master,
maintaining our team's Kanban board and tasks.
Please join me in congratulating Eyal!
'dnf update qemu-kvm' fails: libspice-server, libseccomp
by Yedidyah Bar David
Hi all,
I have a CentOS 8 VM, last updated (successfully) a week ago. Now, 'dnf
update' fails with many errors. 'dnf update ovirt-release-master'
worked but didn't help. Trying to isolate most/all of the errors with
'dnf update qemu-kvm', I get:
Problem: package qemu-kvm-15:4.1.0-16.el8.x86_64 requires qemu-kvm-core = 15:4.1.0-16.el8, but none of the providers can be installed
  - cannot install the best update candidate for package qemu-kvm-15:2.12.0-65.module_el8.0.0+189+f9babebb.5.x86_64
  - nothing provides libspice-server.so.1(SPICE_SERVER_0.14.2)(64bit) needed by qemu-kvm-core-15:4.1.0-16.el8.x86_64
  - nothing provides libseccomp >= 2.4.0 needed by qemu-kvm-core-15:4.1.0-16.el8.x86_64
Known issue?
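In case it helps to narrow this down, a hedged diagnostic sketch (Python 3 wrapping 'dnf repoquery'; run it on the same VM with the same repos enabled — whether repoquery resolves the versioned capability exactly like the solver does is an assumption worth double-checking):

import subprocess

# The two capabilities dnf reports as missing above.
missing = [
    'libspice-server.so.1(SPICE_SERVER_0.14.2)(64bit)',
    'libseccomp >= 2.4.0',
]
for dep in missing:
    print('== what provides:', dep)
    # Lists available packages claiming to provide the capability, with their repos.
    subprocess.run(['dnf', 'repoquery', '--whatprovides', dep], check=False)

If nothing shows up for either one, the Advanced Virtualization copr (or whichever repo is supposed to carry the newer spice-server and libseccomp) is probably missing or not yet populated for this qemu-kvm build.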
'dnf update --nobest' does seem to work (I'm posting this email while it
is still downloading the appliance), and says:
=====================================================================
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
 libguestfs                 x86_64  1:1.40.2-14.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  2.7 M
 libguestfs-tools-c         x86_64  1:1.40.2-14.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  5.6 M
 qemu-img                   x86_64  15:4.1.0-16.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  1.8 M
 qemu-kvm-block-curl        x86_64  15:4.1.0-16.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  93 k
 qemu-kvm-block-gluster     x86_64  15:4.1.0-16.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  95 k
 qemu-kvm-block-iscsi       x86_64  15:4.1.0-16.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  101 k
 qemu-kvm-block-rbd         x86_64  15:4.1.0-16.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  95 k
 qemu-kvm-block-ssh         x86_64  15:4.1.0-16.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  96 k
 qemu-kvm-common            x86_64  15:4.1.0-16.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  1.2 M
 vdsm-http                  noarch  4.40.0-1513.git0adaae655.el8  ovirt-master-snapshot  14 k
 vdsm-http                  noarch  4.40.0-1518.gitfde24a6b5.el8  ovirt-master-snapshot  14 k
Skipping packages with broken dependencies:
 ovirt-imageio-daemon       noarch  1.6.3-0.el8                   ovirt-master-snapshot  39 k
 qemu-kvm                   x86_64  15:4.1.0-16.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  82 k
 qemu-kvm-core              x86_64  15:4.1.0-16.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  3.3 M
 vdsm                       x86_64  4.40.0-1513.git0adaae655.el8  ovirt-master-snapshot  1.3 M
 vdsm                       x86_64  4.40.0-1518.gitfde24a6b5.el8  ovirt-master-snapshot  1.3 M
 vdsm-hook-ethtool-options  noarch  4.40.0-1518.gitfde24a6b5.el8  ovirt-master-snapshot  9.2 k
 vdsm-hook-fcoe             noarch  4.40.0-1518.gitfde24a6b5.el8  ovirt-master-snapshot  9.6 k
 vdsm-hook-vmfex-dev        noarch  4.40.0-1518.gitfde24a6b5.el8  ovirt-master-snapshot  10 k
 virt-v2v                   x86_64  1:1.40.2-14.el8               copr:copr.fedorainfracloud.org:sbonazzo:AdvancedVirtualization  13 M
=====================================================================
Best regards,
--
Didi
OST CI: Time issue?
by Yedidyah Bar David
Hi all,
I looked at the failure of [1].
It timed out while waiting for the engine to be up, after restarting
the engine VM.
lago.log [2] has:
2020-01-20 04:16:06,026::008_restart_he_vm.py::_start_he_vm::173::root::INFO::Starting
VM...
...
2020-01-20 04:16:09,823::008_restart_he_vm.py::_start_he_vm::178::root::INFO::Waiting
for VM to be UP...
...
2020-01-20 04:16:25,113::008_restart_he_vm.py::_start_he_vm::186::root::INFO::VM
is UP.
2020-01-20 04:16:25,113::008_restart_he_vm.py::_wait_for_engine_health::190::root::INFO::Waiting
for engine to start...
...
Then there is a loop of running 'hosted-engine --vm-status --json' and
parsing the status, specifically checking the health. The health status
does not change until the timeout (after 10 minutes), but next to 'vm'
there is a 'detail' field, which does change. Its first value is
'Powering up':
2020-01-20 04:16:25,113::ssh.py::ssh::89::lago.ssh::DEBUG::Command
9ef791c0 on lago-he-basic-role-remote-suite-4-3-host-0 output:
{"1": {"conf_on_shared_storage": true, "live-data": true, "extra":
"metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=5199
(Sun Jan 19 23:16:24
2020)\nhost-id=1\nscore=2400\nvm_conf_refresh_time=5199 (Sun Jan 19
23:16:24 2020)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n",
"hostname": "lago-he-basic-role-remote-suite-4-3-host-0.lago.local",
"host-id": 1, "engine-status": {"reason": "bad vm status", "health":
"bad", "vm": "up", "detail": "Powering up"}, "score": 2400, "stopped":
false, "maintenance": false, "crc32": "e7f42f2a",
"local_conf_timestamp": 5199, "host-ts": 5199}, "2":
{"conf_on_shared_storage": true, "live-data": false, "extra":
"metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=5190
(Sun Jan 19 23:16:15
2020)\nhost-id=2\nscore=3400\nvm_conf_refresh_time=5190 (Sun Jan 19
23:16:15 2020)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n",
"hostname": "lago-he-basic-role-remote-suite-4-3-host-1", "host-id":
2, "engine-status": {"reason": "vm not running on this host",
"health": "bad", "vm": "down", "detail": "unknown"}, "score": 3400,
"stopped": false, "maintenance": false, "crc32": "c3657e8b",
"local_conf_timestamp": 5190, "host-ts": 5190}, "global_maintenance":
true}
...
Then there are a few similar ones; the last is from '2020-01-20 04:17:22,980',
after which 'detail' changes to 'Up':
2020-01-20 04:17:27,517::ssh.py::ssh::89::lago.ssh::DEBUG::Command
c426f53a on lago-he-basic-role-remote-suite-4-3-host-0 output:
{"1": {"conf_on_shared_storage": true, "live-data": true, "extra":
"metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=5259
(Sun Jan 19 23:17:24
2020)\nhost-id=1\nscore=2928\nvm_conf_refresh_time=5259 (Sun Jan 19
23:17:24 2020)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n",
"hostname": "lago-he-basic-role-remote-suite-4-3-host-0.lago.local",
"host-id": 1, "engine-status": {"reason": "failed liveliness check",
"health": "bad", "vm": "up", "detail": "Up"}, "score": 2928,
"stopped": false, "maintenance": false, "crc32": "08d4bfe1",
"local_conf_timestamp": 5259, "host-ts": 5259}, "2":
{"conf_on_shared_storage": true, "live-data": true, "extra":
"metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=5250
(Sun Jan 19 23:17:16
2020)\nhost-id=2\nscore=3400\nvm_conf_refresh_time=5251 (Sun Jan 19
23:17:16 2020)\nconf_on_shared_storage=True\nmaintenance=False\nstate=GlobalMaintenance\nstopped=False\n",
"hostname": "lago-he-basic-role-remote-suite-4-3-host-1", "host-id":
2, "engine-status": {"reason": "vm not running on this host",
"health": "bad", "vm": "down", "detail": "unknown"}, "score": 3400,
"stopped": false, "maintenance": false, "crc32": "ab277249",
"local_conf_timestamp": 5251, "host-ts": 5250}, "global_maintenance":
true}
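For reference, a minimal sketch of the kind of health check being polled here — the field names come from the JSON above, while the exact invocation is an assumption, not the suite's actual code:

import json
import subprocess

out = subprocess.check_output(['hosted-engine', '--vm-status', '--json'])
status = json.loads(out)
for host_id, host in status.items():
    if host_id == 'global_maintenance':  # boolean flag, not a host entry
        continue
    es = host.get('engine-status', {})
    print(host_id, host.get('hostname'), es.get('vm'), es.get('health'), es.get('detail'))

The wait loop presumably passes once some host reports health "good"; in the output above it stays "bad" — first "Powering up", then "Up" with "failed liveliness check" — until the 10-minute timeout.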
/var/log/messages of the engine vm [3] has:
Jan 19 23:17:57 lago-he-basic-role-remote-suite-4-3-engine kernel:
Initializing cgroup subsys cpuset
Meaning, the first line after the reboot has a timestamp almost 2 minutes
after the VM was started. That's a long time. It is also half a minute
after 'detail' changed to 'Up' above (I have no idea where we take this
'detail' from, though). Then, the first line that is not from the kernel:
Jan 19 23:17:58 lago-he-basic-role-remote-suite-4-3-engine systemd[1]:
systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA
-APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL
+XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
then it starts lots of services, and then:
Jan 19 23:20:31 lago-he-basic-role-remote-suite-4-3-engine systemd:
Started oVirt Engine websockets proxy.
Jan 19 23:20:33 lago-he-basic-role-remote-suite-4-3-engine
chronyd[872]: Selected source 45.33.103.94
Jan 19 23:20:37 lago-he-basic-role-remote-suite-4-3-engine
chronyd[872]: System clock wrong by 3.919225 seconds, adjustment
started
Jan 19 23:20:37 lago-he-basic-role-remote-suite-4-3-engine systemd:
Time has been changed
Jan 19 23:20:37 lago-he-basic-role-remote-suite-4-3-engine
chronyd[872]: System clock was stepped by 3.919225 seconds
Jan 19 23:20:38 lago-he-basic-role-remote-suite-4-3-engine
chronyd[872]: Selected source 129.250.35.250
Jan 19 23:20:43 lago-he-basic-role-remote-suite-4-3-engine cloud-init:
Cloud-init v. 18.5 running 'modules:config' at Mon, 20 Jan 2020
04:20:39 +0000. Up 182.82 seconds.
and later:
Jan 19 23:21:32 lago-he-basic-role-remote-suite-4-3-engine systemd:
Started Update UTMP about System Runlevel Changes.
Jan 19 23:21:32 lago-he-basic-role-remote-suite-4-3-engine systemd:
Startup finished in 22.344s (kernel) + 32.237s (initrd) + 3min 815ms
(userspace) = 3min 55.397s.
Jan 19 23:21:43 lago-he-basic-role-remote-suite-4-3-engine
chronyd[872]: Selected source 206.55.191.142
Jan 19 23:23:52 lago-he-basic-role-remote-suite-4-3-engine
chronyd[872]: Selected source 129.250.35.250
Jan 19 23:26:32 lago-he-basic-role-remote-suite-4-3-engine systemd:
Created slice User Slice of root.
engine.log's [4] first line after the reboot is:
2020-01-19 23:26:18,251-05 INFO
[org.ovirt.engine.core.uutils.config.ShellLikeConfd] (ServerService
Thread Pool -- 103) [] Loaded file
'/usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.conf'.
which is a mere 40 seconds before the timeout, and server.log's [5]
first line is:
2020-01-19 23:23:13,392-05 INFO [org.jboss.as.server.deployment] (MSC
service thread 1-2) WFLYSRV0027: Starting deployment of "apidoc.war"
(runtime-name: "apidoc.war")
which is about 3 minutes before that.
I do not know much about what the engine does during its startup, and
how the logs should look, and how long it should take between startup
and a working health page. If this all looks normal, then perhaps we
should simply give it more than 10 minutes. Otherwise, there is either
heavy load on the infra, or perhaps some NTP problems.
Does anyone have a clue?
Anyway, I have now pushed a patch [6] to allow up to 20 minutes, at least
to reduce the noise.
Thanks and best regards,
[1] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-role-remote-sui...
[2] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-role-remote-sui...
[3] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-role-remote-sui...
[4] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-role-remote-sui...
[5] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-role-remote-sui...
[6] https://gerrit.ovirt.org/106411
--
Didi
maven central brings some surprises
by Fedor Gavrilov
Hi,
Maven Central doesn't seem to be in the mood for me today:
[ERROR] Plugin org.apache.maven.plugins:maven-jar-plugin:2.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.apache.maven.plugins:maven-jar-plugin:jar:2.2: Could not transfer artifact org.apache.maven.plugins:maven-jar-plugin:pom:2.2 from/to central (http://repo1.maven.org/maven2): Failed to transfer file: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-jar-plugin/2.... Return code is: 501 , ReasonPhrase:HTTPS Required. -> [Help 1]
Does anyone know whether that's somehow a proxy thing or something else? I'm asking because I've had the joy of dealing with a lying proxy before.
Also, IIRC jcenter has a reputation of being more reliable; can we use it instead somehow?
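One hedged way to tell the proxy apart from Central itself: fetch the repo root over plain HTTP and over HTTPS from the same build machine and compare (Python 3; the URLs are just the repo base from the error above):

import urllib.error
import urllib.request

for url in ('http://repo1.maven.org/maven2/',
            'https://repo1.maven.org/maven2/'):
    try:
        resp = urllib.request.urlopen(url, timeout=10)
        print(url, '->', resp.status)
    except urllib.error.HTTPError as e:
        # A 501 "HTTPS Required" shows up here as an HTTP error response,
        # whether it comes from the server or from a proxy answering for it.
        print(url, '->', e.code, e.reason)
    except urllib.error.URLError as e:
        print(url, '->', e.reason)

If HTTPS works and only plain HTTP returns 501, this is most likely Maven Central enforcing HTTPS rather than the proxy lying, and pointing the build at the https:// URL should be enough.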
Thanks,
Fedor
Restore fails whether engine is on or off
by raymond.francis@oracle.com
Wondering whether this is an issue:
I had an engine where I created a backup and stored it elsewhere. This engine was reinstalled, and the backup file (called file_name) was used to restore it, as shown below. However, I get two different errors depending on whether the engine is on...
[root@dub-mgrfarm113 ~]# service ovirt-engine status
Redirecting to /bin/systemctl status ovirt-engine.service
● ovirt-engine.service - oVirt Engine
Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2020-01-16 16:13:41 GMT; 2h 5min ago
Main PID: 5621 (ovirt-engine.py)
CGroup: /system.slice/ovirt-engine.service
├─5621 /usr/bin/python /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py --redirect-output --systemd=notify st...
└─5661 ovirt-engine -server -XX:+TieredCompilation -Xms1991M -Xmx1991M -Xss1M -Djava.awt.headless=true -Dsun.rmi.dgc.clien...
Jan 16 16:13:40 dub-mgrfarm113.ie.oracle.com systemd[1]: Starting oVirt Engine...
Jan 16 16:13:40 dub-mgrfarm113.ie.oracle.com ovirt-engine.py[5621]: 2020-01-16 16:13:40,816+0000 ovirt-engine: INFO _detectJBossV...l=36
Jan 16 16:13:41 dub-mgrfarm113.ie.oracle.com ovirt-engine.py[5621]: 2020-01-16 16:13:41,795+0000 ovirt-engine: INFO _detectJBossV...'[]'
Jan 16 16:13:41 dub-mgrfarm113.ie.oracle.com systemd[1]: Started oVirt Engine.
Hint: Some lines were ellipsized, use -l to show in full.
[root@dub-mgrfarm113 ~]# engine-backup --mode=restore --file=file_name --restore-permissions
Start of engine-backup with mode 'restore'
scope: all
archive file: file_name
log file: /var/log/ovirt-engine-backup/ovirt-engine-restore-20200116181932.log
Preparing to restore:
FATAL: Engine service is active - can not restore backup
...or off:
[root@dub-mgrfarm113 ~]# service ovirt-engine stop
Redirecting to /bin/systemctl stop ovirt-engine.service
[root@dub-mgrfarm113 ~]# engine-backup --mode=restore --file=file_name --restore-permissions
Start of engine-backup with mode 'restore'
scope: all
archive file: file_name
log file: /var/log/ovirt-engine-backup/ovirt-engine-restore-20200116181952.log
Preparing to restore:
- Unpacking file 'file_name'
Restoring:
- Files
FATAL: Can't connect to database 'engine'. Please see '/usr/bin/engine-backup --help'.
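On that second failure: the restore needs a reachable engine database to restore into, and on a freshly reinstalled machine that database may simply not exist yet (engine-backup has provisioning options for this case — check '--help' on your version). A hedged connectivity check, assuming the standard engine-setup config location and variable names:

import socket

conf = '/etc/ovirt-engine/engine.conf.d/10-setup-database.conf'  # assumed standard path
settings = {}
with open(conf) as f:  # will fail if engine-setup never ran on this machine
    for line in f:
        if '=' in line and not line.startswith('#'):
            key, _, value = line.partition('=')
            settings[key.strip()] = value.strip().strip('"')

host = settings.get('ENGINE_DB_HOST', 'localhost')
port = int(settings.get('ENGINE_DB_PORT', '5432'))
with socket.create_connection((host, port), timeout=5):
    print('PostgreSQL reachable at %s:%s (database %s)'
          % (host, port, settings.get('ENGINE_DB_DATABASE')))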
In addition, and possibly related, the web page for the server no longer loads, even while the ovirt-engine and httpd services are running.
Any troubleshooting tips, or has anyone seen the same before?
FYI for Intellij IDEA & CentOS 7 users
by Fedor Gavrilov
Hi,
Just wanted to share in case someone will have same issue as me.
A recent version of IDEA requires a newer libstdc++ than the one available on CentOS 7 (its libstdc++.so.6 lacks GLIBCXX_3.4.21):
Exception in thread "JavaFX Application Thread" java.lang.UnsatisfiedLinkError: /home/fgavrilo/Software/idea-IC-193.5662.53/jbr/lib/libjfxmedia.so: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found
The simplest solution to this is probably to use the Flatpak distribution:
sudo yum install flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub com.jetbrains.IntelliJ-IDEA-Community
flatpak run com.jetbrains.IntelliJ-IDEA-Community
Fedor