Dashboard very slow to load: change default page possible?
by Gianluca Cecchi
Hi,
I'm noticing that after upgrading to 4.3.8, the dashboard is sometimes very
slow and takes minutes to display and let me work, even though I'm not
interested in the dashboard at all for the activities I have to do.
As a consequence it becomes a bottleneck.
On the engine I see that during these moments the postmaster process for the
ovirt_engine_history database is consuming 100% CPU.
At almost every minor update I run vacuum, but I don't know if there is
something else involved.
Is there any bug open for this in 4.3.8?
Is there any option to set another page as the landing one when I open web
admin portal?
I have a similar problem on a RHV environment where I did the upgrade from
4.2.7 to 4.2.8 and then to 4.3.8 on the same day (eventually I'm going to
open a case for this one).
When passing from 4.2.7 to 4.2.8 I chose full vacuum for the engine history
database:
Perform full vacuum on the oVirt engine history
database ovirt_engine_history@localhost?
This operation may take a while depending on this setup health
and the
configuration of the db vacuum process.
See https://www.postgresql.org/docs/9.0/static/sql-vacuum.html
(Yes, No) [No]: Yes
while I chose No during the upgrade from 4.2.8 to 4.3.8 that I did half
an hour later.
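For what it's worth, a quick way to gauge whether the history database actually needs a vacuum is to look at PostgreSQL's dead-tuple counters (e.g. `SELECT relname, n_live_tup, n_dead_tup FROM pg_stat_user_tables;`). A small sketch of the heuristic, with a threshold I picked arbitrarily:

```python
def looks_bloated(n_live_tup: int, n_dead_tup: int, threshold: float = 0.2) -> bool:
    """Heuristic: flag a table whose dead tuples exceed `threshold` of its
    total tuples, i.e. a likely candidate for (auto)vacuum."""
    total = n_live_tup + n_dead_tup
    if total == 0:
        return False
    return n_dead_tup / total > threshold

# Values as reported by pg_stat_user_tables for a given table:
print(looks_bloated(1_000_000, 300_000))  # True  -- ~23% dead tuples
print(looks_bloated(1_000_000, 50_000))   # False -- ~5% dead tuples
```

If autovacuum is keeping the dead-tuple ratio low, a manual vacuum at every minor update is probably not buying much.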
Thanks in advance,
Gianluca
4 years, 10 months
IO Storage Error / All findings / Need help.
by Christian Reiss
Hello folks,
I am running a 3-way, no-arbiter Gluster setup using oVirt and its bundled
Gluster 6.7. After a crash we are unable to start any VMs due to a Storage
IO error.
After much, much backtracking and debugging we are closing in on the
symptoms, albeit not yet the cause.
Conditions:
- gluster volume is healthy,
- No outstanding heal or split-brain files,
- 3 way without arbiter nodes (3 copies),
- I already ran several "heal full" commands.
Gluster Volume Info
Volume Name: ssd_storage
Type: Replicate
Volume ID: d84ec99a-5db9-49c6-aab4-c7481a1dc57b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01.company.com:/gluster_bricks/ssd_storage/ssd_storage
Brick2: node02.company.com:/gluster_bricks/ssd_storage/ssd_storage
Brick3: node03.company.com:/gluster_bricks/ssd_storage/ssd_storage
Options Reconfigured:
cluster.self-heal-daemon: enable
cluster.granular-entry-heal: enable
storage.owner-gid: 36
storage.owner-uid: 36
network.ping-timeout: 30
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
performance.strict-o-direct: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
Gluster Volume Status
Status of volume: ssd_storage
Gluster process                                   TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node01.company.com:/gluster_bricks/ssd_storage/ssd_storage
                                                  49152     0          Y       8218
Brick node02.company.com:/gluster_bricks/ssd_storage/ssd_storage
                                                  49152     0          Y       23595
Brick node03.company.com:/gluster_bricks/ssd_storage/ssd_storage
                                                  49152     0          Y       8080
Self-heal Daemon on localhost                     N/A       N/A        Y       66028
Self-heal Daemon on 10.100.200.12                 N/A       N/A        Y       52087
Self-heal Daemon on node03.company.com            N/A       N/A        Y       8372
Task Status of Volume ssd_storage
------------------------------------------------------------------------------
There are no active volume tasks
The mounted path where the oVirt VM files reside is 100% okay: we copied
all the images out of there onto standalone hosts and the images run just
fine. There is no obvious data corruption. However, launching any VM from
oVirt fails with "IO Storage Error".
This is where everything gets funny.
oVirt uses a vdsm user to access all the files.
Findings:
- root can read, edit and write all files inside the ovirt mounted
gluster path.
- vdsm user can write to new files regardless of size without any
issues; changes get replicated instantly to other nodes.
- vdsm user can append to existing files regardless of size without
any issues; changes get replicated instantly to other nodes.
- vdsm user can read files if those files are smaller than 64 MB.
- vdsm user gets permission denied errors if the file to be read is
65 MB or bigger.
- vdsm user gets permission denied errors if the read crosses a
gluster shard-file boundary.
- if root does a "dd if=file_larger_than_64mb of=/dev/null" on any
large file, the file can then be read by the vdsm user on that single
node. This does not carry over to the other nodes.
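The 64 MB figure matches Gluster's default shard size (`features.shard-block-size` defaults to 64 MB), so whether a read trips the error can be predicted from which shards it touches. A minimal sketch, assuming the default shard size:

```python
SHARD_SIZE = 64 * 1024 * 1024  # Gluster default features.shard-block-size

def shards_touched(offset: int, length: int, shard_size: int = SHARD_SIZE):
    """Return the range of shard indices a read of `length` bytes at
    `offset` touches; shard 0 is the base file, indices >= 1 live
    under the hidden .shard/ directory on the bricks."""
    if length <= 0:
        return range(0)
    first = offset // shard_size
    last = (offset + length - 1) // shard_size
    return range(first, last + 1)

# A 64 MiB read from the start stays inside the base file ...
print(list(shards_touched(0, 64 * 1024 * 1024)))      # [0]
# ... while one extra byte crosses into shard 1, where the lookup fails.
print(list(shards_touched(0, 64 * 1024 * 1024 + 1)))  # [0, 1]
```

This lines up with the findings: appends and fresh writes succeed because the permission problem only bites on lookups of the existing `.shard/` files.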
Example:
id of the vdsm user & sudo to them:
[vdsm@node01:/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/test]
$ id
uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),179(sanlock)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[vdsm@node02:/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/test]
$ id
uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),179(sanlock)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[vdsm@node03:/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/test]
$ id
uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),179(sanlock)
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
Create a file >64mb on one node:
[vdsm@node03:/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/test]
$ base64 /dev/urandom | head -c 200000000 > file.txt
[vdsm@node03:/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/test]
$ ls -lha
total 191M
drwxr-xr-x. 2 vdsm kvm 30 Feb 4 13:10 .
drwxr-xr-x. 6 vdsm kvm 80 Jan 1 1970 ..
-rw-r--r--. 1 vdsm kvm 191M Feb 4 13:10 file.txt
File is instantly available on another node:
[vdsm@node01:/rhev/data-center/mnt/glusterSD/node01.company.com:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/test]
$ ls -lha
total 191M
drwxr-xr-x. 2 vdsm kvm 30 Feb 4 13:10 .
drwxr-xr-x. 6 vdsm kvm 80 Jan 1 1970 ..
-rw-r--r--. 1 vdsm kvm 191M Feb 4 13:10 file.txt
Accessing the whole file fails:
[vdsm@node01:] dd if=file.txt of=/dev/null
dd: error reading ‘file.txt’: Permission denied
131072+0 records in
131072+0 records out
67108864 bytes (67 MB) copied, 0.0651919 s, 1.0 GB/s
Reading the first 64 MB works; 65 MB (crossing the boundary) does not:
[vdsm@node01:] $ dd if=file.txt bs=1M count=64 of=/dev/null
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 0.00801663 s, 8.4 GB/s
[vdsm@node01:] $ dd if=file.txt bs=1M count=65 of=/dev/null
dd: error reading ‘file.txt’: Permission denied
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 0.00908712 s, 7.4 GB/s
Appending to the file works (not crossing the boundary):
[vdsm@node01:] $ date >> file.txt
[vdsm@node01:] $
[vdsm@node02:] $ tail -n2 file.txt
E16ACZaLqLhx2oUUUov5JHvQcVFohn6HH+eog6XZCiTaG0Tue 4 Feb 13:18:37 CET 2020
Reading the beginning and end of the file works; reads that cross the
boundary, not so much:
[vdsm@node02:] $ head file.txt
jrZOxGaGvwfpGSwn1BKWWmFC4556KNzXsD2BCwY78tnV1mRY54IxnE+hbnszRyWgVuXhBpVRoJTp
xvVwktZwSytMyvJjsSt7pQbXbHSY66tRe/rvrw5dHr3RNJn9HjqtlKQ9mHVX4ch1HkU5posSmDbg
vwzxBTXWfxLDMmIghyTgBTSFiI9Xg8W6htxDpxrbO+10EzlnaN1Am5tAlTkfrorNLyihpiQhUPGG
ag6tJUcFj3IySGRTAxnStFRQoBXN5dlyx1Sqc4s/Tpl7gkgR8+I7UcdRKISjgcGcpW+zrXKqFF/H
Dwv6ql+2ysPRrtlbt2V8Zf697VsNX5DTgZS9BKmWlAeqejNYaqG5Rsuhn7szbCfkkmsjedk+Rdcv
A3SHMBeHXdtfBHS0AlbEwKgeml08NmCUcwnifhrQywCnu8NN9+RQ3cUxGvIuLLSzi3915wC6hbxr
8xArckQfSUfKA/hrHvoiiCGZU9D23xj3XXtsjdbIIDXATDnCPrKANdvGN5LTKal8bT0jXORfAz1z
MniqVUgvWVNcviPgQ9BfT5qpGo8g7LaoBMGamAGVX6Ezrs04rk8jQ1yz1bB/8URfTRLZdyYkMh0u
MB4xMylnyavgusoi7Duf5RuYJvNaL0g8Lx/cfGpGsGwdD2Lj/qRC45ammn6wCxDVfiJV6Z/TzJcY
PBvzWK5xT++PQgMV8EwtXwA1kFqaGrcuiDHejMQ8O82Edjr+eBCBe0B7bRddoMD6oOlhNm1YsSNt
[vdsm@node02:] $ tail file.txt
9JX8OWCJwbyvEPDyyI30H1/jPZfDo1sS11dZ2JjiO7qhB45VaU8+irG45D0GGJhFf8wE8TD9EGWG
8346QHLX9ZSFsbjpuh71hr5Ju1UduVdvIDwwP8WDBtRUbMAVvsyGR33rkpijepmUjmYl/jeZ7rsC
VyUVlmG5PxrI7KKxz5dSkzApqVHKKgsf93JMDAdPwvXTq4hhZdUJ581w9FC/f9k2wWldEGkAcyB0
cCKp+VJl2vx989KUoqAJzsrvYdK0X7itruqYdpC29JXode+7NixUflhKvPdKmitBYyCEgCcyxUyn
eyMOdaan2x8d8MztLLoWLpp+gLzl2Hev7y3OXq6I9SVN2t+hcVIz8Llmumy0cD+VC4u2/UZszYqS
nDaSSMs35agGUUgIpHjPxCRf/yqnfrJJMTGAcxSEqHtpEdsjEmkf4QkyEgEZ13f4oi7P/DFCIIvV
JBsHzOLDoetnFzAA2/RqbDflPrVWcAR7tXVqGLACCj2s19uUFSNb8nBWmEk8fFz31iJhuL43v0WE
78/THl49T0hhzHQp6kdIiw5p1zPUIFGBZ0BS4mBCHxu+tMlPZe1zWJMJZdPnvDNtHZ4gQ6LFgU4w
E16ACZaLqLhx2oUUUov5JHvQcVFohn6HH+eog6XZCiTaG0Tue 4 Feb 13:18:37 CET 2020
[vdsm@node02:] $ dd if=file.txt of=/dev/null
dd: error reading ‘file.txt’: Permission denied
131072+0 records in
131072+0 records out
67108864 bytes (67 MB) copied, 0.106097 s, 633 MB/s
if root does dd first, all is peachy:
[root@node02] # dd if=file.txt of=/dev/null
390625+1 records in
390625+1 records out
200000058 bytes (200 MB) copied, 0.345906 s, 578 MB/s
[vdsm@node02] $ dd if=file.txt of=/dev/null
390625+1 records in
390625+1 records out
200000058 bytes (200 MB) copied, 0.188451 s, 1.1 GB/s
Error in the gluster.log:
[2020-02-04 12:27:57.915356] W [MSGID: 114031]
[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-ssd_storage-client-1:
remote operation failed. Path:
/.shard/57200f4f-537d-4e56-9258-38fe6ac64c4e.2
(00000000-0000-0000-0000-000000000000) [Permission denied]
[2020-02-04 12:27:57.915404] W [MSGID: 114031]
[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-ssd_storage-client-0:
remote operation failed. Path:
/.shard/57200f4f-537d-4e56-9258-38fe6ac64c4e.2
(00000000-0000-0000-0000-000000000000) [Permission denied]
[2020-02-04 12:27:57.915472] W [MSGID: 114031]
[client-rpc-fops_v2.c:2634:client4_0_lookup_cbk] 0-ssd_storage-client-2:
remote operation failed. Path:
/.shard/57200f4f-537d-4e56-9258-38fe6ac64c4e.2
(00000000-0000-0000-0000-000000000000) [Permission denied]
[2020-02-04 12:27:57.915490] E [MSGID: 133010]
[shard.c:2327:shard_common_lookup_shards_cbk] 0-ssd_storage-shard:
Lookup on shard 2 failed. Base file gfid =
57200f4f-537d-4e56-9258-38fe6ac64c4e [Permission denied]
What we tried:
- restarting single hosts,
- restarting the entire cluster,
- doing stuff like find /rhev ... -exec stat {} \;
- dd'ing (reading) all of the mount dir...
We are out of ideas, and it seems we are experts on neither Gluster nor
oVirt.
And this is supposed to be a production HA environment. Any help would
be appreciated.
I hope I included all the relevant data and logs.
--
with kind regards,
mit freundlichen Gruessen,
Christian Reiss
oVirt behavior with thin provision/deduplicated block storage
by Alan G
Hi,
I have an oVirt cluster with a storage domain hosted on an FC storage array that utilises block de-duplication technology. oVirt reports the capacity of the domain as though the de-duplication factor were 1:1, which of course is not the case. So what I would like to understand is the likely behaviour of oVirt when the used space approaches the reported capacity, particularly around the Critical Space Action Blocker.
Thanks,
Alan
New Host Migration Failure & Console Failure
by Jonathan Mathews
Good Day
I have installed a new oVirt platform with a hosted engine, but when I add a
new host, the notification keeps repeating "Finished Activating Host" and
does not stop until I select "do not disturb" for 1 day. (Screenshot
attached)
Also, once the new host has been added, I am unable to migrate a VM to it;
and if I start a VM on the new host, I am unable to migrate the VM away from
it or launch a console.
Please see the following logs from when I tried to migrate a VM to the new
host.
2020-02-19 09:48:27,034Z INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-55)
[0de8c750-2955-40f3-ab75-a33d29894199] Running command:
MigrateVmToServerCommand internal: false. Entities affected : ID:
7b0b6e6d-d099-43e0-933f-3c335b54a3a1 Type: VMAction group MIGRATE_VM with
role type USER
2020-02-19 09:48:27,113Z INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-55)
[0de8c750-2955-40f3-ab75-a33d29894199] START, MigrateVDSCommand(
MigrateVDSCommandParameters:{hostId='df715653-daf4-457e-839d-95683ab21234',
vmId='7b0b6e6d-d099-43e0-933f-3c335b54a3a1', srcHost='
host03.timefreight.co.za', dstVdsId='896b7f02-00e9-405c-b166-ec103a7f9ee8',
dstHost='host01.timefreight.co.za:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', consoleAddress='null', maxBandwidth='62',
enableGuestEvents='true', maxIncomingMigrations='2',
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime,
params=[100]}], stalling=[{limit=1, action={name=setDowntime,
params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}},
{limit=3, action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]', dstQemu='172.10.10.1'}), log id: 44b23456
2020-02-19 09:48:27,114Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(default task-55) [0de8c750-2955-40f3-ab75-a33d29894199] START,
MigrateBrokerVDSCommand(HostName = host03.timefreight.co.za,
MigrateVDSCommandParameters:{hostId='df715653-daf4-457e-839d-95683ab21234',
vmId='7b0b6e6d-d099-43e0-933f-3c335b54a3a1', srcHost='
host03.timefreight.co.za', dstVdsId='896b7f02-00e9-405c-b166-ec103a7f9ee8',
dstHost='host01.timefreight.co.za:54321', migrationMethod='ONLINE',
tunnelMigration='false', migrationDowntime='0', autoConverge='true',
migrateCompressed='false', consoleAddress='null', maxBandwidth='62',
enableGuestEvents='true', maxIncomingMigrations='2',
maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime,
params=[100]}], stalling=[{limit=1, action={name=setDowntime,
params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}},
{limit=3, action={name=setDowntime, params=[300]}}, {limit=4,
action={name=setDowntime, params=[400]}}, {limit=6,
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort,
params=[]}}]]', dstQemu='172.10.10.1'}), log id: 566212f6
2020-02-19 09:48:27,122Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand]
(default task-55) [0de8c750-2955-40f3-ab75-a33d29894199] FINISH,
MigrateBrokerVDSCommand, return: , log id: 566212f6
2020-02-19 09:48:27,126Z INFO
[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (default task-55)
[0de8c750-2955-40f3-ab75-a33d29894199] FINISH, MigrateVDSCommand, return:
MigratingFrom, log id: 44b23456
2020-02-19 09:48:27,138Z INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-10) [] VM
'7b0b6e6d-d099-43e0-933f-3c335b54a3a1'(accpac) moved from 'MigratingFrom'
--> 'Up'
2020-02-19 09:48:27,138Z INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-10) [] Adding VM
'7b0b6e6d-d099-43e0-933f-3c335b54a3a1'(accpac) to re-run list
2020-02-19 09:48:27,141Z ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(ForkJoinPool-1-worker-10) [] Rerun VM
'7b0b6e6d-d099-43e0-933f-3c335b54a3a1'. Called from VDS '
host03.timefreight.co.za'
2020-02-19 09:48:27,143Z INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-55) [0de8c750-2955-40f3-ab75-a33d29894199] EVENT_ID:
VM_MIGRATION_START(62), Migration started (VM: accpac, Source:
host03.timefreight.co.za, Destination: host01.timefreight.co.za, User:
admin@internal-authz).
2020-02-19 09:48:27,199Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-92544) [] START,
MigrateStatusVDSCommand(HostName = host03.timefreight.co.za,
MigrateStatusVDSCommandParameters:{hostId='df715653-daf4-457e-839d-95683ab21234',
vmId='7b0b6e6d-d099-43e0-933f-3c335b54a3a1'}), log id: 4978bfea
2020-02-19 09:48:27,203Z INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-92544) [] FINISH,
MigrateStatusVDSCommand, return: , log id: 4978bfea
2020-02-19 09:48:27,242Z ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-92544) [] EVENT_ID:
VM_MIGRATION_TO_SERVER_FAILED(120), Migration failed (VM: accpac, Source:
host03.timefreight.co.za, Destination: host01.timefreight.co.za).
2020-02-19 09:48:27,254Z INFO
[org.ovirt.engine.core.bll.MigrateVmToServerCommand]
(EE-ManagedThreadFactory-engine-Thread-92544) [] Lock freed to object
'EngineLock:{exclusiveLocks='[7b0b6e6d-d099-43e0-933f-3c335b54a3a1=VM]',
sharedLocks=''}'
The first host is running oVirt 4.3.7, the Hosted-Engine is running oVirt
4.3.8, and the new host is running oVirt 4.3.8.
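Incidentally, the convergenceSchedule in the log above reads as a small policy: start at 100 ms of allowed downtime and, each time the migration stalls, step up through 150/200/300/400/500 ms before aborting (limit=-1 is the catch-all). A sketch of how such a schedule plays out -- the data is copied from the log, but the evaluation logic is my assumption about how vdsm applies it:

```python
# Convergence schedule as logged: initial downtime, then escalation steps.
SCHEDULE = {
    "init": 100,  # ms of downtime allowed at migration start
    "stalling": [(1, 150), (2, 200), (3, 300), (4, 400), (6, 500), (-1, "abort")],
}

def action_for_stall(stall_count: int, schedule=SCHEDULE):
    """Return the downtime (ms) to apply after `stall_count` stall
    iterations, or 'abort' once past the last finite limit."""
    for limit, action in schedule["stalling"]:
        if limit == -1 or stall_count <= limit:
            return action
    return "abort"

print(action_for_stall(0))  # 150 -- first stalling entry applies
print(action_for_stall(5))  # 500 -- falls under the limit=6 entry
print(action_for_stall(7))  # 'abort' -- past every finite limit
```

In this failure, though, the migration dies immediately ("Migration failed" within the same second), so the schedule never gets a chance to run; the real error is most likely on the destination host's vdsm/libvirt side.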
I would really appreciate some assistance.
Sincerely
Jonathan Mathews
Re: iSCSI Domain Addition Fails
by eevans@digitaldatatechs.com
Do you have a lun associated with the target?
Eric Evans
Digital Data Services LLC.
304.660.9080
From: Robert Webb <rwebb(a)ropeguru.com>
Sent: Sunday, February 23, 2020 9:06 AM
To: users(a)ovirt.org
Subject: [ovirt-users] iSCSI Domain Addition Fails
So I am messing around with FreeNAS and iSCSI. FreeNAS has a target
configured, it is discoverable in oVirt, but then I click "OK" nothing
happens.
I have a name for the domain defined and have expanded the advanced
features, but cannot find it anything showing an error.
oVirt 4.3.8
Re: iSCSI Domain Addition Fails
by Benny Zlotnik
anything in the vdsm or engine logs?
On Sun, Feb 23, 2020 at 4:23 PM Robert Webb <rwebb(a)ropeguru.com> wrote:
>
> Also, I did do the “Login” to connect to the target without issue, from what I can tell.
>
>
>
> From: Robert Webb
> Sent: Sunday, February 23, 2020 9:06 AM
> To: users(a)ovirt.org
> Subject: iSCSI Domain Addition Fails
>
>
>
> So I am messing around with FreeNAS and iSCSI. FreeNAS has a target configured, it is discoverable in oVirt, but then I click “OK” nothing happens.
>
>
>
> I have a name for the domain defined and have expanded the advanced features, but cannot find it anything showing an error.
>
>
>
> oVirt 4.3.8
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/FMAXFDMNHVG...
Re: iSCSI Domain Addition Fails
by Robert Webb
Also, I did do the "Login" to connect to the target without issue, from what I can tell.
From: Robert Webb
Sent: Sunday, February 23, 2020 9:06 AM
To: users(a)ovirt.org
Subject: iSCSI Domain Addition Fails
So I am messing around with FreeNAS and iSCSI. FreeNAS has a target configured, it is discoverable in oVirt, but then I click "OK" nothing happens.
I have a name for the domain defined and have expanded the advanced features, but cannot find it anything showing an error.
oVirt 4.3.8
iSCSI Domain Addition Fails
by Robert Webb
So I am messing around with FreeNAS and iSCSI. FreeNAS has a target configured, it is discoverable in oVirt, but then I click "OK" nothing happens.
I have a name for the domain defined and have expanded the advanced features, but cannot find it anything showing an error.
oVirt 4.3.8
dashboard data empty
by Fabrizio
Hello everyone.
I installed 4 nodes with a hosted engine; everything works fine except the dashboard. It doesn't collect any data and is always empty of information.
Searching around, I found that the problem could be the dwh service; inside the log there is this error:
## log file
2020-02-19 17:46:14|ETL Service Started
ovirtEngineDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
hoursToKeepDaily|0
hoursToKeepHourly|720
ovirtEngineDbPassword|**********************
runDeleteTime|3
ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
runInterleave|60
limitRows|limit 1000
ovirtEngineHistoryDbUser|ovirt_engine_history
ovirtEngineDbUser|engine
deleteIncrement|10
timeBetweenErrorEvents|300000
hoursToKeepSamples|24
deleteMultiplier|1000
lastErrorSent|2011-07-03 12:46:47.000000
etlVersion|4.3.8
dwhAggregationDebug|false
dwhUuid|5c4ffc1b-928f-4d06-a40f-bb34ba11c7f2
ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
ovirtEngineHistoryDbPassword|**********************
Exception in component tJDBCInput_2
org.postgresql.util.PSQLException: Bad value for type short : 201905
at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getShort(AbstractJdbc2ResultSet.java:2098)
at ovirt_engine_dwh.osenumupdate_4_3.OsEnumUpdate.tJDBCInput_2Process(OsEnumUpdate.java:2354)
at ovirt_engine_dwh.osenumupdate_4_3.OsEnumUpdate.tRowGenerator_1Process(OsEnumUpdate.java:1921)
at ovirt_engine_dwh.osenumupdate_4_3.OsEnumUpdate.tJDBCInput_4Process(OsEnumUpdate.java:1376)
at ovirt_engine_dwh.osenumupdate_4_3.OsEnumUpdate.tJDBCConnection_1Process(OsEnumUpdate.java:873)
at ovirt_engine_dwh.osenumupdate_4_3.OsEnumUpdate.tJDBCConnection_2Process(OsEnumUpdate.java:736)
at ovirt_engine_dwh.osenumupdate_4_3.OsEnumUpdate.runJobInTOS(OsEnumUpdate.java:4396)
at ovirt_engine_dwh.osenumupdate_4_3.OsEnumUpdate.runJob(OsEnumUpdate.java:4099)
at ovirt_engine_dwh.samplerunjobs_4_3.SampleRunJobs.tRunJob_4Process(SampleRunJobs.java:947)
at ovirt_engine_dwh.samplerunjobs_4_3.SampleRunJobs.tJDBCConnection_2Process(SampleRunJobs.java:767)
at ovirt_engine_dwh.samplerunjobs_4_3.SampleRunJobs.tJDBCConnection_1Process(SampleRunJobs.java:642)
at ovirt_engine_dwh.samplerunjobs_4_3.SampleRunJobs$2.run(SampleRunJobs.java:2683)
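The failing value gives the problem away: 201905 looks like a YYYYMM-style encoded id, but the ETL reads that column with JDBC's getShort(), and a Java short tops out at 32767. A sketch of the range check the driver is effectively enforcing (the column mapping is my inference from the stack trace):

```python
SHORT_MIN, SHORT_MAX = -32768, 32767  # range of a Java/JDBC `short` (int16)

def fits_in_short(value: int) -> bool:
    """getShort() raises 'Bad value for type short' for anything outside
    the signed 16-bit range -- exactly what the OsEnumUpdate job hit."""
    return SHORT_MIN <= value <= SHORT_MAX

print(fits_in_short(1001))    # True  -- a typical small os_id
print(fits_in_short(201905))  # False -- the value from the log
```

So an os_id-like row with value 201905 in the engine database kills the ETL run, and the dashboard stays empty because no samples are ever collected; it's worth checking for a known ovirt-engine-dwh bug around oversized OS ids in 4.3.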
## SERVICE STATUS
● ovirt-engine-dwhd.service - oVirt Engine Data Warehouse
Loaded: loaded (/usr/lib/systemd/system/ovirt-engine-dwhd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-02-19 17:46:14 CET; 16h ago
Main PID: 6425 (ovirt-engine-dw)
CGroup: /system.slice/ovirt-engine-dwhd.service
├─6425 /usr/bin/python /usr/share/ovirt-engine-dwh/services/ovirt-engine-dwhd/ovirt-engine-dwhd.py --redirect-output --systemd=notify start
└─6512 ovirt-engine-dwhd -Dorg.ovirt.engine.dwh.settings=/tmp/tmparNn_x/settings.properties -Xms1g -Xmx1g -classpath /usr/share/ovirt-engine-dwh/lib/*::/usr/share/java/dom4j.jar:/usr/share/java/apache-commons-collections.jar:/usr/share/java/postgresql-jdbc.jar ovirt_engine_dwh.historyetl_4_3.HistoryETL --context=Default
Feb 19 17:46:14 engine.ovirt.local systemd[1]: Starting oVirt Engine Data Warehouse...
Feb 19 17:46:14 engine.ovirt.local systemd[1]: Started oVirt Engine Data Warehouse.