[ovirt-users] WG: High Database Load after updating to oVirt 4.0.4
Roy Golan
rgolan at redhat.com
Thu Jan 12 08:13:15 UTC 2017
On 11 January 2017 at 17:16, Grundmann, Christian <
Christian.Grundmann at fabasoft.com> wrote:
> Hi,
>
> I updated to 4.0.6 today and am hitting this problem again. Can anyone
> please help?
>
>
>
> backend_start | query_start | state_change | waiting | state | query
> ------------------------------+-------------------------------+-------------------------------+---------+---------------------+-------
> 2017-01-11 15:52:41.612942+01 | 2017-01-11 16:14:45.676881+01 | 2017-01-11 16:14:45.676882+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 15:52:35.526771+01 | 2017-01-11 16:14:45.750546+01 | 2017-01-11 16:14:45.750547+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 14:48:41.133303+01 | 2017-01-11 16:14:42.89794+01 | 2017-01-11 16:14:42.897991+01 | f | idle | SELECT 1
> 2017-01-11 14:48:43.504048+01 | 2017-01-11 16:14:46.794742+01 | 2017-01-11 16:14:46.794813+01 | f | idle | SELECT option_value FROM vdc_options WHERE option_name = 'DisconnectDwh'
> 2017-01-11 14:48:43.531955+01 | 2017-01-11 16:14:34.541273+01 | 2017-01-11 16:14:34.543513+01 | f | idle | COMMIT
> 2017-01-11 14:48:43.564148+01 | 2017-01-11 16:14:34.543635+01 | 2017-01-11 16:14:34.544145+01 | f | idle | COMMIT
> 2017-01-11 14:48:43.569029+01 | 2017-01-11 16:00:01.86664+01 | 2017-01-11 16:00:01.866711+01 | f | idle in transaction | SELECT 'continueAgg', '1' FROM history_configuration WHERE var_name = 'lastHourAggr' AND var_datetime < '2017-01-11 15:00:00.000000+0100'
> 2017-01-11 14:48:43.572644+01 | 2017-01-11 14:48:43.57571+01 | 2017-01-11 14:48:43.575736+01 | f | idle | SET extra_float_digits = 3
> 2017-01-11 14:48:43.577039+01 | 2017-01-11 14:48:43.580066+01 | 2017-01-11 14:48:43.58009+01 | f | idle | SET extra_float_digits = 3
> 2017-01-11 14:48:54.308078+01 | 2017-01-11 16:14:46.931422+01 | 2017-01-11 16:14:46.931423+01 | f | active | select * from getsnapshotbyleafguid($1)
> 2017-01-11 14:48:54.465485+01 | 2017-01-11 16:14:21.113926+01 | 2017-01-11 16:14:21.113959+01 | f | idle | COMMIT
> 2017-01-11 15:52:41.606561+01 | 2017-01-11 16:14:45.839754+01 | 2017-01-11 16:14:45.839755+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 14:48:56.477555+01 | 2017-01-11 16:14:45.276255+01 | 2017-01-11 16:14:45.277038+01 | f | idle | select * from getvdsbyvdsid($1, $2, $3)
> 2017-01-11 15:52:41.736304+01 | 2017-01-11 16:14:44.48134+01 | 2017-01-11 16:14:44.48134+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 14:48:56.489949+01 | 2017-01-11 16:14:46.40924+01 | 2017-01-11 16:14:46.409241+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 15:52:41.618773+01 | 2017-01-11 16:14:45.732394+01 | 2017-01-11 16:14:45.732394+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 14:48:56.497824+01 | 2017-01-11 16:14:46.827751+01 | 2017-01-11 16:14:46.827752+01 | f | active | select * from getsnapshotbyleafguid($1)
> 2017-01-11 14:48:56.497732+01 | 2017-01-11 16:09:04.207597+01 | 2017-01-11 16:09:04.342567+01 | f | idle | select * from getvdsbyvdsid($1, $2, $3)
> 2017-01-11 14:48:58.785162+01 | 2017-01-11 16:14:46.093658+01 | 2017-01-11 16:14:46.093659+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 15:52:41.620421+01 | 2017-01-11 16:14:46.224543+01 | 2017-01-11 16:14:46.224543+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 15:52:41.620478+01 | 2017-01-11 16:14:46.009864+01 | 2017-01-11 16:14:46.009865+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 15:52:41.647839+01 | 2017-01-11 16:14:46.834005+01 | 2017-01-11 16:14:46.834005+01 | f | active | select * from getsnapshotbyleafguid($1)
> 2017-01-11 14:48:58.929402+01 | 2017-01-11 16:14:44.908748+01 | 2017-01-11 16:14:44.908749+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 15:52:41.756257+01 | 2017-01-11 16:14:46.193542+01 | 2017-01-11 16:14:46.193542+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 15:52:41.766689+01 | 2017-01-11 16:14:46.453393+01 | 2017-01-11 16:14:46.453394+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 15:52:41.848642+01 | 2017-01-11 16:14:29.04013+01 | 2017-01-11 16:14:29.080273+01 | f | idle | select * from getvdsbyvdsid($1, $2, $3)
> 2017-01-11 16:03:06.731047+01 | 2017-01-11 16:13:43.332298+01 | 2017-01-11 16:13:43.333075+01 | f | idle | select * from getallfromcluster($1, $2)
> 2017-01-11 16:03:18.282962+01 | 2017-01-11 16:14:44.56195+01 | 2017-01-11 16:14:44.56195+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:03:18.305949+01 | 2017-01-11 16:14:46.483223+01 | 2017-01-11 16:14:46.483223+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:03:18.905939+01 | 2017-01-11 16:14:45.090399+01 | 2017-01-11 16:14:45.0904+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:03:25.100944+01 | 2017-01-11 16:14:46.887946+01 | 2017-01-11 16:14:46.887947+01 | f | active | select * from getsnapshotbyleafguid($1)
> 2017-01-11 16:03:25.118964+01 | 2017-01-11 16:14:45.866665+01 | 2017-01-11 16:14:45.866666+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:04:00.255077+01 | 2017-01-11 16:13:51.184443+01 | 2017-01-11 16:13:51.184499+01 | f | idle | select * from getqosbyqosid($1)
> 2017-01-11 16:04:12.591564+01 | 2017-01-11 16:14:45.849935+01 | 2017-01-11 16:14:45.849935+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:04:13.778038+01 | 2017-01-11 16:14:46.138704+01 | 2017-01-11 16:14:46.138705+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:04:14.734107+01 | 2017-01-11 16:12:28.219276+01 | 2017-01-11 16:12:28.219372+01 | f | idle | select * from getqosbyqosid($1)
> 2017-01-11 16:04:15.098427+01 | 2017-01-11 16:14:45.049351+01 | 2017-01-11 16:14:45.049352+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:04:15.549806+01 | 2017-01-11 16:14:45.942699+01 | 2017-01-11 16:14:45.942699+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:04:35.623004+01 | 2017-01-11 16:14:45.277261+01 | 2017-01-11 16:14:45.278013+01 | f | idle | select * from getvdsbyvdsid($1, $2, $3)
> 2017-01-11 16:05:17.307564+01 | 2017-01-11 16:14:41.567496+01 | 2017-01-11 16:14:41.568274+01 | f | idle | select * from getvdsbyvdsid($1, $2, $3)
> 2017-01-11 16:05:11.805966+01 | 2017-01-11 16:14:46.851024+01 | 2017-01-11 16:14:46.851024+01 | f | active | select * from getsnapshotbyleafguid($1)
> 2017-01-11 16:05:17.430004+01 | 2017-01-11 16:10:23.506252+01 | 2017-01-11 16:10:23.582274+01 | f | idle | select * from getstorage_domains_by_storagepoolid($1, $2, $3)
> 2017-01-11 16:05:25.482896+01 | 2017-01-11 16:14:45.0316+01 | 2017-01-11 16:14:45.0316+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:05:25.547646+01 | 2017-01-11 16:11:10.067981+01 | 2017-01-11 16:11:10.068043+01 | f | idle | select * from getqosbyqosid($1)
> 2017-01-11 16:05:34.070317+01 | 2017-01-11 16:14:46.293573+01 | 2017-01-11 16:14:46.293573+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:05:35.479037+01 | 2017-01-11 16:14:45.699444+01 | 2017-01-11 16:14:45.699445+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:05:35.500181+01 | 2017-01-11 16:14:46.221274+01 | 2017-01-11 16:14:46.221274+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:05:48.391458+01 | 2017-01-11 16:14:46.443046+01 | 2017-01-11 16:14:46.443047+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:06:40.116008+01 | 2017-01-11 16:14:45.181865+01 | 2017-01-11 16:14:45.181866+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:06:23.595959+01 | 2017-01-11 16:14:46.126082+01 | 2017-01-11 16:14:46.126083+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:06:24.582543+01 | 2017-01-11 16:14:46.074258+01 | 2017-01-11 16:14:46.074258+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:06:24.660881+01 | 2017-01-11 16:13:39.930702+01 | 2017-01-11 16:13:39.95559+01 | f | idle | select * from getvdsbyvdsid($1, $2, $3)
> 2017-01-11 16:06:24.690863+01 | 2017-01-11 16:07:28.763627+01 | 2017-01-11 16:07:28.763684+01 | f | idle | select * from getqosbyqosid($1)
> 2017-01-11 16:06:26.244997+01 | 2017-01-11 16:14:45.760047+01 | 2017-01-11 16:14:45.760048+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:06:26.359194+01 | 2017-01-11 16:14:46.90043+01 | 2017-01-11 16:14:46.929003+01 | f | idle | select * from getvdsbyvdsid($1, $2, $3)
> 2017-01-11 16:06:26.377649+01 | 2017-01-11 16:14:45.035936+01 | 2017-01-11 16:14:45.035937+01 | f | active | select * from getdisksvmguid($1, $2, $3, $4)
> 2017-01-11 16:06:40.128282+01 | 2017-01-11 16:12:43.764245+01 | 2017-01-11 16:12:43.764293+01 | f | idle | select * from getqosbyqosid($1)
> 2017-01-11 16:06:40.150762+01 | 2017-01-11 16:10:54.629416+01 | 2017-01-11 16:10:54.629496+01 | f | idle | select * from getstoragedomainidsbystoragepoolidandstatus($1, $2)
> 2017-01-11 16:14:46.934168+01 | 2017-01-11 16:14:46.964807+01 | 2017-01-11 16:14:46.964809+01 | f | active | select backend_start,query_start,state_change,waiting,state,query from pg_stat_activity;
>
>
>
> top - 16:13:43 up 1:36, 3 users, load average: 41.62, 37.53, 21.37
>
> Tasks: 286 total, 40 running, 246 sleeping, 0 stopped, 0 zombie
>
> %Cpu(s): 99.3 us, 0.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si,
> 0.0 st
>
> KiB Mem : 16432648 total, 3626184 free, 6746368 used, 6060096 buff/cache
>
> KiB Swap: 5242876 total, 5242876 free, 0 used. 8104244 avail Mem
>
>
>
> PID   USER     PR NI VIRT    RES    SHR    S %CPU %MEM TIME+   COMMAND
> 17328 postgres 20 0  3485020 66760  20980  R 21.5 0.4  1:37.27 postgres: engine engine 127.0.0.1(35212) SELECT
> 8269  postgres 20 0  3484796 68200  22780  R 21.2 0.4  2:15.36 postgres: engine engine 127.0.0.1(47116) SELECT
> 9016  postgres 20 0  3520356 279756 199452 R 21.2 1.7  7:24.29 postgres: engine engine 127.0.0.1(42992) SELECT
> 16878 postgres 20 0  3498100 66160  18468  R 21.2 0.4  2:00.82 postgres: engine engine 127.0.0.1(34238) SELECT
> 16751 postgres 20 0  3486388 215784 169404 R 20.9 1.3  1:56.38 postgres: engine engine 127.0.0.1(34008) SELECT
> 17868 postgres 20 0  3487860 215472 167796 R 20.9 1.3  1:07.40 postgres: engine engine 127.0.0.1(36312) SELECT
> 8272  postgres 20 0  3490392 76912  25288  R 20.5 0.5  2:30.15 postgres: engine engine 127.0.0.1(47124) SELECT
> 8274  postgres 20 0  3495800 83144  26100  R 20.5 0.5  2:56.66 postgres: engine engine 127.0.0.1(47130) SELECT
> 9015  postgres 20 0  3523344 283388 198908 R 20.5 1.7  7:19.91 postgres: engine engine 127.0.0.1(42990) SELECT
> 16879 postgres 20 0  3488296 72180  23744  R 20.5 0.4  1:30.01 postgres: engine engine 127.0.0.1(34242) SELECT
> 17241 postgres 20 0  3486540 215716 168024 R 20.5 1.3  1:47.58 postgres: engine engine 127.0.0.1(35018) SELECT
> 17242 postgres 20 0  3495864 69172  20988  R 20.5 0.4  1:54.09 postgres: engine engine 127.0.0.1(35022) SELECT
> 17668 postgres 20 0  3488576 54484  15080  R 20.5 0.3  1:28.91 postgres: engine engine 127.0.0.1(35896) SELECT
> 8266  postgres 20 0  3490688 222344 170852 R 20.2 1.4  2:58.95 postgres: engine engine 127.0.0.1(47112) SELECT
> 8268  postgres 20 0  3503420 241888 177500 R 20.2 1.5  3:10.34 postgres: engine engine 127.0.0.1(47117) SELECT
> 8275  postgres 20 0  3510316 253340 181688 R 20.2 1.5  4:12.02 postgres: engine engine 127.0.0.1(47132) SELECT
> 9014  postgres 20 0  3523872 284636 199424 R 20.2 1.7  7:51.82 postgres: engine engine 127.0.0.1(42988) SELECT
> 9027  postgres 20 0  3514872 265384 189656 R 20.2 1.6  5:21.63 postgres: engine engine 127.0.0.1(43012) SELECT
> 17546 postgres 20 0  3475628 55248  19108  R 20.2 0.3  1:33.40 postgres: engine engine 127.0.0.1(35668) SELECT
> 17669 postgres 20 0  3483284 66920  22488  R 20.2 0.4  1:28.01 postgres: engine engine 127.0.0.1(35898) SELECT
> 17670 postgres 20 0  3504988 78300  22032  R 20.2 0.5  1:18.96 postgres: engine engine 127.0.0.1(35900) SELECT
> 17865 postgres 20 0  3485084 66688  21316  R 20.2 0.4  1:14.00 postgres: engine engine 127.0.0.1(36306) SELECT
> 7869  postgres 20 0  3492780 224272 171620 R 19.9 1.4  2:57.03 postgres: engine engine 127.0.0.1(46542) SELECT
>
>
>
> Thx Christian
>
>
>
There is no iowait, only CPU contention, and we have this idle-in-transaction
session. Is this the whole output of pg_stat_activity?
Also please add:
- select * from pg_locks;
- select relname, n_tup_upd, n_dead_tup from pg_stat_user_tables order by n_dead_tup desc limit 30;
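For example, a minimal sketch (assuming the PostgreSQL 9.2-era catalogs that
match the "waiting" column in your output) that joins pg_locks back to the
statement each session is running, so locks can be tied to queries:

  -- ungranted locks sort first, so anything blocked shows up at the top
  select l.pid, l.locktype, l.mode, l.granted, a.state, a.query
  from pg_locks l
  join pg_stat_activity a on a.pid = l.pid
  order by l.granted, l.pid;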
To lower the pressure, try stopping dwh and see how the system behaves.
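(On an oVirt 4.0 engine that should be something like
"systemctl stop ovirt-engine-dwhd", but please verify the exact service
name on your installation.)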
>
>
>
>
> *From:* users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] *On
> behalf of *Grundmann, Christian
> *Sent:* Thursday, 29 September 2016 15:29
> *To:* 'users at ovirt.org' <users at ovirt.org>
> *Subject:* [ovirt-users] WG: High Database Load after updating to oVirt
> 4.0.4
>
>
>
> Maybe a side effect of this bug?
>
>
>
> https://bugzilla.redhat.com/1302752
>
>
>
> I did a restore to 4.0.3 and the timeouts are gone
>
>
>
> *From:* Grundmann, Christian
> *Sent:* Tuesday, 27 September 2016 10:33
> *To:* users at ovirt.org
> *Subject:* High Database Load after updating to oVirt 4.0.4
>
>
>
> After the 4.0.4 update we have a very high database load during VM startup,
> so high that the API calls are getting timeouts.
>
>
>
> I attached the output of
>
> select * from pg_stat_activity
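>
> A sketch, assuming PostgreSQL 9.2 or later for the "state" column: sorting
> the active statements by runtime makes the hot spots easier to see.
>
>   select pid, now() - query_start as runtime, state, query
>   from pg_stat_activity
>   where state = 'active'
>   order by runtime desc;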
>
>
>
>
>
> Is there a way to downgrade to 4.0.3?
>
>
>
> Thx Christian
>
>
>
>
>
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>