From: "Gilad Chaplik" <gchaplik(a)redhat.com>
To: "Liran Zelkha" <liran.zelkha(a)gmail.com>
Cc: "Yaniv Dary" <ydary(a)redhat.com>, "Kobi Ianko"
<kobi(a)redhat.com>, devel(a)linode01.ovirt.org, "engine-devel"
<engine-devel(a)ovirt.org>
Sent: Thursday, April 10, 2014 11:07:12 AM
Subject: Re: [Devel] [Engine-devel] vds_dynamic refactor
----- Original Message -----
> From: "Liran Zelkha" <liran.zelkha(a)gmail.com>
> To: "Yaniv Dary" <ydary(a)redhat.com>
> Cc: "Gilad Chaplik" <gchaplik(a)redhat.com>, "Kobi Ianko"
<kobi(a)redhat.com>,
> devel(a)linode01.ovirt.org, "engine-devel"
> <engine-devel(a)ovirt.org>
> Sent: Wednesday, April 9, 2014 4:22:39 PM
> Subject: Re: [Devel] [Engine-devel] vds_dynamic refactor
>
> On Wed, Apr 9, 2014 at 3:34 PM, Yaniv Dary <ydary(a)redhat.com> wrote:
> >
> > Why not move only status, which changes a lot, to statistics and leave
> > everything as is?
> >
> >
> Exactly. No need for a new table. Use the existing ones.
Why should the DWH read the entire dynamic record for each status
change/available resources change? (for VMs as well).
The DWH doesn't work like that. We create views over dynamic\statistics and
dynamic\static; these feed the configuration and stats tables on the history DB side.
From dynamic\statistics we collect every minute no matter what, and
from dynamic\static we collect only when update_date changes in static.
We cannot rely on the dynamic update_date since it changes so often, so from my
point of view, if we can keep the changes to dynamic minimal and move status to
statistics, that would be the best thing.
What we currently have is a task that runs once an hour, checks whether any change
was made in the dynamic table (via very ugly joins and rejects), and syncs if there
was any change.
It would be even better if you also moved swap_size, which doesn't change much, to
dynamic.
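
To make this concrete, here is a minimal SQL sketch of the split, assuming simplified
tables (the real vds_dynamic/vds_statistics have many more columns, and everything here
beyond vds_id, status and update_date is illustrative):

-- Sketch only: move the frequently-updated status column out of vds_dynamic
-- and into vds_statistics, so dynamic becomes part of the "configuration" side.
ALTER TABLE vds_statistics ADD COLUMN status INTEGER;

UPDATE vds_statistics s
   SET status = d.status
  FROM vds_dynamic d
 WHERE d.vds_id = s.vds_id;

ALTER TABLE vds_dynamic DROP COLUMN status;

-- DWH collection then needs no hourly diff of the whole dynamic table:
--   * the statistics side is sampled every minute unconditionally;
--   * the configuration side is read only for hosts whose vds_static.update_date
--     changed since the last sync (the timestamp below is a placeholder).
SELECT st.vds_id, st.update_date, d.*
  FROM vds_static st
  JOIN vds_dynamic d ON d.vds_id = st.vds_id
 WHERE st.update_date > TIMESTAMP '2014-04-10 00:00:00';

The point is that change detection on the configuration side keys off a single
update_date in static, while the statistics side is simply sampled on a timer.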
>
> >
> >
> > Yaniv
> >
> > ________________________________
> >
> > From: "Liran Zelkha" <liran.zelkha(a)gmail.com>
> > To: "Gilad Chaplik" <gchaplik(a)redhat.com>
> > Cc: "Kobi Ianko" <kobi(a)redhat.com>, devel(a)linode01.ovirt.org,
> > "engine-devel" <engine-devel(a)ovirt.org>
> > Sent: Monday, April 7, 2014 8:51:00 AM
> >
> > Subject: Re: [Devel] [Engine-devel] vds_dynamic refactor
> >
> >
> >
> >
> > On Sun, Apr 6, 2014 at 11:03 PM, Gilad Chaplik <gchaplik(a)redhat.com>
> > wrote:
> >>
> >> ----- Original Message -----
> >> > From: "Liran Zelkha" <liran.zelkha(a)gmail.com>
> >> > To: "Gilad Chaplik" <gchaplik(a)redhat.com>
> >> > Cc: "Kobi Ianko" <kobi(a)redhat.com>,
devel(a)linode01.ovirt.org,
> >> > "engine-devel" <engine-devel(a)ovirt.org>
> >> > Sent: Sunday, April 6, 2014 8:51:02 PM
> >> > Subject: Re: [Devel] [Engine-devel] vds_dynamic refactor
> >> >
> >> > On Sun, Apr 6, 2014 at 6:32 PM, Gilad Chaplik <gchaplik(a)redhat.com>
> >> > wrote:
> >> >
> >> > > ----- Original Message -----
> >> > > > From: "Liran Zelkha"
<liran.zelkha(a)gmail.com>
> >> > > > To: "Kobi Ianko" <kobi(a)redhat.com>
> >> > > > Cc: "Gilad Chaplik" <gchaplik(a)redhat.com>,
> >> > > > devel(a)linode01.ovirt.org,
> >> > > "engine-devel" <engine-devel(a)ovirt.org>
> >> > > > Sent: Sunday, April 6, 2014 3:40:13 PM
> >> > > > Subject: Re: [Devel] [Engine-devel] vds_dynamic refactor
> >> > > >
> >> > > > On Sun, Apr 6, 2014 at 3:37 PM, Kobi Ianko <kobi(a)redhat.com>
> >> > > > wrote:
> >> > > >
> >> > > > > Joining in...
> >> > > > > From my point of view, in real life a user shouldn't have that many
> >> > > > > VDSs on one Engine (from a DB point of view).
> >> > > > > Modern DB systems handle tables with millions of records and many
> >> > > > > relations; do we really have a performance issue here?
> >> > > > > We could prefer a more easily maintained implementation in this case
> >> > > > > over DB performance.
> >> > > > >
> >> > > > Yes we do. We make many queries on the VDS view, which is a VERY
> >> > > > complex view.
> >> > > >
> >> > >
> >> > > Actually I quite agree with Kobi. What is the plan for VMs? Why do we
> >> > > start with VDS...
> >> > > What is the biggest deployment you know of?
> >> > >
> >> > We start with VDS because in an idle system, with 200 hosts and several
> >> > thousand VMs, this is what you get as the top queries against the
> >> > database. Look at how many times getvds is called.
> >> > [image: Inline image 1]
> >> > BTW - the second query is an example of abusing the dynamic query
> >> > mechanism. The 4th query (an update command) is a set of useless
> >> > update_vds_dynamic commands.
> >> >
> >> > For reference, the explain plan of get VDS is something like this:
> >> >
> >> > QUERY PLAN
> >> > --------------------------------------------------------------------------------------------------------------
> >> >  Nested Loop  (cost=9.30..46.75 rows=6 width=9060) (actual time=0.063..0.068 rows=1 loops=1)
> >> >    Join Filter: (vds_static.vds_id = vds_statistics.vds_id)
> >> >    ->  Seq Scan on vds_statistics  (cost=0.00..1.01 rows=1 width=109) (actual time=0.008..0.008 rows=1 loops=1)
> >> >    ->  Nested Loop  (cost=9.30..45.64 rows=6 width=8983) (actual time=0.048..0.052 rows=1 loops=1)
> >> >          Join Filter: (vds_groups.vds_group_id = vds_static.vds_group_id)
> >> >          ->  Nested Loop Left Join  (cost=0.00..9.29 rows=1 width=1389) (actual time=0.013..0.013 rows=1 loops=1)
> >> >                ->  Seq Scan on vds_groups  (cost=0.00..1.01 rows=1 width=1271) (actual time=0.003..0.003 rows=1 loops=1)
> >> >                ->  Index Scan using pk_storage_pool on storage_pool  (cost=0.00..8.27 rows=1 width=134) (actual time=0.008..0.008 rows=1 loops=1)
> >> >                      Index Cond: (vds_groups.storage_pool_id = id)
> >> >          ->  Hash Right Join  (cost=9.30..36.28 rows=6 width=7610) (actual time=0.033..0.037 rows=1 loops=1)
> >> >                Hash Cond: (vds_spm_id_map.vds_id = vds_static.vds_id)
> >> >                ->  Seq Scan on vds_spm_id_map  (cost=0.00..22.30 rows=1230 width=20) (actual time=0.003..0.003 rows=1 loops=1)
> >> >                ->  Hash  (cost=9.29..9.29 rows=1 width=7606) (actual time=0.019..0.019 rows=1 loops=1)
> >> >                      Buckets: 1024  Batches: 1  Memory Usage: 2kB
> >> >                      ->  Nested Loop  (cost=0.00..9.29 rows=1 width=7606) (actual time=0.012..0.013 rows=1 loops=1)
> >> >                            ->  Seq Scan on vds_dynamic  (cost=0.00..1.01 rows=1 width=1895) (actual time=0.006..0.006 rows=1 loops=1)
> >> >                            ->  Index Scan using pk_vds_static on vds_static  (cost=0.00..8.27 rows=1 width=5711) (actual time=0.005..0.006 rows=1 loops=1)
> >> >                                  Index Cond: (vds_id = vds_dynamic.vds_id)
> >> >  Total runtime: 0.299 ms
> >> > (19 rows)
> >> >
> >> > It's terrible. Adding any additional join will make this worse. Please
> >> > don't add any more tables...
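
(For reference, the plan above corresponds roughly to a view of the following shape.
This is reconstructed only from the join filters shown in the plan, not the actual
vds view definition, which pulls in far more columns.)

-- Reconstructed from the join conditions in the plan above; illustrative only.
SELECT *
  FROM vds_groups
  LEFT JOIN storage_pool   ON storage_pool.id = vds_groups.storage_pool_id
  JOIN vds_static          ON vds_static.vds_group_id = vds_groups.vds_group_id
  JOIN vds_dynamic         ON vds_dynamic.vds_id = vds_static.vds_id
  JOIN vds_statistics      ON vds_statistics.vds_id = vds_static.vds_id
  LEFT JOIN vds_spm_id_map ON vds_spm_id_map.vds_id = vds_static.vds_id;

Every extra table hung off the vds family would show up here as yet another join.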
> >>
> >> Thank you for the detailed explanation, my comments:
> >>
> >> * a very long time isn't an argument for not adding another table (the
> >> overhead should be negligible);
> >> currently we have an unrelated problem, and we need to solve it.
> >
> > Of course it is. A very long time for a query that you execute many times
> > is THE factor. Who said the join has no performance effect? Have you
> > tested it? Under load? Under many writes/updates?
> >>
> >>
> >> * > We start with VDS because in an idle system, with 200 hosts and
> >> > several thousand VMs, this is what you get as the top queries against
> >> > the database.
> >>
> >> so, what if fetching VMs takes 10 minutes and it gets called a single time?
> >
> > Where do you see 10 minutes? If you are looking at the red bar it's the
> > inherent time - total query time * number of queries.
> >>
> >>
> >> * you didn't reply to my suggestion of constructing the VDS records
> >> in the DB without using joins.
> >
> > If you mean materialized views - we don't have them in Postgres just yet...
> > And even if we did, since we do many updates to vds_statistics and
> > vds_dynamic, I'm not sure it would have a positive impact on our
> > performance. If you mean joins in the database - everything that is based
> > on VDS is done in the database. That's part of the problem, since we could
> > cache some information and only query the dynamic/statistics part of VDS,
> > but that's another matter.
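
(To illustrate the caching idea above: if the static/config part of a host were cached
in the engine, a periodic refresh would only need the volatile rows. The query below is
just a sketch under that assumption; the statistics columns shown are examples, not the
current DAO SQL.)

-- Hypothetical "refresh only the volatile part" query for one host;
-- the cached vds_static/vds_groups data would be joined in memory.
SELECT d.status,
       s.usage_cpu_percent,
       s.usage_mem_percent
  FROM vds_dynamic d
  JOIN vds_statistics s ON s.vds_id = d.vds_id
 WHERE d.vds_id = $1;  -- host id parameter (prepared statement)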
> >>
> >>
> >>
> >> >
> >> > >
> >> > > > >
> >> > > > > ----- Original Message -----
> >> > > > > > From: "Gilad Chaplik"
<gchaplik(a)redhat.com>
> >> > > > > > To: "Liran Zelkha"
<liran.zelkha(a)gmail.com>
> >> > > > > > Cc: devel(a)linode01.ovirt.org,
"engine-devel"
> >> > > > > > <engine-devel(a)ovirt.org
> >> > > >
> >> > > > > > Sent: Sunday, April 6, 2014 3:32:26 PM
> >> > > > > > Subject: Re: [Devel] [Engine-devel] vds_dynamic refactor
> >> > > > > >
> >> > > > > > ----- Original Message -----
> >> > > > > > > From: "Liran Zelkha"
<liran.zelkha(a)gmail.com>
> >> > > > > > > To: "Gilad Chaplik"
<gchaplik(a)redhat.com>
> >> > > > > > > Cc: "Itamar Heim"
<iheim(a)redhat.com>,
> >> > > > > > > devel(a)linode01.ovirt.org,
> >> > > > > > > "engine-devel"
<engine-devel(a)ovirt.org>
> >> > > > > > > Sent: Sunday, April 6, 2014 3:26:24 PM
> >> > > > > > > Subject: Re: [Engine-devel] vds_dynamic
refactor
> >> > > > > > >
> >> > > > > > > On Sun, Apr 6, 2014 at 3:18 PM, Gilad
Chaplik
> >> > > > > > > <gchaplik(a)redhat.com
> >> > > >
> >> > > > > wrote:
> >> > > > > > >
> >> > > > > > > > ----- Original Message -----
> >> > > > > > > > > From: "Itamar Heim"
<iheim(a)redhat.com>
> >> > > > > > > > > To: "Liran Zelkha"
<liran.zelkha(a)gmail.com>
> >> > > > > > > > > Cc: "Gilad Chaplik"
<gchaplik(a)redhat.com>,
> >> > > > > devel(a)linode01.ovirt.org,
> >> > > > > > > > "engine-devel"
<engine-devel(a)ovirt.org>
> >> > > > > > > > > Sent: Sunday, April 6, 2014
11:33:12 AM
> >> > > > > > > > > Subject: Re: [Engine-devel]
vds_dynamic refactor
> >> > > > > > > > >
> >> > > > > > > > > On 04/06/2014 11:32 AM, Liran Zelkha wrote:
> >> > > > > > > > > >
> >> > > > > > > > > > On Sun, Apr 6, 2014 at 11:22 AM, Itamar Heim
> >> > > > > > > > > > <iheim(a)redhat.com> wrote:
> >> > > > > > > > > >
> >> > > > > > > > > >     On 04/03/2014 07:51 PM, Liran Zelkha wrote:
> >> > > > > > > > > >
> >> > > > > > > > > >         The problem is with both updates and selects.
> >> > > > > > > > > >         For selects - to get all the information for the VDS we
> >> > > > > > > > > >         have multiple joins. Adding another one will hurt
> >> > > > > > > > > >         performance even more.
> >> > > > > > > > > >         For updates - we have vds_static, which hardly changes,
> >> > > > > > > > > >         and vds_statistics, which changes all the time.
> >> > > > > > > > > >         vds_dynamic is not changed a lot - but is updated all the
> >> > > > > > > > > >         time because of the status. I think it's best to split it
> >> > > > > > > > > >         into the two existing tables (BTW - relevant for VM as well)
> >> > > > > > > > > >
> >> > > > > > > > > >     but we don't update it unless the status has changed, which
> >> > > > > > > > > >     is a rare occurrence?
> >> > > > > > > > > >
> >> > > > > > > > > > Actually - no. We can definitely see times when we are updating
> >> > > > > > > > > > vds_dynamic with no reason at all. I tried to create patches for
> >> > > > > > > > > > that - but it happens from many different places in the code.
> >> > > > > > > > >
> >> > > > > > > > > what would be updating vds_dynamic for status, if not
> >> > > > > > > > > originating in update run time info?
> >> > > > > > > >
> >> > > > > > > > We have separate DB flows for that (updateStatus and
> >> > > > > > > > updatePartialVdsDynamicCalc and more in VdsDynamicDAODbFacadeImpl).
> >> > > > > > > > A question: do you know if we update status in updateVdsDynamic? :-)
> >> > > > > > > > Not sure, but I found a possible race for pending resources (cpu,
> >> > > > > > > > mem), LOL :-)
> >> > > > > > > >
> >> > > > > > > > I think we do but not sure. Will check.
> >> > > > > >
> >> > > > > > Of course it is, that was a rhetorical question :-) (a lot of
> >> > > > > > emoticons and LOLs ;-))
> >> > > > > >
> >> > > > > > >
> >> > > > > > >
> >> > > > > > > > My original thought of having vds_on_boot still holds.
> >> > > > > > > >
> >> > > > > > > >
> >> > > > > > > >
> >> > > > > > > Let's talk f2f on Tuesday?
> >> > > > > >
> >> > > > > > I'd prefer to reach conclusions here; I'd like everyone to be
> >> > > > > > involved in a root issue like this one.
> >> > > > >
> >> > > >
> >> > > > What is the update frequency of this field?
> >> > > >
> >> > >
> >> > > which field?
> >> > > status? pending resources? on boot fields?
> >> > > iinm, status is updated mostly by user actions, at least in positive
> >> > > scenarios, and not that often.
> >> > >
> >> > >
> >> > > > > >
> >> > > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >
> >
> >
> >
> >
>