Help to tune the query [message #640672] Mon, 03 August 2015 07:15
grpatwari
Messages: 288
Registered: June 2008
Location: Hyderabad
Senior Member
Hi All,

I would like to tune the query as part of my assignment. Please help me to tune this query.

Click here



[EDITED by LF: applied [spoiler] tags]

[Updated on: Thu, 06 August 2015 03:25] by Moderator

Re: Help to tune the query [message #640675 is a reply to message #640672] Mon, 03 August 2015 07:34
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
Your DBA has made one obvious mistake: he has set parallel_degree_policy=AUTO (or perhaps LIMITED) but has not run the
dbms_resource_manager.calibrate_io procedure. A mistake like that makes one wonder if there are other admin errors.
You will have noticed the cardinality mis-estimate on step 84 of the plan. Has he created a histogram on "SAPFD"."ODF_PARENT_ID"?
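
For reference, the calibration John mentions is a one-off PL/SQL call. The sketch below assumes a privileged session and a quiet system (calibration drives real I/O against the datafiles); the disk count and latency figures are placeholder assumptions to adjust for your storage:

```sql
-- One-off I/O calibration (sketch; num_physical_disks and max_latency
-- are assumptions -- adjust them for the actual storage).
DECLARE
  l_max_iops       PLS_INTEGER;
  l_max_mbps       PLS_INTEGER;
  l_actual_latency PLS_INTEGER;
BEGIN
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 8,
    max_latency        => 20,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops=' || l_max_iops || ' max_mbps=' || l_max_mbps);
END;
/
```

Once it completes, V$IO_CALIBRATION_STATUS should report READY rather than NOT AVAILABLE.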

The code itself is weird. Why use a mixture of ANSI join syntax (good) and the old Oracle join syntax (bad)?

Are you sure you need those outer joins? Would you actually lose any rows if they were inner joins?

Are you sure you need that DISTINCT? Are you actually getting duplicates if you remove the outer joins?

Why have you given the source code for your Z_ODF_PROJECT_V2 view?

[Updated on: Mon, 03 August 2015 07:36]

Re: Help to tune the query [message #640692 is a reply to message #640675] Mon, 03 August 2015 22:41
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
As John demonstrates, there is a lot to take in here. I for one missed the part about calibrate_io; good catch.

Looking at the cardinalities in the plan, you have provided both estimates and actuals, which is excellent for us to review. The cardinalities clearly suggest that the statistics are way off. I suggest you try the following to learn more.

1. Use some cardinality hints to force correct estimates into the optimization process, then see if the plan that falls out looks different. You might even run it to see what timing it gives.
2. If you get an acceptable plan, try collecting statistics again to see whether bringing them current fixes the issue, so that you no longer need the cardinality hints.
3. If necessary, try collecting histograms where estimates are still off.
4. Then, if needed, try the dynamic_sampling(4) hint to see if column dependence is an issue.
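
As a sketch of what steps 1 and 4 look like in practice (the filter and the 11000 row figure are illustrative assumptions; use the A-Rows values from your own plan):

```sql
-- Step 1 sketch: a cardinality hint forcing the optimizer to plan as if
-- ODF_CA_SAP_FD returns ~11000 rows (11000 is an assumption -- use the
-- A-Rows figure from the actual plan).
SELECT /*+ CARDINALITY(sapfd 11000) */
       sapfd.odf_parent_id
FROM   odf_ca_sap_fd sapfd
WHERE  sapfd.wbs_status = 'REL';

-- Step 4 sketch: dynamic sampling at level 4 on the same table instead.
SELECT /*+ DYNAMIC_SAMPLING(sapfd 4) */
       sapfd.odf_parent_id
FROM   odf_ca_sap_fd sapfd
WHERE  sapfd.wbs_status = 'REL';
```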

It would seem you are a beginner at tuning. Is this true? That is OK if you are; nothing wrong with it. We all started that way. I only ask so we can provide information at a rate that is good for you to absorb as we go. For example, we need to know if:

1. you understand the difference between estimated and actuals in cardinality
2. you know what dynamic sampling is and what dynamic_sampling(4) will do
3. you know what dependence is
4. you know what histograms are and in what situations they can help and when they cannot
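
For point 4, gathering a histogram on a skewed column is a dbms_stats call. A sketch, with the schema and the 254-bucket count as assumptions:

```sql
-- Sketch: gather a histogram on the skewed join column
-- (owner and bucket count are assumptions; adjust for your schema).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'ODF_CA_SAP_FD',
    method_opt => 'FOR COLUMNS ODF_PARENT_ID SIZE 254',
    cascade    => TRUE);
END;
/
```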


Kevin

[Updated on: Mon, 03 August 2015 22:42]

Re: Help to tune the query [message #640704 is a reply to message #640692] Tue, 04 August 2015 04:50
grpatwari
Messages: 288
Registered: June 2008
Location: Hyderabad
Senior Member
Thank you for your valuable explanation.

I have observed the following points in my database. Please advise.

1. parallel_degree_policy is set to AUTO, but the STATUS column of V$IO_CALIBRATION_STATUS shows 'NOT AVAILABLE'
2. The SAPFD.ODF_PARENT_ID column has a HEIGHT BALANCED histogram
3. I have no idea why they used a mixture of ANSI join syntax and the old Oracle join syntax; it is just how the requirement was implemented
4. The outer joins are not strictly required if we get the same output with inner joins
5. We get duplicates if the DISTINCT keyword is not used
6. The Z_ODF_PROJECT_V2 view is not mandatory. I tried without the view, but one of the column values came out wrong, and there was not much improvement in response time either

As per my understanding:
1. The E-Rows (estimated rows) column shows that the optimizer thought it would get X rows; the A-Rows (actual rows) column shows that it actually got Y rows
2. Dynamic sampling takes place only at hard parse time and is used to dynamically generate better statistics for the optimizer to use, hence the name. Dynamic sampling has levels 0 to 10. I checked with the dynamic sampling hint at level 4, but there was no change in the plan
3. I have no idea about dependence
4. A histogram is a special type of column statistic that provides more detailed information about the data distribution in a table column. Histograms are used for columns that contain data skew (a nonuniform distribution of data within the column); a histogram enables the optimizer to generate accurate cardinality estimates for filter and join predicates that involve these columns
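
(For reference, the histogram noted in point 2 can be confirmed from the data dictionary; a sketch, assuming the table is in the current schema -- use ALL_TAB_COL_STATISTICS with an OWNER predicate otherwise:)

```sql
-- Confirm the histogram type on the join column.
SELECT column_name, histogram, num_buckets, num_distinct
FROM   user_tab_col_statistics
WHERE  table_name  = 'ODF_CA_SAP_FD'
AND    column_name = 'ODF_PARENT_ID';
```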
Re: Help to tune the query [message #640716 is a reply to message #640704] Tue, 04 August 2015 07:26
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
Thank you for taking the time to answer those questions. That gives us an indication of your experience level.

The first thing you need to do is get a baseline of the query cost. You can do this by running the following create table command.

create table KEVTEMP_BASLINE
nologging
as
SELECT DISTINCT INV.name, 
                INV.id, 
                INV.manager_id                         AS MANAGER_ID, 
                ODF_PR.eri_cfr                         AS CFR, 
                (SELECT Nvl(SUM(bms_past_due), 0) 
                 FROM   odf_ca_sap_bi BILL 
                        inner join odf_ca_sap_fd sapfd 
                                ON BILL.odf_parent_id = sapfd.id 
                        join inv_investments A 
                          ON A.id = sapfd.odf_parent_id 
                        join odf_ca_project B 
                          ON A.id = B.id 
                 WHERE  B.eri_proj_number = ODF_PR.eri_proj_number 
                        AND A.is_active = 1)           AS BP_KPI, 
                ODF_PR.eri_proj_number, 
                REPORT.egct_st_stat_new, 
                ODF_PR.p_ass_cst                       APL_ASSIGN_COST, 
                ODF_PR.p_budget                        APL_BUDGET, 
                ODF_PR.p_ordbkd                        APL_ORDERS_BKD, 
                ODF_PR.um_pct                          APL_UGM_PCT, 
                Round(ODF_PR.ass_prj_nc_um_pct, 2)     ERI_CALC_TPR_UM_PCT, 
                -- DECODE(sapfd.kpi_epcq,'G',0,NULL,NULL,1) KPI_1_STATUSKEYCOLOR, 
                (SELECT Max(CASE 
                              WHEN sapfd.kpi_epcq IS NULL THEN 0 
                              WHEN sapfd.kpi_epcq = 'G' THEN 1 
                              ELSE 2 
                            END) 
                 FROM   odf_ca_sap_fd sapfd 
                        inner join odf_ca_project A 
                                ON sapfd.odf_parent_id = a.id 
                        join inv_investments B 
                          ON A.id = B.id 
                 WHERE  A.eri_proj_number = ODF_PR.eri_proj_number 
                        --AND A.ERI_PRJ_LEVEL    != '1' 
                        AND sapfd.ra_key != 'ZPS006' 
                        AND sapfd.wbs_status = 'REL' 
                        AND B.is_active = 1)           WBS_KPI, 
                Decode(ODF_PR.kpi_3_statuskeycolor, 'G', 1, 
                                                    'R', 2, 
                                                    'Y', 2, 
                                                    'W', 1, 
                                                    0) KPI_3_STATUSKEYCOLOR, 
                ODF_PR.eri_proj_number, 
                ODF_PR.p_currency_code                 EGCT_PROJ_CURRENCY, 
                (SELECT full_name 
                 FROM   srm_resources 
                 WHERE  user_id = INV.manager_id)      FULL_NAME, 
                CASE 
                  WHEN INV.stage_code IN ( 'PROPS_C_TG1_APRVD', 'PROPS_C_TG1_OPT_OUT' ) THEN 'TG1' 
                  WHEN INV.stage_code = 'PROPS_C_TG2_APRVD' THEN 'TG2' 
                  WHEN INV.stage_code = 'PROPS_C_TG3_APRVD' THEN 'TG3' 
                  WHEN INV.stage_code = 'PROPS_C_TG4_APRVD' THEN 'TG4' 
                  WHEN INV.stage_code = 'PROPS_C_TG5_APRVD' THEN 'TG5' 
                  WHEN INV.stage_code = 'PROPS_C_MS6_APRVD' THEN 'MS6' 
                  ELSE '' 
                END                                    LATEST_APPROVED_TOLLGATE, 
                (SELECT Max(Decode(priority_code, 'LOW', 1, 
                                                  'MEDIUM', 2, 
                                                  'HIGH', 3, 
                                                  0)) 
                 FROM   rim_risks_and_issues 
                 WHERE  table_name = 'SRM_PROJECTS' 
                        AND type_code = 'ISSUE' 
                        AND status_code != 'CLOSED' 
                        AND pk_id = INV.id)            AS MAX_ISSUE, 
                (SELECT Max(probability_enum * impact_enum) 
                 FROM   rim_risks_and_issues 
                 WHERE  table_name = 'SRM_PROJECTS' 
                        AND type_code = 'RISK' 
                        AND status_code != 'CLOSED' 
                        AND pk_id = INV.id)            AS MAX_RISK 
FROM   odf_project_v2 ODF_PR 
       join inv_investments INV
          ON INV.id = ODF_PR.odf_pk 
       inner join odf_ca_sap_fd sapfd
          ON sapfd.odf_parent_id = inv.id 
       left outer join (SELECT REP.odf_parent_id PROJECTID, 
                               REP.egct_st_stat_new 
                        FROM   odf_ca_project ODF 
                               join odf_ca_catsprjstatusrep REP 
                                 ON ODF.id = REP.odf_parent_id 
                               join (SELECT ST2.odf_parent_id, 
                                            Max(ST2.id) ST_ID 
                                     FROM   (SELECT Max(report_date) AS rep_date, 
                                                    odf_parent_id 
                                             FROM   odf_ca_catsprjstatusrep rep, 
                                                    inv_investments proj 
                                             WHERE  rep.odf_parent_id = proj.id 
                                                    AND rep.created_by = proj.manager_id 
                                             GROUP  BY odf_parent_id) st1, 
                                            odf_ca_catsprjstatusrep st2, 
                                            inv_investments inv 
                                     WHERE  st1.rep_date = st2.report_date 
                                            AND st1.odf_parent_id = st2.odf_parent_id 
                                            AND st2.odf_parent_id = inv.id 
                                            AND st2.created_by = inv.manager_id 
                                     GROUP  BY st2.odf_parent_id) MAXREP 
                                 ON MAXREP.st_id = REP.id 
                                    AND MAXREP.odf_parent_id = REP.odf_parent_id 
                        WHERE  ODF.id IN &project_id) REPORT 
          ON REPORT.projectid = INV.id 
WHERE  INV.is_active = 1 
       AND inv.id IN &project_id
/


By creating this table, you materialize the entire result set, but you do not push it across the network to the client. This tells you whether the problem is really in your query, or somewhere else in the network or client side of things. You need to do this or else you could easily end up wasting your time in the wrong place. Assuming this create table command still takes a long time to complete, and thus you have shown that the query itself is the problem, your next step is to simplify the query to make it easier to work with.

Do you know what QUERY DECOMPOSITION AND RECONSTRUCTION is?

It is the process of removing parts of the query to make a simpler form of it so that you can isolate the location of the query's performance problem faster and with less effort. Notice what I have done with the query here.

create table KEVTEMP_SIMPLIFIED_1
nologging
as
SELECT DISTINCT INV.name, 
                INV.id, 
                INV.manager_id                         AS MANAGER_ID, 
                ODF_PR.eri_cfr                         AS CFR, 
/*
                (SELECT Nvl(SUM(bms_past_due), 0) 
                 FROM   odf_ca_sap_bi BILL 
                        inner join odf_ca_sap_fd sapfd 
                                ON BILL.odf_parent_id = sapfd.id 
                        join inv_investments A 
                          ON A.id = sapfd.odf_parent_id 
                        join odf_ca_project B 
                          ON A.id = B.id 
                 WHERE  B.eri_proj_number = ODF_PR.eri_proj_number 
                        AND A.is_active = 1)           AS BP_KPI, 
*/
                ODF_PR.eri_proj_number, 
                REPORT.egct_st_stat_new, 
                ODF_PR.p_ass_cst                       APL_ASSIGN_COST, 
                ODF_PR.p_budget                        APL_BUDGET, 
                ODF_PR.p_ordbkd                        APL_ORDERS_BKD, 
                ODF_PR.um_pct                          APL_UGM_PCT, 
                Round(ODF_PR.ass_prj_nc_um_pct, 2)     ERI_CALC_TPR_UM_PCT, 
                -- DECODE(sapfd.kpi_epcq,'G',0,NULL,NULL,1) KPI_1_STATUSKEYCOLOR, 
/*
                (SELECT Max(CASE 
                              WHEN sapfd.kpi_epcq IS NULL THEN 0 
                              WHEN sapfd.kpi_epcq = 'G' THEN 1 
                              ELSE 2 
                            END) 
                 FROM   odf_ca_sap_fd sapfd 
                        inner join odf_ca_project A 
                                ON sapfd.odf_parent_id = a.id 
                        join inv_investments B 
                          ON A.id = B.id 
                 WHERE  A.eri_proj_number = ODF_PR.eri_proj_number 
                        --AND A.ERI_PRJ_LEVEL    != '1' 
                        AND sapfd.ra_key != 'ZPS006' 
                        AND sapfd.wbs_status = 'REL' 
                        AND B.is_active = 1)           WBS_KPI, 
*/
                Decode(ODF_PR.kpi_3_statuskeycolor, 'G', 1, 
                                                    'R', 2, 
                                                    'Y', 2, 
                                                    'W', 1, 
                                                    0) KPI_3_STATUSKEYCOLOR, 
                ODF_PR.eri_proj_number, 
                ODF_PR.p_currency_code                 EGCT_PROJ_CURRENCY, 
/*
                (SELECT full_name 
                 FROM   srm_resources 
                 WHERE  user_id = INV.manager_id)      FULL_NAME, 
                CASE 
                  WHEN INV.stage_code IN ( 'PROPS_C_TG1_APRVD', 'PROPS_C_TG1_OPT_OUT' ) THEN 'TG1' 
                  WHEN INV.stage_code = 'PROPS_C_TG2_APRVD' THEN 'TG2' 
                  WHEN INV.stage_code = 'PROPS_C_TG3_APRVD' THEN 'TG3' 
                  WHEN INV.stage_code = 'PROPS_C_TG4_APRVD' THEN 'TG4' 
                  WHEN INV.stage_code = 'PROPS_C_TG5_APRVD' THEN 'TG5' 
                  WHEN INV.stage_code = 'PROPS_C_MS6_APRVD' THEN 'MS6' 
                  ELSE '' 
                END                                    LATEST_APPROVED_TOLLGATE, 
*/
/*
                (SELECT Max(Decode(priority_code, 'LOW', 1, 
                                                  'MEDIUM', 2, 
                                                  'HIGH', 3, 
                                                  0)) 
                 FROM   rim_risks_and_issues 
                 WHERE  table_name = 'SRM_PROJECTS' 
                        AND type_code = 'ISSUE' 
                        AND status_code != 'CLOSED' 
                        AND pk_id = INV.id)            AS MAX_ISSUE, 
                (SELECT Max(probability_enum * impact_enum) 
                 FROM   rim_risks_and_issues 
                 WHERE  table_name = 'SRM_PROJECTS' 
                        AND type_code = 'RISK' 
                        AND status_code != 'CLOSED' 
                        AND pk_id = INV.id)            AS MAX_RISK 
*/
FROM   odf_project_v2 ODF_PR 
       join inv_investments INV
          ON INV.id = ODF_PR.odf_pk 
       inner join odf_ca_sap_fd sapfd
          ON sapfd.odf_parent_id = inv.id 
       left outer join (SELECT REP.odf_parent_id PROJECTID, 
                               REP.egct_st_stat_new 
                        FROM   odf_ca_project ODF 
                               join odf_ca_catsprjstatusrep REP 
                                 ON ODF.id = REP.odf_parent_id 
                               join (SELECT ST2.odf_parent_id, 
                                            Max(ST2.id) ST_ID 
                                     FROM   (SELECT Max(report_date) AS rep_date, 
                                                    odf_parent_id 
                                             FROM   odf_ca_catsprjstatusrep rep, 
                                                    inv_investments proj 
                                             WHERE  rep.odf_parent_id = proj.id 
                                                    AND rep.created_by = proj.manager_id 
                                             GROUP  BY odf_parent_id) st1, 
                                            odf_ca_catsprjstatusrep st2, 
                                            inv_investments inv 
                                     WHERE  st1.rep_date = st2.report_date 
                                            AND st1.odf_parent_id = st2.odf_parent_id 
                                            AND st2.odf_parent_id = inv.id 
                                            AND st2.created_by = inv.manager_id 
                                     GROUP  BY st2.odf_parent_id) MAXREP 
                                 ON MAXREP.st_id = REP.id 
                                    AND MAXREP.odf_parent_id = REP.odf_parent_id 
                        WHERE  ODF.id IN &project_id) REPORT 
          ON REPORT.projectid = INV.id 
WHERE  INV.is_active = 1 
       AND inv.id IN &project_id
/


Notice please how I have removed the SCALAR SUBQUERIES from the problem query. Executing this create table command will give you a runtime that tells you whether your problem sits in one of the SCALAR SUBQUERIES or in your main query. If it is in the scalars, you can add them back one at a time to see which scalars add significantly to the runtime and thus need tuning. If it is in the main query, you can continue to use this simplified version of the query, which will make looking at query plans easier and further testing faster.

So generate new estimates, actuals, and a runtime for this create table command. Once you do that and post the new QEP and runtime, we can talk about using CARDINALITY HINTS to check the validity of the query plan using corrected row counts. But first, do the two create table commands above and post the new results.
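
A common way to capture the estimates, actuals, and runtime together is dbms_xplan against the cursor just executed. A sketch, assuming the statement carries the gather_plan_statistics hint and the session can read the V$ views:

```sql
SET TIMING ON        -- SQL*Plus wallclock for each statement
SET SERVEROUTPUT OFF -- must be off for NULL to mean "last statement run"

-- run the hinted create table command here, then immediately:
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```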

You understand how this helps right?

Also, you have not told us how long the query takes to run. Is it 10 seconds, 10 minutes, or 10 hours? Looking at the QEP, it says the query takes only 0.14 seconds. If that is so, why do you consider it slow?

Kevin

[Updated on: Tue, 04 August 2015 07:34]

Re: Help to tune the query [message #640717 is a reply to message #640704] Tue, 04 August 2015 07:47
John Watson
Messages: 8922
Registered: January 2010
Location: Global Village
Senior Member
To repeat: tell your DBA to calibrate the IO. If he doesn't know how, tell him that he should
not fiddle with things that he doesn't understand (and then get a new DBA).

Also, it looks to me as though he has set CURSOR_SHARING=FORCE. Can you run the query in
a session where you have it set to exact? It would be interesting to see if you get a different plan
when the literals are exposed to the CBO.

Finally, what release of Oracle are you using? You say only 11.2.0. If it is actually 11.2.0.4, you could
run the query with OPTIMIZER_DYNAMIC_SAMPLING=11.
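
Both of these experiments are session-level settings, so no system-wide change is needed; sketches:

```sql
-- Expose the literals to the CBO in this session only.
ALTER SESSION SET cursor_sharing = EXACT;

-- 11.2.0.4 and later only: level 11 lets the optimizer decide
-- automatically when and how much to sample at parse time.
ALTER SESSION SET optimizer_dynamic_sampling = 11;
```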
Re: Help to tune the query [message #640777 is a reply to message #640717] Wed, 05 August 2015 01:15
grpatwari
Messages: 288
Registered: June 2008
Location: Hyderabad
Senior Member
Sorry John... they are not running the calibrate_io procedure.

I am using Oracle version 11.2.0.3.0.

I created both tables successfully without any problems. Please find the plans below.

select * /*+GATHER_PLAN_STATISTICS*/ /*mainquery*/ from KEVTEMP_BASLINE
 
Plan hash value: 1289730321
 
-------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                 |      1 |        |      6 |00:00:00.01 |       2 |
|   1 |  TABLE ACCESS STORAGE FULL| KEVTEMP_BASLINE |      1 |      6 |      6 |00:00:00.01 |       2 |
-------------------------------------------------------------------------------------------------------
 
Note
-----
   - automatic DOP: skipped because of IO calibrate statistics are missing

select * /*+GATHER_PLAN_STATISTICS*/ /*afterremovingsubquery*/ from 
KEVTEMP_SIMPLIFIED_1
 
Plan hash value: 4169945393
 
------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name                 | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                      |      1 |        |      6 |00:00:00.01 |       2 |
|   1 |  TABLE ACCESS STORAGE FULL| KEVTEMP_SIMPLIFIED_1 |      1 |      6 |      6 |00:00:00.01 |       2 |
------------------------------------------------------------------------------------------------------------
 
Note
-----
   - automatic DOP: skipped because of IO calibrate statistics are missing


I have no idea about QUERY DECOMPOSITION AND RECONSTRUCTION. The query takes anywhere from .90 seconds to 1 minute.

Please let me know if I have missed anything.
Re: Help to tune the query [message #640819 is a reply to message #640777] Wed, 05 August 2015 08:25
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
OK, so this was my mistake. I was not clear enough about what I wanted. You will have to run the two create table commands and dump their QEPs. We don't need to see "select * from kevtemp..."; we need to see the plans for the create table commands themselves when they are executed.

create table KEVTEMP_BASLINE
nologging
as
SELECT /*+ gather_plan_statistics */ DISTINCT INV.name, 
                INV.id, 
                INV.manager_id                         AS MANAGER_ID, 
                ODF_PR.eri_cfr                         AS CFR, 


dump the plan.

create table KEVTEMP_SIMPLIFIED_1
nologging
as
SELECT /*+ gather_plan_statistics */ DISTINCT INV.name, 
                INV.id, 
                INV.manager_id                         AS MANAGER_ID, 
                ODF_PR.eri_cfr                         AS CFR, 
/*
                (SELECT Nvl(SUM(bms_past_due), 0) 


dump the plan.

See how we want the plan here? Please do this again and post the plans.

Kevin
Re: Help to tune the query [message #640862 is a reply to message #640819] Thu, 06 August 2015 03:21
grpatwari
Messages: 288
Registered: June 2008
Location: Hyderabad
Senior Member
Thank you very much Kevin for your patience and support.

As suggested, I have taken the plan. Please find the plan as below

Click here



[EDITED by LF: applied [spoiler] tags]

[Updated on: Thu, 06 August 2015 03:26] by Moderator

Re: Help to tune the query [message #640888 is a reply to message #640862] Thu, 06 August 2015 08:14
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
Thanks. Getting better but still needs a little more. Please notice the following:

1. You can see how the second query plan does not contain any of the stuff related to the scalar subqueries that the first plan has. The smaller query and its plan are easier to work with in several ways.

2. Unfortunately, you did not provide us the wallclock runtime for each of the two create table commands. If you had, we could have done two things: 1) compared them to each other, which would have told us how much time the scalars add to the query, and 2) compared them to the original query to see how much time everything other than the query itself (network usage, client side, etc.) adds to the overall execution. So please post the runtimes (labeled) for each of the three queries (original going to the client / create table baseline / create table no scalars).

3. Next, please note that all the rowcounts are still off, but there is one that is way off. In the no-scalars QEP, it is line #59, where the actuals are off by nearly two orders of magnitude (about 58X in this case) from the estimate.

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                                  | Name                           | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Writes |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|* 59 |     INDEX RANGE SCAN                                       | ODF_CA_SAP_FD_U1               |      6 |    189 |  10975 |00:00:00.01 |      63 |      0 |       |       |          |

Predicate Information (identified by operation id):
---------------------------------------------------
  59 - access("SAPFD"."ODF_PARENT_ID"="INV_INVESTMENTS"."ID")
       filter(("SAPFD"."ODF_PARENT_ID"=5152092 OR "SAPFD"."ODF_PARENT_ID"=5429087 OR "SAPFD"."ODF_PARENT_ID"=5440010 OR "SAPFD"."ODF_PARENT_ID"=5446053 OR 
              "SAPFD"."ODF_PARENT_ID"=5460011 OR "SAPFD"."ODF_PARENT_ID"=5680076))

4. Also, I note that the query plans are telling us that, in the end, your query is returning 0 rows. This makes me suspect the data set you are using for testing. Is this the correct answer? How many rows are actually in the tables you created? Maybe it is just a fluke of the way the QEPs were written. I am guessing there are actually six rows in the result? But please verify this for us.

So please provide the additional info I have requested. And please tell us if you are testing with a valid set of data.

Kevin
Re: Help to tune the query [message #640954 is a reply to message #640888] Fri, 07 August 2015 04:09
grpatwari
Messages: 288
Registered: June 2008
Location: Hyderabad
Senior Member
Thank you very much. Please find the details below.

select LAST_LOAD_TIME, ELAPSED_TIME, MODULE, SQL_TEXT
from v$sql
where upper(sql_text) like '%KEVTEMP_BASLINE%'
order by LAST_LOAD_TIME desc;

Original query Elapsed Time: 7453016
KEVTEMP_BASLINE Elapsed Time: 1062233
Elapsed Time for KEVTEMP_SIMPLIFIED_1: 688628


Yes. It is returning 6 rows when I provide 6 project IDs as input.
Re: Help to tune the query [message #640972 is a reply to message #640954] Fri, 07 August 2015 08:56
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
Excellent thanks. So the elapsed times you are showing are from v$sql which means they need to be divided by 1 million in order to get seconds. That means your query is finishing in 7.5 seconds. Thus we have this:

Original query Elapsed Time: 7.5 seconds  (get data, send it across the network to a client app)
KEVTEMP_BASLINE Elapsed Time: 1.1 seconds (get data, save it in a table)
Elapsed Time for KEVTEMP_SIMPLIFIED_1: 0.7 seconds (get data NO SCALARS, save it in a table)


So...

Breaking it down, we can now see what percentage of the work goes where in this query:

main query: 0.7/7.5 = 9%
scalar sub-queries: (1.1-0.7)/7.5 = 5%
network traffic and client processing, if any: (7.5-1.1)/7.5 = 85%


Given this approximate breakdown of costs we have determined using some simple query decomposition and reconstruction, where should you be spending your tuning time?

It seems very odd to me, though, that it would take 6.4 seconds to push just 6 rows across the network. Is there something you are leaving out of the problem you have shown us?

Kevin

[Updated on: Fri, 07 August 2015 08:57]
