Troubleshooting relatively short plans is simple once one gets a little practice, but the mere sight of a plan with several tens or even hundreds of operations can be very intimidating. In this post I describe a few tricks that make this task easier.
First of all, use V$SQL_PLAN/V$SQL_PLAN_STATISTICS instead of (or in addition to) the formatted plan produced by DBMS_XPLAN.
For example, in DBMS_XPLAN output for a long plan it is hard to figure out relationships between operations, but it is very straightforward to figure them out by querying V$SQL_PLAN, e.g.:
select *
from v$sql_plan p
where p.sql_id = :sql_id
and p.child_number = :child_no
and p.parent_id = :op_id;
If the statement is a join of several views, and the optimizer doesn’t merge the views, then it is convenient to check which views are contributing to the performance problem the most, e.g.:
select p.object_name, s.last_cr_buffer_gets
from v$sql_plan p, v$sql_plan_statistics s
where p.sql_id = s.sql_id
and p.child_number = s.child_number
and p.id = s.operation_id
and p.sql_id = :sql_id
and p.child_number = :child_no
and p.operation = 'VIEW'
order by s.last_cr_buffer_gets desc;
In the absence of merging, each view (including inline views) can be tuned separately (of course, if there is reason to believe that a merged view would perform better, then efforts should be focused on making it mergeable rather than on tuning it in isolation).
It is also convenient to use V$SQL_PLAN_STATISTICS to determine which “elementary” operations, such as table, index or materialized view access, take the most time (gets, reads, etc.):
select p.object_name, s.last_cr_buffer_gets
from v$sql_plan p, v$sql_plan_statistics s
where p.sql_id = s.sql_id
and p.child_number = s.child_number
and p.id = s.operation_id
and p.sql_id = :sql_id
and p.child_number = :child_no
and p.operation in ('TABLE ACCESS', 'INDEX', 'MAT_VIEW ACCESS')
order by s.last_cr_buffer_gets desc;
If the statement accesses the same tables or indexes several times, then the approach can be taken one step further: you can group the plan statistics by object name to determine which objects contribute the most:
select p.object_name, sum(s.last_cr_buffer_gets) total_gets, count(*) num_times_used
from v$sql_plan p, v$sql_plan_statistics s
where p.sql_id = s.sql_id
and p.child_number = s.child_number
and p.id = s.operation_id
and p.sql_id = :sql_id
and p.child_number = :child_no
group by p.object_name
order by sum(s.last_cr_buffer_gets) desc;
It can also help to left-join the query above to DBA_SEGMENTS to compare the number of gets from an object with its total number of blocks. If the number of gets exceeds the total size of the object, then most likely it is being accessed inefficiently. In such cases subquery factoring (the WITH clause) can help, or the entire query logic may need to be revisited.
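As a sketch, that left join might look like the following (the inline view sums DBA_SEGMENTS, because a partitioned object has one row per segment; querying DBA_SEGMENTS requires the appropriate privileges):

```sql
select p.object_owner, p.object_name,
       sum(s.last_cr_buffer_gets) total_gets,
       max(g.blocks) total_blocks     -- total size of the object in blocks
from v$sql_plan p,
     v$sql_plan_statistics s,
     (select owner, segment_name, sum(blocks) blocks
      from dba_segments
      group by owner, segment_name) g
where p.sql_id = s.sql_id
and p.child_number = s.child_number
and p.id = s.operation_id
and p.sql_id = :sql_id
and p.child_number = :child_no
and g.owner (+) = p.object_owner
and g.segment_name (+) = p.object_name
group by p.object_owner, p.object_name
order by total_gets desc;
```

Rows where total_gets is a multiple of total_blocks are the first candidates for a closer look.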
For example, once I saw a query that compiled a financial report from many different views. It wasn’t performing well, but the plan (with about 700 operations in it) didn’t show any obvious problem. However, using this grouping trick I found that one small table was accessed about 40 times, and the total number of gets associated with it was 20 times the size of the table. The table contained FX conversion rates, and as it turned out, each of many sections of the report was performing this conversion independently, resulting in a significant performance overhead.
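To illustrate the kind of fix this points to (all table and column names below are made up for the example): instead of letting every section of the report join to the rates table independently, the conversion can be factored out once with WITH, so that it is evaluated a single time:

```sql
with rates as (
  select /*+ materialize */ currency_code, rate
  from fx_rates                     -- hypothetical table of FX conversion rates
  where rate_date = trunc(sysdate)
)
select s.region, sum(s.amount * r.rate) amount_usd
from sales s, rates r               -- one of many report sections, all sharing RATES
where s.currency_code = r.currency_code
group by s.region;
```

The MATERIALIZE hint is optional; it merely makes the intent explicit that the factored subquery should be evaluated once rather than expanded into each reference.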
These simple tricks go well with traditional divide-and-conquer methods, and/or the data access efficiency analysis described in my previous post.
Really nice, but please give one example if possible.
Hi Nilesh,
Point taken, I’ll try to find time to prepare an example.
With 12c around, the days are gone when DBAs used to spend sleepless nights tuning a particular SQL statement.
These days, tuning SQL comes down to two checks:
1) Check whether statistics are up to date.
2) Check whether an appropriate index is present.
I have seen complex SQL problems lead to one of these two solutions.
I think this quote from Jonathan Lewis is probably the best answer to that claim:
“When Oracle 7 was launched I thought “That makes it so easy to do things right, I’d better start looking for a new job.”
When Oracle 8 was launched I thought “There’s no way that people will be producing garbage any more, I’d better start looking for a new job.”
…
When Oracle launched Exadata I thought – “Another new technology, another opportunity for people to do it wrong.”
“
This is what Oracle is doing: adding so many new features and making the technology so robust that most of the things we used to care about while writing SQL no longer require attention.
For example, take the IN and EXISTS clauses. Oracle used to interpret them differently under different circumstances, and care had to be taken over which to use when, but these days Oracle is smart enough to produce the same execution plan for both.
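For instance, on recent versions the optimizer typically unnests both of the following into the same semi-join plan (EMP and DEPT here are the standard SCOTT demo tables):

```sql
select d.*
from dept d
where d.deptno in (select e.deptno from emp e);

select d.*
from dept d
where exists (select null from emp e where e.deptno = d.deptno);
```

Whether the plans actually match still depends on the version and on the subquery (correlation, NULLs in the compared columns, hints such as NO_UNNEST), so it is worth verifying with DBMS_XPLAN rather than assuming it.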