Troubleshooting relatively short plans is simple once one gets a little practice, but the mere sight of a plan with several dozen or even hundreds of operations can be very intimidating. In this post I describe a few tricks that make the task easier.
First of all, use V$SQL_PLAN/V$SQL_PLAN_STATISTICS instead of (or in addition to) the formatted plan produced by DBMS_XPLAN.
For example, in DBMS_XPLAN output for a long plan it is hard to see the relationships between operations, but they are straightforward to obtain by querying V$SQL_PLAN, e.g. to list the direct children of a given operation:
select * from v$sql_plan p where p.sql_id = :sql_id and p.child_number = :child_no and p.parent_id = :op_id;
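When you need the whole operation tree rather than a single parent–child pair, the same PARENT_ID/ID relationship can be walked hierarchically. A minimal sketch (the bind variable names are placeholders):

```sql
-- Walk the plan tree top-down; each row's PARENT_ID points at its parent's ID.
select lpad(' ', 2 * level) || p.operation || ' ' || p.options as op,
       p.object_name, p.id, p.parent_id
  from v$sql_plan p
 where p.sql_id = :sql_id and p.child_number = :child_no
 start with p.id = 0
connect by prior p.id = p.parent_id
       and p.sql_id = :sql_id and p.child_number = :child_no
 order siblings by p.position;
```

Repeating the sql_id/child_number predicates in the CONNECT BY condition keeps the hierarchy from straying into other cursors' plan rows.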
If the statement joins several views and the optimizer doesn’t merge them, it is convenient to check which views contribute most to the performance problem, e.g.:
select object_name, s.last_cr_buffer_gets from v$sql_plan p, v$sql_plan_statistics s where p.sql_id = s.sql_id and p.child_number = s.child_number and P.ID = s.operation_id and p.sql_id = :sql_id and p.child_number = :child_no and p.operation = 'VIEW' order by S.LAST_CR_BUFFER_GETS desc;
In the absence of merging, each view (including inline views) can be tuned separately (of course, if there is reason to believe that a merged view would perform better, then efforts should be focused on making it mergeable rather than on tuning it in isolation).
It is also convenient to use V$SQL_PLAN_STATISTICS to determine which “elementary” operations, such as table, index, or materialized view accesses, account for most of the work (gets, reads, etc.):
select object_name, s.last_cr_buffer_gets from v$sql_plan p, v$sql_plan_statistics s where p.sql_id = s.sql_id and p.child_number = s.child_number and P.ID = s.operation_id and p.sql_id = :sql_id and p.child_number = :child_no and p.operation in ('TABLE ACCESS', 'INDEX', 'MAT_VIEW ACCESS') order by S.LAST_CR_BUFFER_GETS desc;
If the statement accesses the same tables or indexes several times, the approach can be taken one step further: group the plan statistics by object name to determine which objects contribute the most:
select object_name, sum(s.last_cr_buffer_gets) total_gets, count(*) num_times_used from v$sql_plan p, v$sql_plan_statistics s where p.sql_id = s.sql_id and p.child_number = s.child_number and P.ID = s.operation_id and p.sql_id = :sql_id and p.child_number = :child_no group by object_name order by sum(S.LAST_CR_BUFFER_GETS) desc;
It can also help to outer-join the query above to DBA_SEGMENTS to compare the number of gets on an object with its total number of blocks. If the gets exceed the total size of the object, it is most likely being accessed inefficiently. In such cases subquery factoring (the WITH clause) can help, or the entire query logic may need to be reconsidered.
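A sketch of that comparison, using Oracle’s (+) outer-join syntax so that objects without a matching segment (e.g. fixed tables) are still listed; partitioned objects would need their partition segments summed rather than taken as a single row:

```sql
-- Compare buffer gets per object against the object's size in blocks;
-- gets far exceeding blocks suggests repeated, inefficient access.
select p.object_name,
       sum(s.last_cr_buffer_gets) total_gets,
       max(seg.blocks) total_blocks
from v$sql_plan p, v$sql_plan_statistics s, dba_segments seg
where p.sql_id = s.sql_id and p.child_number = s.child_number and p.id = s.operation_id
and seg.owner (+) = p.object_owner and seg.segment_name (+) = p.object_name
and p.sql_id = :sql_id and p.child_number = :child_no
group by p.object_name
order by total_gets desc;
```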
For example, once I saw a query that compiled a financial report from many different views. It wasn’t performing well, but the plan (with about 700 operations in it) didn’t show any obvious problem. However, using this grouping trick I found that one small table was accessed about 40 times, and the total number of gets associated with it was 20 times the size of the table. The table contained FX conversion rates, and as it turned out, each of many sections of the report was performing this conversion independently, resulting in a significant performance overhead.
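A hypothetical illustration of the fix for such a case (the table and column names here are invented, not taken from the actual report): factoring the rate lookup out with WITH so the conversion data is read once and shared by all sections:

```sql
-- Hypothetical example: fetch FX rates once instead of once per report section.
with fx as (
  select /*+ materialize */ currency_code, rate
    from fx_rates
   where rate_date = trunc(sysdate)
)
select 'sales' as section, sum(s.amount * fx.rate) as total_converted
  from sales s, fx
 where fx.currency_code = s.currency_code
union all
select 'costs', sum(c.amount * fx.rate)
  from costs c, fx
 where fx.currency_code = c.currency_code;
```

The materialize hint (undocumented, but widely used) nudges the optimizer to evaluate the factored subquery once into a temporary result rather than merging it back into each branch.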
These simple tricks go well with traditional divide-and-conquer methods, and/or the data access efficiency analysis described in my previous post.