SQL efficiency

Bad plan or something else?

The first step in any tuning activity is determining the scope of the issue: if a query is not performing satisfactorily, then the first question is whether the problem lies in the query’s execution plan or somewhere else (e.g. a global database issue, or even a problem external to the database). Full diagnostics may be time-consuming or simply inaccessible, so it’s desirable to answer this question just by looking at the query and its basic statistics. There is a method of doing that using SQL efficiency, and in this post I’m going to describe it.

The method is not my invention. I read about it in Christian Antognini’s book “Troubleshooting Oracle Performance”, but in fact I was using it in a slightly different form long before.

Quantitative measure of SQL efficiency

The quantity used by Mr. Antognini is logical I/O (LIO) per row returned. E.g. if a query needs 500 logical reads to return 50 rows, then the ratio is 10. The larger the ratio is, the less efficient the SQL statement is.

Based on this quantity, Mr. Antognini considers a query efficient if it involves 5 logical I/Os per row or less. I think the logic behind this is that 5 logical reads is close to the typical cost of an index access, so if a query is showing 5 LIO per row, this suggests that the query is dominated by selective index access paths.

The advantages of this quantity are:

1) it’s very easy to obtain — e.g. both LIO and rows processed can be found in autotrace output
2) it’s independent of short-term factors like buffer cache efficiency
3) unlike response time, it doesn’t contain contributions from other layers (network, application etc.).
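For reference, both numbers are easy to obtain in SQL*Plus (the table and predicate here are placeholders; "consistent gets" plus "db block gets" is the logical I/O):

```sql
-- Show execution statistics without printing the result set
set autotrace traceonly statistics

select * from t1 where x = :b1;

-- In the statistics section that follows, divide
-- "consistent gets" (+ "db block gets") by "rows processed"
-- to get the LIO-per-row ratio.
```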

Obviously, this approach doesn’t work with aggregation, so great care must be taken when using it with queries involving GROUP BY, MAX, MIN etc. (it can only be used for the part up to the aggregation operation in the plan). This definition of SQL efficiency also doesn’t take into account some other factors, like multiblock read efficiency or caching effects, but even so, it still provides a great way to quickly determine whether or not a SQL statement is a good tuning target.

Sources of inefficiency

Most often, inefficiency in SQL queries is a result of the inability to eliminate unneeded rows as early as possible, which in turn is caused by the optimizer’s inability to correctly predict the amount of data satisfying the query predicates (due to inaccurate or non-representative statistics, or optimizer limitations/bugs). But there are also other possibilities.

Some typical scenarios are:

1) Table and index full scans where most of the rows are rejected by filter predicates
2) Index scan followed by most rows rejected by the filter predicate in table access by rowid
3) Wrong join order (accessing many rows that are later rejected by join or filter predicates on matched rows from the joined table) or, less often, wrong join method.
4) Subqueries that weren’t unnested where appropriate
5) Using index range/unique/skip scan to access a table non-selectively.

Examples

Example 1. Inefficient full table scan

In this example, the plan shows that in order to retrieve just 1 row, 164 reads are made, so it’s inefficient. A glance at the predicate section suggests that the reason is a TO_CHAR conversion: even if an index on X exists, it cannot be used.


------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------
|*  1 |  TABLE ACCESS FULL| T1   |      1 |     85 |      1 |00:00:00.01 |     164 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(TO_CHAR("X")=TO_CHAR(:B1))
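A plausible fix, assuming X and the bind variable have the same (numeric) datatype so the conversion is redundant (the index name below is hypothetical):

```sql
-- Dropping the TO_CHAR conversion makes the predicate sargable,
-- so an index on x becomes usable:
select *
  from t1
 where x = :b1;

-- With such an index in place, the plan can switch from
-- TABLE ACCESS FULL to an index access path:
create index i$t1$x on t1 (x);
```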

Example 2. Inefficient index access

In this example, 181 logical reads are made to produce 1 row because of the disparity between a very loose access predicate and a very selective filter predicate. If the index included both columns, then y = 5 would turn from a filter predicate into an access predicate, eliminating table visits for the rows that fail it and improving the query’s efficiency by an order of magnitude.

SQL_ID  g4tpq11t3b7wh, child number 0
-------------------------------------
select *  from t2 where x<=:b1 and y = 5

Plan hash value: 3298260838

---------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------
|*  1 |  TABLE ACCESS BY INDEX ROWID| T2     |      1 |      1 |      1 |00:00:00.01 |     181 |     19 |
|*  2 |   INDEX RANGE SCAN          | I$T2$X |      1 |     90 |  10000 |00:00:00.01 |      22 |     19 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("Y"=5)
   2 - access("X"<=:B1)

Note
-----
   - dynamic sampling used for this statement
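One way to sketch such a fix (the index name is hypothetical) is a composite index with the equality column leading:

```sql
-- With y leading, y = 5 becomes an equality access predicate and
-- x <= :b1 a range condition on the same index, so only the index
-- entries that satisfy both conditions lead to table visits:
create index i$t2$yx on t2 (y, x);
```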

Example 3

Here, approximately 16K reads are needed to produce 1K rows, so the efficiency ratio is about 16. The inefficiency comes from the wrong join order: the bigger row source is used as the driving table. From the query text we can see that the wrong order is dictated by a hint; such cases do occur in real life, when code written for one data distribution is used with another.

SQL_ID  3ya6zyvx8u0kq, child number 0
-------------------------------------
select --+ leading(b)
 * from t3a a, t3b b
 where a.id = b.id and a.filter = 1

Plan hash value: 242417301

--------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
--------------------------------------------------------------------------------------------------------------------------------------------------------------
|*  1 |  HASH JOIN                   |              |      1 |   1000 |   1000 |00:00:09.27 |   15301 |  16155 |   1110 |   103M|  7230K|   97M (1)|   10240 |
|   2 |   TABLE ACCESS FULL          | T3B          |      1 |  85089 |    100K|00:00:00.20 |   14297 |  14069 |      0 |       |       |          |         |
|   3 |   TABLE ACCESS BY INDEX ROWID| T3A          |      1 |   1000 |   1000 |00:00:00.02 |    1004 |    976 |      0 |       |       |          |         |
|*  4 |    INDEX RANGE SCAN          | I$T3A$FILTER |      1 |   1000 |   1000 |00:00:00.01 |       4 |      0 |      0 |       |       |          |         |
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("A"."ID"="B"."ID")
   4 - access("A"."FILTER"=1)

Note
-----
   - dynamic sampling used for this statement
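A sketch of the fix: reverse the hard-coded order (or simply remove the hint and let the optimizer pick the smaller, filtered row source as the driving table):

```sql
-- t3a, restricted by filter = 1 to ~1000 rows, drives the join
-- instead of the ~100K-row t3b:
select --+ leading(a)
       *
  from t3a a, t3b b
 where a.id = b.id
   and a.filter = 1;
```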

Example 4. Failed query transform

In this example, 813K reads are required to retrieve 1K rows, i.e. the SQL efficiency ratio is 813. This is a result of the optimizer’s failure to rewrite the query as an antijoin.

SQL_ID  2ypuk3z2hd7nj, child number 0
-------------------------------------
select * from t4a where id not in (select id from t4b)

Plan hash value: 3192681937

----------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------
|*  1 |  FILTER            |      |      1 |        |   1000 |00:00:09.82 |     813K|     68 |
|   2 |   TABLE ACCESS FULL| T4A  |      1 |  11159 |  10000 |00:00:00.01 |     161 |     39 |
|*  3 |   TABLE ACCESS FULL| T4B  |  10000 |   8564 |   9000 |00:00:09.78 |     813K|     29 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter( IS NULL)
   3 - filter(LNNVL("ID"<>:B1))

Note
-----
   - dynamic sampling used for this statement

An explicit rewrite of the query as an antijoin

select * from t4a a, t4b b where a.id = b.id(+) and b.id is null

would fix the problem and provide a much better efficiency (in this particular example, 0.3, i.e. over 3 orders of magnitude better).
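Another common rewrite (a sketch, equivalent to the NOT IN form only when nulls are handled correctly) uses NOT EXISTS, which the optimizer can unnest into an antijoin on its own:

```sql
-- Unlike NOT IN, NOT EXISTS is not tripped up by nulls in t4b.id,
-- so the optimizer is free to transform it into an antijoin:
select *
  from t4a a
 where not exists (select null from t4b b where b.id = a.id);
```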

2 thoughts on “SQL efficiency”

    1. Hi Sybrand,

      thanks for visiting my blog. Yes, that’s one way to picture it — LIO/row = 5 suggests a plan dominated by index unique/range scans.

      Another interesting point is that one can very easily have LIO/row << 5. But more often than not, it doesn't indicate an extraordinarily efficient SQL plan: rather, it suggests that the schema is probably not normalized properly (everything dumped into a single table).
