Global Hints

Occasionally I encounter a situation when I need to affect a part of the plan that corresponds to a view, e.g.:

select *
from
(
   select v.x, v.y
   from v
) q
where q.x = 1

Such situations are resolved using global hints. Oracle offers two ways to specify a global hint: via a query block identifier (system-generated or user-defined) or via view aliases. System-generated query block identifiers can be obtained from dbms_xplan.display with the ALL or ALIAS format option (they have the form SEL$n, where n appears to be the same as the depth, e.g. in our case 1 corresponds to the main query, 2 to the inline view, and 3 to the view V inside that inline view); user-defined ones are assigned with the qb_name hint.
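
For example, here is a minimal sketch of both forms, assuming a hypothetical base table T inside the view V that we want to force to a full scan (the hint itself is just for illustration):

-- via the chain of view aliases: outer alias, then inner alias, then the table
select /*+ full(q.v.t) */ *
from
(
   select v.x, v.y
   from v
) q
where q.x = 1;

-- via a query block name: SEL$3 is the system-generated name of the block
-- belonging to view V, per the numbering described above
select /*+ full(@sel$3 t) */ *
from
(
   select v.x, v.y
   from v
) q
where q.x = 1;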

Continue reading “Global Hints”

Tuning Analytic Functions

In general, tuning analytic functions (and, more generally, all sort operations) is rather difficult. While for most poorly performing queries it’s relatively straightforward to gain some improvement by applying the “eliminate early” principle one way or another, for slow sort operations it’s rarely applicable. Usually the options are limited to rewriting the query without analytics (e.g. using self-joins or correlated subqueries to achieve the same goal) or manually resizing the workarea to reduce or eliminate the use of disk. Recently, however, I had a case where I managed to obtain an excellent performance gain using a different technique, which I would like to share in this post.
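
For reference, the manual workarea option looks roughly like this (the value is illustrative, and the change only affects the current session):

-- switch the session from automatic to manual workarea sizing
alter session set workarea_size_policy = manual;
-- give sorts up to 1 GB of PGA memory (value in bytes)
alter session set sort_area_size = 1073741824;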

The original query was selecting about 100 columns and used the LAG function on one of them in the WHERE clause, but in my test case I’ll both simplify and generalize the situation. Let’s create a table with a sequential id, three filtering columns x, y and z, and 20 sufficiently lengthy columns.
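
A sketch of such a setup could look like the following (the row count, column widths and names are made up for illustration; the exact DDL in the post may differ):

-- sequential id, three filtering columns, and wide padding columns
create table t
as
select rownum              id,
       mod(rownum, 10)     x,
       mod(rownum, 100)    y,
       mod(rownum, 1000)   z,
       rpad('*', 200, '*') pad01,
       rpad('*', 200, '*') pad02,
       rpad('*', 200, '*') pad03,
       -- ... more padding columns, up to pad20
       rpad('*', 200, '*') pad20
from dual
connect by level <= 1e5;

-- a query of the shape described above: LAG on one of the columns,
-- filtered in an outer WHERE clause
select *
from (
  select t.*, lag(x) over (order by id) prev_x
  from t
)
where x != prev_x;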

Continue reading “Tuning Analytic Functions”

“Snapshot too old” as a performance issue in disguise

There exists a fair amount of confusion around one of the most famous Oracle errors — ORA-01555 (“snapshot too old”). The difficulty probably comes from the fact that ORA-01555 is related to the internal workings of undo generation and usage in an Oracle database, which are rather complex. Because of that, when this error appears on their system, inexperienced DBAs and developers often either start looking at UNDO-related parameters, rollback segment sizes, etc., or start looking for any suspicious activity, which they then link to excessive undo “consumption” or “generation”.

Continue reading ““Snapshot too old” as a performance issue in disguise”

Positive feedback loops

Ever since I heard Tanel Poder talk about application servers causing positive feedback loops (performance gets worse -> open more sessions -> make performance even worse) at the 2012 LVOUG conference in Riga, I was curious to see one myself. And recently I got a chance to see a very interesting variety of such behavior.

[Figure positive_feedback_loop_example: database CPU usage and the number of sessions running the same statement, plotted against time]

On the plot above you can see database CPU usage (red line) versus time, plotted together with the number of sessions running the same SQL statement (literally the same, not even different binds — there are no binds in this statement). There is a clear correlation between the two. Apparently, when database performance degrades past a certain point, the application server stops getting its answer fast enough and launches the query again in another session, degrading performance even more, and so on.

This behavior is very dangerous and may easily cause prolonged outages (see the plateau towards the right side of the plot — the database server was maxed out in terms of CPU for over 24 hours!).
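
A quick way to spot this pattern while it is happening is to count active sessions per statement; a simple sketch (on RAC you would query GV$SESSION instead):

-- how many active sessions are currently executing each SQL statement?
select sql_id, count(*) active_sessions
from v$session
where status = 'ACTIVE'
and sql_id is not null
group by sql_id
order by count(*) desc;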

Moral of the story: don’t spawn more sessions in response to poor performance, unless you want poor performance to quickly become terrible!

Tuning very complex SQL

Troubleshooting relatively short plans is simple once one gets a little practice, but the mere sight of a plan that has several tens or even hundreds of operations can be very intimidating. In this post I describe a few tricks that make this task easier.

First of all, use V$SQL_PLAN/V$SQL_PLAN_STATISTICS instead of (or in addition to) the formatted plan produced by DBMS_XPLAN

For example, in DBMS_XPLAN output for a long plan it is hard to see the relationships between operations, but they are straightforward to find by querying V$SQL_PLAN, e.g.:


select *
from v$sql_plan p
where p.sql_id = :sql_id
and p.child_number = :child_no
-- direct children of a given operation; use p.id = :op_id instead to look at a single operation
and p.parent_id = :op_id;
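
Going one step further, a hierarchical query gives you the whole subtree under an operation, indented like DBMS_XPLAN output (a sketch; the bind variable names are arbitrary):

select lpad(' ', 2*level) || p.operation || ' ' || p.options plan_step,
       p.object_name, p.id, p.parent_id
from v$sql_plan p
where p.sql_id = :sql_id
and p.child_number = :child_no
start with p.id = :op_id
connect by prior p.id = p.parent_id
       and p.sql_id = :sql_id
       and p.child_number = :child_no;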

If the statement is a join of several views, and the optimizer doesn’t merge the views, then it is convenient to check which views contribute the most to the performance problem (keep in mind that V$SQL_PLAN_STATISTICS is only populated when rowsource statistics are collected, e.g. with statistics_level set to ALL or the gather_plan_statistics hint), e.g.:

select p.object_name, s.last_cr_buffer_gets
from v$sql_plan p,
     v$sql_plan_statistics s
where p.sql_id = s.sql_id
and p.child_number = s.child_number
and p.id = s.operation_id
and p.sql_id = :sql_id
and p.child_number = :child_no
and p.operation = 'VIEW'
order by s.last_cr_buffer_gets desc;

Continue reading “Tuning very complex SQL”

Efficiency-based SQL tuning

Introduction

In order to tune a query, you need to know two things:
– whether it can be tuned, and if so, by how much
– which part of the query (which operation, or which data object) is the most promising from the tuning point of view.

Currently existing tuning methods don’t really answer these questions. You either focus on the most expensive operation(s) and hope that you can eliminate them (or transform them into something less costly), or you focus on the ones where you see a large discrepancy between actual rowcounts and optimizer predictions (cardinality feedback tuning). Either way, you can’t be sure that you’ve set your priorities right. It could well be the case that the cost of the most expensive operation cannot be reduced by much, but you can win back enough performance elsewhere. With cardinality feedback tuning, you also don’t have any guarantee that improving the accuracy of optimizer estimates will eventually translate into an acceptable level of performance.

Of course, if the plan only contains a few operations, this is not a big issue, and after a few trials you will usually get to the bottom of the problem. However, when dealing with very complex plans, hundreds of operations long, this is not really an option. When dealing with such plans a few months back, I developed for myself a simple tuning method that allows one to evaluate, with high accuracy, the potential tuning benefit of each plan operation, using rowsource statistics and optimizer estimates as input. In this post, I’m sharing this method, as well as a script that implements it.
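
To give an idea of the kind of input such a method works from, here is a sketch of the raw figures it can start with (this is not the actual script from the post, just the estimated versus actual numbers per plan operation from rowsource statistics):

-- per-operation estimates vs. actuals, plus actual work done
select id, operation, options, object_name,
       cardinality         estimated_rows,
       last_output_rows    actual_rows,
       last_cr_buffer_gets cr_gets,
       last_elapsed_time   elapsed_us
from v$sql_plan_statistics_all
where sql_id = :sql_id
and child_number = :child_no
order by id;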

Continue reading “Efficiency-based SQL tuning”

Accessing data dictionary views from PL/SQL

Like most database developers who have to work with database performance analysis and tuning, I have my own collection of favorite sqlplus scripts for various occasions. Over the years, I have turned many of them into PL/SQL code, which allows greater flexibility. Imagine, for example, trying to call a sqlplus script from another sqlplus script with both in and out parameters, or simulating autonomous transactions in sqlplus — obviously, such things are much easier in PL/SQL. Of course, PL/SQL has certain drawbacks of its own, one of them being the rather peculiar way Oracle handles user security. More specifically, you cannot use privileges received via roles inside stored PL/SQL code. This can be really annoying, especially for performance benchmarking code, which heavily relies on access to data dictionary views granted via SELECT_CATALOG_ROLE or similar, e.g.:

Continue reading “Accessing data dictionary views from PL/SQL”
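
For illustration, the kind of failure this leads to looks roughly like the following (a sketch with hypothetical names, not necessarily the example from the full post):

-- with access to V$SESSION granted only through SELECT_CATALOG_ROLE,
-- this query works fine in a plain SQL*Plus session, but the procedure
-- below fails to compile (typically with ORA-00942: table or view does
-- not exist), because role privileges are disabled inside stored PL/SQL
create or replace procedure show_active_sessions
as
begin
  for r in (select sid, serial#, sql_id
            from v$session
            where status = 'ACTIVE')
  loop
    dbms_output.put_line(r.sid || ',' || r.serial# || ' -> ' || r.sql_id);
  end loop;
end;
/

-- the usual workaround is a direct grant on the underlying object:
-- grant select on sys.v_$session to some_user;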