A wrapper sqlplus script for dbms_xplan.display_cursor

When producing SQL execution plans with rowsource statistics, you have a choice whether or not to specify the sql_id. It’s simpler to run dbms_xplan.display_cursor without specifying the sql_id, but it can be tricky: the plan displayed is the one for the last statement executed by the session, and that may not be the statement you expected, especially when running the statement from a GUI like Toad. But even with sqlplus you can still get unexpected results, e.g. a plan for a hidden call to a dbms_output routine instead of the last statement you ran.

Below I describe a method to bypass this difficulty.

Method

Copy your SQL into a file (let’s call it slow_query.sql). Then open a sqlplus session in the same directory, and in it run the wrapping script shown below:


set termout off
set serverout off
set linesize 200
set pagesize 9999
alter session set statistics_level = all;
@slow_query.sql
spool slow_query_plan.lst
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));
spool off

You can save this to a script and use it repeatedly with different queries (all that needs to change is the name of the query file on the @slow_query.sql line); a parameterized variant is sketched below.
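In fact, sqlplus can pass the query name in as an argument, which saves even that one edit. Here is a minimal sketch of such a wrapper; the file name xplan_wrap.sql is made up, and it assumes the query file has a .sql extension and sits in the current directory:

-- hypothetical wrapper, saved as xplan_wrap.sql; run it as:  @xplan_wrap slow_query
set termout off
set serverout off
set linesize 200
set pagesize 9999
alter session set statistics_level = all;
@&1..sql
spool &1._plan.lst
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));
spool off
set termout on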

The script produces a text file with a nicely formatted plan including rowsource statistics, ready for analysis or posting. Note that since the output is suppressed (for the reasons explained in the next section), you won’t see a SQL error if one occurs — if that happens, just temporarily re-enable the output (set termout on, set serverout on) to find out the cause of the error.

Explanation

If you’re not interested in how it works, then you can stop reading — just copy-paste the script above, and you’re all set.

For those who are more curious, I’ll describe the purpose of the individual instructions in the script:

set termout off — suppresses the output of query rows on the screen. This is necessary if the query produces many rows of output: displaying them would slow down the script considerably

set serverout off — I’m not entirely sure of the mechanism, but this line takes care of the dbms_output.disable/dbms_output.get_lines calls that sqlplus wraps around your statements. If you don’t add this line, you may end up with the plan for one of those calls instead of your statement (there’s a short demonstration after this list)

set linesize 200 — this makes sure that lines aren’t wrapped

set pagesize 9999 — prevents the plan column headers from being repeated at every page break in the output

alter session set statistics_level = all — this is the key to obtaining rowsource statistics (i.e. how many rows were produced at each step and how long the step took); without it the execution plan is almost useless for tuning. A statement-level alternative is sketched below
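By the way, if changing the session parameter is not an option, the documented gather_plan_statistics hint collects the same rowsource statistics for a single statement. A minimal sketch (the employees table is just a placeholder):

select /*+ gather_plan_statistics */ count(*)
from employees;                                  -- hypothetical table
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));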

The rest of the script should be fairly self-explanatory.
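Coming back to the serverout point: the pitfall is easy to reproduce. The sketch below shows the behavior as I understand it, not an exact transcript:

set serveroutput on
select count(*) from dual;
select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));
-- with serveroutput on, sqlplus silently runs dbms_output.get_lines after each
-- statement, so display_cursor(null, null, ...) is likely to report that hidden
-- PL/SQL call (or fail to fetch a plan for it) instead of the count(*) query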

SQL efficiency

Bad plan or something else?

The first step in any tuning activity is determining the scope of the issue — so if a query is not performing satisfactorily, the first question is whether the problem is the query’s execution plan or something else (e.g. a global database issue, or even a problem external to the database). Full diagnostics may be time-consuming or even inaccessible, so it’s desirable to resolve this question by just looking at the query and its basic statistics. There is a method of doing that using SQL efficiency, and in this post I’m going to describe it.

The method is not my invention. I read about it in Christian Antognini’s book “Troubleshooting Oracle Performance”, but in fact I had been using it in a slightly different form long before. Continue reading “SQL efficiency”
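The details are in the full post; for a rough flavor of the idea, one simple first-pass indicator is logical I/O per returned row, which can be pulled straight from the shared pool. This sketch is my own illustration, not necessarily the exact metric from the book:

select sql_id, child_number, executions,
       buffer_gets, rows_processed,
       round(buffer_gets / nullif(rows_processed, 0)) gets_per_row
from   v$sql
where  sql_id = '&sql_id'                 -- the statement under investigation
order  by child_number;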

The case of a stuck query (mutexes in 10g)

In my earlier post, I mentioned an interesting case which, among other symptoms, featured high CPU usage. In this post I would like to tell more about that case.

I was investigating poor performance on a 10.2.0.5 Oracle database; I had been asked to look at it after yet another outage (the database wasn’t accepting any connections for over two hours). The AWR report showed massive mutex and latch contention:

Continue reading “The case of a stuck query (mutexes in 10g)”

AWR reports: interpreting CPU usage

Introduction

Let’s start with some basic concepts. AWR reports deal with several kinds of time. The simplest kind is the elapsed time: it’s just the interval of time between the start and end snapshots. Another important quantity is DB time, which is defined as the time spent in user calls during that period. It can be (and for a busy system typically is) greater than the elapsed time. However, the reason for that is not the number of CPUs, as some experts incorrectly state (apparently they confuse it with the CPU time that we’ll discuss below, e.g. here); it’s that this time is summed over all active user processes, whether they are using CPU or waiting for something. Note that it only counts time spent in user calls, i.e. background processes are not included. Continue reading “AWR reports: interpreting CPU usage”
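To make the distinction concrete: a 60-minute snapshot interval with, on average, four sessions active in user calls accumulates about 240 minutes of DB time. A minimal sketch of extracting the per-interval figure from AWR (dba_hist_sys_time_model holds cumulative microseconds, so adjacent snapshots are differenced):

select e.snap_id,
       round((e.value - b.value) / 1e6 / 60, 1) db_time_min
from   dba_hist_sys_time_model b,
       dba_hist_sys_time_model e
where  b.stat_name       = 'DB time'
and    e.stat_name       = 'DB time'
and    e.dbid            = b.dbid
and    e.instance_number = b.instance_number
and    e.snap_id         = b.snap_id + 1  -- assumes consecutive snapshots
order  by e.snap_id;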

A problem with dbms_stats.copy_table_stats

Last week, when I was tuning a SQL statement on a 10.2.0.4 database, I came across a rather interesting issue.

The SQL was performing poorly because of an inefficient plan (wrong join order), and it was relatively straightforward to determine the reason: a table partition appeared empty in the plan, while in fact it contained a few hundred thousand records. The table was range-partitioned on a timestamp column (1 partition = 1 day).

As the development team explained to me, they were relying on a rather sophisticated maintenance procedure that looked over a previous period to choose a suitable partition as the source, and then copied its stats into a newly created partition before that partition was filled with data, to prevent it from being treated as empty. Continue reading “A problem with dbms_stats.copy_table_stats”
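For reference, the copying step itself boils down to a single documented call; this is a generic sketch with made-up schema, table and partition names, not the team’s actual procedure:

begin
  dbms_stats.copy_table_stats(
    ownname     => 'APP_OWNER',    -- hypothetical schema
    tabname     => 'EVENTS',       -- hypothetical partitioned table
    srcpartname => 'P_20100301',   -- source partition chosen by the procedure
    dstpartname => 'P_20100302'    -- freshly created, still empty partition
  );
end;
/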

SQL tuning: real-life example

An example of tuning — nothing special, but it does illustrate several aspects of tuning work, so I thought I’d make a blog post out of it.

Yesterday I was contacted by a development team working with a 10.2.0.4 database. Their complaint was the traditional “the system is slow”. Despite the very unspecific symptoms, they were very specific about the time when it all started. I didn’t have access to AWR on that database, so instead I looked at the top wait events in DBA_HIST_SYSTEM_EVENT for recent snapshots: nothing interesting. The top wait event was db file sequential read (with db file scattered read a very distant second). Since that particular group was working with just one user in the database, I looked at ASH data for this user: same thing, just a bunch of db file sequential reads. Continue reading “SQL tuning: real-life example”
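The DBA_HIST_SYSTEM_EVENT check mentioned above can be done along these lines; a hedged sketch (the counters are cumulative since instance startup, so adjacent snapshots are differenced; add a snap_id range filter for the period of interest):

select e.snap_id,
       e.event_name,
       e.total_waits - b.total_waits                            waits,
       round((e.time_waited_micro - b.time_waited_micro) / 1e6) seconds
from   dba_hist_system_event b,
       dba_hist_system_event e
where  e.dbid            = b.dbid
and    e.instance_number = b.instance_number
and    e.event_id        = b.event_id
and    e.snap_id         = b.snap_id + 1
order  by seconds desc;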

Reading AWR reports: case study

In my previous post I described some sections that are typically useful when interpreting AWR data. However, sometimes the answer comes from an unexpected source. For example, the workload profile section of the report contains key information for understanding what the database looks like, but it seldom gives a direct answer to the problem (except for maybe excessive parsing and excessive commits). But recently I came across a case when this section was enough to identify the root cause of a non-trivial issue: Continue reading “Reading AWR reports: case study”