Hints

Oracle’s cost-based optimizer (CBO) is great, but sometimes it makes wrong choices even when correct inputs are fed to it. In such cases you need a tool to override the CBO’s choices, and one of the most popular tools is optimizer hints. The main reason they’re so popular is that they allow “quick-and-dirty” fixes for performance issues (provided the query text can be altered). Other approaches may be more reliable, but they generally require more work, and who wants to do work that can be avoided? Unfortunately, there’s a well-known downside to hints: it’s very easy to run into problems if you only fix a part of the plan.

When this happens, hints can lead to terrible execution plans. For a long time, I’ve been looking for a good example to illustrate this problem, and finally this week I encountered a case which appears to be suitable for this purpose.

Continue reading “Hints”

Workarounds

It’s been forever since I last shared any of my performance troubleshooting experiences from work. This week I got a case that I think is worth publishing, so I decided to write it up in my blog. So, here we go…

A few days ago, I received a complaint about unstable performance of one of the frequently running SQL reports on an 11gR2 database. Most of the time it completed within a couple of minutes; however, on certain occasions it took much longer than that, and once it even took over 20 minutes.

I have a special stored report in SQL Developer for conveniently displaying key statistics from DBA_HIST_SQLSTAT, which is very helpful as a first step when analyzing unstable SQL performance:
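
The lines below are a minimal sketch of the kind of query such a report can be based on, not the actual report (that one is in the full post). It assumes the standard AWR views DBA_HIST_SQLSTAT and DBA_HIST_SNAPSHOT and a :sql_id bind for the statement of interest:

-- per-snapshot execution statistics for one SQL statement (a sketch; :sql_id is supplied by you)
select sn.begin_interval_time,
       st.plan_hash_value,
       st.executions_delta                                                      execs,
       round(st.elapsed_time_delta / 1e6 / nullif(st.executions_delta, 0), 2)   elapsed_sec_per_exec,
       round(st.buffer_gets_delta / nullif(st.executions_delta, 0))             gets_per_exec,
       round(st.disk_reads_delta  / nullif(st.executions_delta, 0))             reads_per_exec
from   dba_hist_sqlstat  st
join   dba_hist_snapshot sn
on     sn.snap_id         = st.snap_id
and    sn.dbid            = st.dbid
and    sn.instance_number = st.instance_number
where  st.sql_id = :sql_id
order  by sn.begin_interval_time;
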
Continue reading “Workarounds”

Plotting SLOB results in high resolution

Introduction

If you work with I/O benchmarking of Oracle databases, you are almost certainly familiar with SLOB. SLOB is more than just an I/O benchmark — it’s become a de-facto industry standard. It’s simple, powerful and efficient, and it captures a plethora of metrics, both from the OS (output of iostat, mpstat etc.) and the database itself (in the form of an AWR report).

One thing that is missing, though, is visualization. That’s fairly easy to fix with an external plotting tool (like gnuplot or R), but what data would you plot? AWR only gives you average event times and histograms with ridiculously poor resolution, and if you want to see a high-resolution picture of your I/O (and you do; I’ll discuss the importance of that later on), that’s not enough.

Continue reading “Plotting SLOB results in high resolution”

Nested loop internals. Part 3: comparative efficiency

In the previous parts (here and here) of the series we looked at some aspects of nested loop I/O optimizations, but we left out the most important question from a practical point of view: how do these methods perform time-wise? Which of them are faster, and how much of a saving do they offer compared to the non-optimized plan? We will turn to these questions now.

Continue reading “Nested loop internals. Part 3: comparative efficiency”

Nested loop internals. Part 2: decision making

In the previous part of this mini-series we looked at differences in multiblock read behavior between the different nested loop optimization mechanisms, depending on the degree of ordering of the data. In this post I’ll continue to explore the subject, but this time we’ll focus on the decision-making process: what factors (other than the obvious ones, like optimizer hints and/or parameters) affect the specific choice of mechanism?

Continue reading “Nested loop internals. Part 2: decision making”

Nested loop internals

A nested loop join appears to be the simplest thing there could be: you go through one table, and for each row found you probe the second table to see if there are any matching rows. But thanks to a number of optimizations introduced in recent Oracle releases, it has become much more complex than that. Randolf Geist has written a great series of posts about this join mechanism (part 1, part 2 and part 3) where he explores in great detail how the numerous nested loop optimizations interact with various logical I/O optimizations for unique and non-unique indexes. Unfortunately, it doesn’t cover the physical I/O aspects, and to me that seems like the most interesting part: after all, physical I/O was the primary motivation behind introducing all those additional nested loop join mechanisms on top of the basic classical nested loop. So I conducted a study of my own, and I’m presenting my results in the mini-series that I’m opening with this post.

Continue reading “Nested loop internals”

Peeking table block contents

Sometimes you want to know what’s inside a certain block. Of course, the most straightforward way is to dump the block contents using ALTER SYSTEM DUMP DATAFILE and analyze the dump. However, “straightforward” doesn’t mean “simple”. A block dump represents the block’s contents in a binary format which is hard to read. Sure, there are various utilities (like utl_raw) that can help you convert everything to a human-readable format, but it’s going to be a tedious and time-consuming job, especially if you need more than just a few values from just a couple of blocks. Another problem is that you may not have access to the server OS shell (e.g. developers rarely have access to it even on non-production systems, except maybe on private sandboxes).
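
For reference, a single block can be dumped with a command along these lines (the file and block numbers are made up for illustration); the dump lands in the session’s trace file:

-- dump one data block to a trace file (4 and 1234 are hypothetical file# and block# values)
alter system dump datafile 4 block 1234;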

There’s a better way, at least if the block you’re interested in is a standard table data block and all you want to know is what kind of data it contains (and not internal information like locks, flags, free space etc.). The idea is that rather than going to the block itself, you can use the rowid of each row in the table to calculate the relative file number and block number it resides in. If you know the table of interest and do this calculation for all of its rows, then you simply filter the result set down to the particular block you’re interested in. It’s really much simpler than it sounds; just bear with me a little and you’ll see.

The first step is identifying the segment name. Normally you already know it from the very beginning. For example, if you found the address of the block you want to look up in ASH or in a trace file, e.g. as the p1 and p2 parameters of a “db file sequential read” event, then you can simply take current_obj# and look up the object name in DBA_OBJECTS, using object_id (and/or data_object_id) as the key.
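
For example, a minimal sketch, assuming the current_obj# taken from ASH came out as the hypothetical value 12345:

select owner, object_name, subobject_name, object_type
from   dba_objects
where  object_id      = 12345
or     data_object_id = 12345;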

Once you have the table name, you can display the block contents using the rowid_relative_fno and rowid_block_number functions of the dbms_rowid package:

-- compute the relative file number and block number from each row's rowid,
-- then keep only the rows that live in the block of interest
select *
from
(
  select id1,
         id2,
         dbms_rowid.rowid_relative_fno(rowid)  fno,
         dbms_rowid.rowid_block_number(rowid)  block#
  from   &mytable
)
where fno    = :fno
and   block# = :blockno;

where &mytable obviously should be replaced with the name of the table identified in the previous step.

It can be convenient to aggregate the query above so that each block’s contents are represented by a single row. For example, imagine that you have a table with a composite primary key (id1, id2), so that knowing these two values is enough to identify a row. Then you can use listagg to represent the block contents compactly:


-- one row per block, listing the (id1, id2) key values of all rows it contains
select listagg('(' || id1 || ',' || id2 || ')')
         within group (order by id1, id2) block_contents,
       fno,
       block#
from
(
  select id1,
         id2,
         dbms_rowid.rowid_relative_fno(rowid)  fno,
         dbms_rowid.rowid_block_number(rowid)  block#
  from   &mytable
)
group by fno, block#;

This is particularly convenient when working with a relatively large number of blocks that can be extracted from another SQL statement, e.g. a query on an external table built on top of a trace or a dump file as described in one of my recent posts (e.g. here).

As a final touch, you can also add a query block that identifies the table’s segment header, to make sure you don’t miss it:

-- the segment header block contains no rows, so flag it explicitly
select 'segment header' block_contents,
       header_file      fno,
       header_block     block#
from   dba_segments
where  segment_name = '&mytable'

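To tie everything together, the segment header lookup can be combined with the aggregated query via UNION ALL. The sketch below reuses the same hypothetical &mytable and its (id1, id2) key, and adds a comma delimiter to listagg purely for readability:

-- note: dba_segments stores names in upper case, hence the upper() around the substitution variable
select 'segment header' block_contents,
       header_file      fno,
       header_block     block#
from   dba_segments
where  segment_name = upper('&mytable')
union all
select listagg('(' || id1 || ',' || id2 || ')', ',')
         within group (order by id1, id2)     block_contents,
       dbms_rowid.rowid_relative_fno(rowid)   fno,
       dbms_rowid.rowid_block_number(rowid)   block#
from   &mytable
group by dbms_rowid.rowid_relative_fno(rowid),
         dbms_rowid.rowid_block_number(rowid)
order by 2, 3;
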
You can’t do the same trick with indexes, as there’s no rowid for an index entry, but there are other ways to peek at the contents of branch and leaf index blocks. If I get a chance, I’ll show one or two such methods in a separate blog post.

My 2015

The year 2015 was a very good one for me, even though not exactly in the way I expected it to be. I didn’t get to blog as much as I wanted to, and I didn’t get as much interesting performance troubleshooting to do as in the years before. But there were lots of other interesting experiences, e.g. designing, running and analyzing all sorts of sophisticated performance tests for a candidate hardware platform.

Of course, the most important event of the year was moving to the UK, and the new challenges and opportunities this move presented. It was a very positive experience overall (there are a few aspects of life in the UK that I still need to adapt to, but that’s perfectly normal).

I also spoke at the Harmony 2015 conference in Tallinn (the LGWR material from my previous year’s research), and that was also a new and important experience for me. I’m hoping to do more of this in the future. I attended a very interesting UKOUG Tech’15 conference in Birmingham (as a delegate, not a speaker), and had a few very interesting conversations there (in particular, I’m very grateful to Tanel Poder for finding some time for me; that conversation was extremely useful).

I am really looking forward to 2016. No one knows what it will bring, but I have good reasons to expect great things from it. For example, there’s a good chance that I’ll get involved with some interesting topics, including Exadata, building integrated solutions using both relational and non-relational technologies, performance-related internals digging, and others (and of course I’ll cover the most interesting stuff in this blog).

I hope that 2016 will be an eventful and productive year for you as well. Happy holidays everyone!