SQL Performance, Part IV. Heap tables.

26 May

Demo script

SQL Performance, Part III. Data storage strategies

20 May

Imagine that you’re on a desert island where some pirates hid their treasure. You don’t know exactly where it is hidden, but you want to find it. How should you approach this? Should you dig randomly, or follow some sort of system, and if so, which one?

I think most readers have smelled a trap right away (it’s not a very sophisticated trap, anyway): of course it doesn’t matter how you do it. If it takes you X minutes to examine 1 square foot of the surface, and the total surface is Y square feet, it will take you anywhere from 0 to XY minutes (so, XY/2 on average) to find the treasure, no matter which system you follow (or whether you follow any system at all). When there is no information whatsoever about the object(s) sought, all search strategies are equally (in)efficient.
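The same point can be made with a toy demo (a minimal sketch; the island of N = 10,000 one-square-foot cells is an arbitrary assumption). The expected number of cells you examine equals the average position of the treasure’s cell in your visiting order, and that average is (N+1)/2 for any order whatsoever:

-- fixed, systematic visiting order: average, over all equally likely
-- treasure locations, of the number of cells examined
select avg(rownum) as expected_cells_examined
from dual
connect by level <= 10000;

-- a randomly shuffled visiting order gives exactly the same average
select avg(visit_order) as expected_cells_examined
from (select row_number() over (order by dbms_random.value) as visit_order
      from dual connect by level <= 10000);

Both queries return 5000.5, i.e. about half the island, no matter the strategy.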

Continue reading

SQL Performance, Part II — Disk I/O: metrics and scales

7 May

While query performance depends on a large number of things, the overall scale of query performance for a given database is generally set by disk I/O speed. The most common type of storage device used in databases is still the hard disk drive (HDD), so let’s consider how it works.

Continue reading

SQL Performance, Part I. Data access efficiency.

6 May

Database performance optimization deals with the problem of reducing the time (and/or other resources) required to complete database queries by resolving various inefficiencies. There are many kinds of performance issues, and they can be categorized in many different ways. One important way is by scope: statement-level or instance-level. Instance-level performance tuning (which we are not covering in this mini-series) deals with such problems as excessive parsing, cache efficiency, concurrency, etc. Statement-level performance tuning deals mostly with inefficient data access.

Continue reading

Giving back

6 May

Occasionally, I receive requests to explain stuff I talk about in my blog posts on a level accessible to beginners. Unfortunately, it’s not always possible to do so. You start to explain one little thing, and then you realize that in order to understand that thing, one would need to understand a dozen other little things, and before you know it, instead of writing a short blog post you find yourself writing something the size of Encyclopedia Britannica. So I decided to try a different direction. I’ve accumulated some beginner-to-intermediate-level material that hopefully should make certain topics in database performance optimization clearer, and I will be posting it in the form of short “lessons”. I don’t know how many such lessons there will be in total; I guess this depends a lot on the feedback I’ll be receiving. My plan is to start with statement-level optimization topics, and then, if this goes well, to move on to instance-level stuff. I will try to:

1) provide a high-level overview without getting caught up in details

2) present a holistic view of Oracle database performance rather than an unsystematic assembly of performance-related topics

3) “derive” facts from higher principles rather than simply state them

4) use analogies to make basic performance concepts more intuitive

5) be as quantitative and specific as possible

6) make heavy use of demos.

Hope you’ll find it useful.



Part I, Data access efficiency

Part II, Disk I/O: metrics and scales

Part III, Data storage strategies

Observer effect

2 Apr

In physics, one important limitation of any experiment is the fact that the act of observation inevitably interferes with the observed process (the “observer effect”). The same is true of databases. It is a well-known fact, for example, that turning on tracing can significantly slow down the process for which it’s enabled. But there exist even nastier forms of this effect: for example, when you try to trace a SQL statement that uses the nested loop join batching mechanism, the batching apparently gets disabled! This is extremely confusing, because plan statistics in the trace file indicate that the batching takes place (i.e. the plan shows the “double nested loop” shape characteristic of NLJ batching), and if you look at the plan using dbms_xplan.display_cursor with the “outline” option, you’ll see NLJ_BATCHING among other hints.
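For reference, one way to check this is a minimal sketch like the following, which inspects the last statement executed in the current session:

select *
from table(dbms_xplan.display_cursor(null, null, 'TYPICAL +OUTLINE'));

-- look for NLJ_BATCHING(...) in the "Outline Data" section of the output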

This effect was discovered by Randolf Geist and published in part 3 of his wonderful series on Logical I/O evolution (see here, here and here). The series is very thorough and very detailed, and because of that level of detail it’s rather lengthy, which is why not everyone gets to that particular paragraph, so let me quote it here:

…be aware of an odd behaviour I’ve experienced during my tests: If any kind of row source statistics sampling was enabled by either using STATISTICS_LEVEL = ALL, the GATHER_PLAN_STATISTICS hint or even enabling (extended) SQL trace, the optimized, batched form of physical I/O could not be reproduced. You could tell this from the session statistics that start with “Batched IO%” – these all stayed at 0. Only when disabling all these things the effects were visible and the corresponding statistics were non-zero. I don’t know why this is the case, but it is an important detail when testing this feature.

I think it is important for the community to be aware of this behavior: not knowing about it made me waste several days of work, and I know at least one other person who ran into the same kind of trouble. But aside from this particular oddity, it is also a reminder of the importance of the “observer effect” in general. Any findings obtained with intrusive methods (like extended SQL trace, statistics_level = ALL, etc.) need to be cross-checked with less intrusive methods (e.g. dumps from V$MYSTAT, V$SESSION_EVENT, V$SESS_TIME_MODEL, etc., or real-time SQL monitoring reports).

The obvious downside of using V$ views is the necessity of taking a baseline, but there is a simple workaround if taking a baseline is inconvenient: simply establish a new session, which resets all session-level counters and statistics to zero.
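For the NLJ batching case above, such a cross-check could look like this (a minimal sketch; the statistic names are taken from Randolf’s quote — run it before and after the statement under test and compare the values, or run it just once from a fresh session):

select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  sn.statistic# = ms.statistic#
and    sn.name like 'Batched IO%'
order  by sn.name;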

Optimizing working environment. Part II, command line

10 Mar

(the first part can be found here)

Now let’s consider the command line. There are a lot of things that can be done to improve the user experience here as well.

1) Maintain a local script repository on the machine(s) you use the most. To make sqlplus search there, define the SQLPATH variable (via Control Panel => System => Advanced => Environment variables on Windows, or shell initialization scripts such as .profile or .bash_profile on Unix).
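For example (a minimal sketch, assuming a Unix shell and a hypothetical ~/sql script directory):

# in ~/.profile or ~/.bash_profile
export SQLPATH=$HOME/sql

After that, any script in $HOME/sql can be run from sqlplus by its bare name (sess_stats here is a hypothetical script):

SQL> @sess_stats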

2) To facilitate synchronization of scripts among different environments, use web repositories. One option is a plain http (or ftp) repository, e.g. using any (free or paid) hosting service, or your own web server if you have one. The other option is a code repository tool like git that uses the secure (https) protocol. Plain http has the advantage that you can run scripts directly from the online repository (see here for more detail). But if your internet connection requires a proxy, this may prove difficult (setting the HTTP_PROXY variable might help, but I myself had no luck with it).
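With a plain-http repository, running a script directly from it is as simple as (the URL is, of course, hypothetical):

SQL> @http://myserver.example.com/scripts/sess_stats.sql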

Continue reading

Optimizing working environment. Part I, GUI

6 Mar

Working in database performance troubleshooting means dealing with challenging and exciting problems. But it also means doing certain things over and over. And over. Like typing the same query or clicking the same button a zillion times a day. And when you’re doing something very frequently, anything that can simplify or automate such tasks becomes very important. So I decided to share, in a short mini-series, some of the tricks I use to optimize my working experience. This post is on GUI, and Part II will be on the command line.

Continue reading

HotSOS symposium 2015

5 Mar

This week I participated (via the online option) in the HotSOS conference — the only conference focused entirely on Oracle database performance and internals. Like a year ago, it was a great conference, and I really enjoyed it. Predictably, the number one topic was the in-memory option: probably every third talk was about it. I learned a lot of stuff I didn’t know before (and I strongly suspect that this stuff wouldn’t be easy to find anywhere else).

Continue reading

Analyzing segment content by block type

26 Feb

Occasionally one might want to know what a segment is made of in terms of block types. For example, you notice that the number of blocks in an index segment is somewhat larger than the number of branch and leaf blocks, and wonder what kind of blocks accounts for the difference. The only way to find out is by dumping index blocks (e.g. as described in Richard Foote’s blog here). Dumping blocks is easy, but analyzing them — not so much. Sure, there exists a plethora of tools that allow you to parse text on the OS side (awk, perl, sed and whatnot), but this leads to the usual problems: OS access, scripting skills, the fact that certain platforms may not have the scripting tool you’re most comfortable with, and even more importantly, scripts cannot do the cool stuff that Oracle can (like joining data to other data). Fortunately, those difficulties can be circumvented by using regexp + external tables, as I already showed in my blog here. This time, I’d like to show how this technique can be adjusted for index block dumps.
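To give a flavor of the approach, here is a minimal sketch (the directory path, the trace file name and the access parameters are assumptions that will need adjusting to your environment):

-- expose the trace file containing the block dumps as an external table
create or replace directory dump_dir as '/u01/app/oracle/diag/rdbms/orcl/orcl/trace';

create table blk_dump_ext (line varchar2(4000))
organization external (
  type oracle_loader
  default directory dump_dir
  access parameters (
    records delimited by newline
    nobadfile nologfile
    fields (line position(1:4000) char(4000))
  )
  location ('orcl_ora_12345.trc')
);

-- each dumped block is preceded by a header line that includes its type
-- (e.g. "frmt: 0x02 ... type: 0x06=trans data"); count blocks by type
select regexp_substr(line, 'type: 0x[0-9a-fA-F]+.*') as block_type,
       count(*) as blocks
from   blk_dump_ext
where  regexp_like(line, 'type: 0x')
group  by regexp_substr(line, 'type: 0x[0-9a-fA-F]+.*')
order  by blocks desc;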

Continue reading

