SQL performance, Part III. Data storage strategies

Imagine that you’re on a desert island where some pirates hid their treasure. You don’t know exactly where it is hidden, but you want to find it. How should you approach this – should you just dig randomly, or should you follow some sort of system – and if so, which one?

I think most readers have smelled a trap right away (it’s not a very sophisticated trap anyway) – of course it doesn’t matter how you do it. If it takes you X minutes to examine 1 square foot of the surface, and the total surface is Y square feet, it will take you anywhere from 0 to X·Y minutes (so, X·Y/2 on average) to find the treasure, no matter which system you’re following (or whether you’re using any system at all). When there is no information whatsoever about the object(s) sought, all search strategies are equally (in)efficient.

However, if you know something about the way pirates chose their stash, then the strategy may matter after all. Suppose that you don’t know the exact locations, but you know that pirates always allow at least a certain minimum distance between their stashes. You can use this information in your search: e.g. once you’ve found one treasure, you can eliminate a certain area from further search.
The popular game “battleship” is built on this principle: both players draw a 10×10 grid on a piece of paper. Then each draws a certain number of “battleships” of various sizes, and they start “shooting” at each other’s ships by calling out grid coordinates. If a called cell is occupied by one of a player’s ships, he must admit that his ship was hit.

Because of various limitations (e.g. two ships cannot occupy the same spot, two ships must have at least one spot between them, ships cannot be bent into arbitrary shapes, but rather occupy consecutive squares, etc.), the game is not completely random. Players use those restrictions to eliminate certain areas, increasing their chances of winning.

The reason I’m talking about battleships and hidden treasures (aside from the fact that this is a far more exciting topic than the database stuff I usually get to write about) is that I want to illustrate the following principles:
– smart search strategies only make sense in the presence of some ordering
– smart search strategies increase search efficiency by eliminating areas from the search
– smart search strategies make use of information about the mutual arrangement of objects.

Searching for data follows the same basic principles as searching for physical objects. For an efficient search to be possible, the data first needs to be laid out accordingly, using strategies such as

1) keeping data ordered by a search key (IOT)
2) maintaining an ordered (by a search key) list that contains physical locations of data (index)
3) calculating physical location for data directly from the search key (hash tables)
4) physical separation of data using search criteria (partitioning).

The strategies in this list aren’t mutually exclusive; to some extent they can be combined with each other – e.g. indexes can be partitioned, an IOT can have secondary indexes defined on it, etc. We could also add item number 0 to that list: “no strategy”, i.e. just leave everything in a big unordered pile. This “no strategy” strategy corresponds to ordinary, or heap, tables that don’t have any indexes defined on them. Such tables should normally be reserved for temporary usage (regardless of whether they are formally defined as a GTT or not) – otherwise a table should at least have a primary key, which means a PK index (strategy 2 in our list).
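To make the contrast concrete, here is a minimal sketch (table and column names are hypothetical, Oracle syntax assumed):

-- A “no strategy” heap table: no ordering, no index at all.
CREATE TABLE staging_rows (payload VARCHAR2(4000));

-- The same kind of heap table, but with a primary key; the PK constraint
-- implicitly creates an index (strategy 2 in the list above).
CREATE TABLE customers (
  customer_id NUMBER,
  name        VARCHAR2(100),
  CONSTRAINT customers_pk PRIMARY KEY (customer_id)
);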

Ordering makes searching easier, but it comes at a price, because the order needs to be maintained. So all manipulations become more expensive.

Let’s consider those approaches in more detail.

Ordering (IOT)

It’s always easier to find something in an ordered set than in a complete mess. “Order” means that the locations of objects are not random; they follow some convention. Therefore, by knowing the location of one object, we can deduce something about other objects. Like in the game of battleship mentioned above: when an enemy ship is sunk, the cells around it can be crossed out, because the rules of the game don’t allow ships to touch each other. Thus a considerable area is eliminated from the game, and less guesswork is required to find the remaining ships.

When things are arranged by some value (e.g. shoes sorted by shoe size), this allows for a similar elimination. If we’re looking for a size 9 shoe and instead find a size 10, we know that what we need is to the left (assuming the ordering runs from left to right in ascending order). The entire area to the right is eliminated. This “divide and conquer” approach is very efficient: if in one step we can cut the area to be searched in half, then in 10 steps we reduce it 1,024 times. So in a dictionary of 1,000 pages we can find the word we need in just 10 steps – roughly a 100x improvement compared to a “full scan” of the dictionary!

When looking for a database block in an IOT, the same technique is used, only instead of guessing its location several times, several layers of branch blocks are used to guide the search. Also, physically, IOT blocks aren’t always ordered (but the data inside an IOT block always is), although leaf blocks have links to the previous and next blocks, which allows them to be viewed as logically ordered.
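For illustration, here is a minimal sketch of what an IOT definition might look like (the table and columns are hypothetical):

-- An index-organized table keyed on ORDER_ID: the rows are stored in the
-- primary key B-tree itself, so there is no separate heap segment to visit
-- after the index lookup.
CREATE TABLE orders_iot (
  order_id    NUMBER NOT NULL,
  customer_id NUMBER,
  order_date  DATE,
  CONSTRAINT orders_iot_pk PRIMARY KEY (order_id)
)
ORGANIZATION INDEX;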

As was mentioned above, the price for the convenience in search is increased cost of manipulation (DML).

Ordered list (index)

Indexes are a variation of the approach described above. Instead of rearranging the data itself, we leave it as it is and create an ordered (by a search key) list that describes where everything is in the table (i.e. contains physical rowids). This approach has both advantages and disadvantages compared to the approach above.

The biggest advantage is flexibility: we can have as many ways to search the data as we want by defining additional indexes. The biggest disadvantage is that instead of having just one object (as in the case of an IOT) we have two: the index and the table. When we read data using an index, we generally have to locate the rowid in the index first, and then go to the table and read that row. That second operation can be very expensive when looking up large amounts of data.
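A minimal sketch of that two-step lookup, with hypothetical table and column names:

-- A plain heap table with a secondary index on CUSTOMER_ID.
CREATE TABLE orders_heap (
  order_id    NUMBER NOT NULL,
  customer_id NUMBER,
  order_date  DATE
);

CREATE INDEX orders_heap_cust_ix ON orders_heap (customer_id);

-- An indexed lookup like this one typically appears in the execution plan
-- as an INDEX RANGE SCAN followed by TABLE ACCESS BY INDEX ROWID.
SELECT order_id, order_date
FROM   orders_heap
WHERE  customer_id = 42;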

Physical location encoded directly in search key (hash tables)

This one is a little trickier, so it’s harder to find a good example for it. Let’s try this: imagine that you have a big warehouse with lots of stuff in lots of boxes spread over a large number of shelves organized in several rows. Each stored item has a unique identifier (“article number”). Consider the following system: each item is placed according to its article number, e.g. if the article number is 123456, then it goes into box 12 on shelf 34 in row 56. With this system in place, finding stuff is very easy if the article number is known.

This is the principle hash clusters are based on, except that instead of using the search key directly to determine the location of a row, a hashing function is applied to it first (in order to ensure that the data is distributed more or less uniformly – otherwise both space efficiency and performance will suffer). There’s no need to maintain an index and do an index lookup: the physical location of the data is encoded in the data itself. However, there are minuses as well. Unlike indexes, hash tables aren’t good with range scans (because the hashing function maps close values of the hash key to unrelated locations). DML performance also suffers: inserting 1,000 rows would be like receiving a shipment of 1,000 items – placing each one according to its article number takes a very long time.
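As a rough sketch (cluster and table names are hypothetical), the warehouse example could be expressed as an Oracle hash cluster along these lines:

-- A single-table hash cluster keyed on ARTICLE_NO: the block address is
-- computed by hashing the key, so an equality lookup needs no index.
CREATE CLUSTER warehouse_cluster (article_no NUMBER)
  SIZE 256
  SINGLE TABLE
  HASHKEYS 100000;

CREATE TABLE warehouse_items (
  article_no  NUMBER NOT NULL,
  description VARCHAR2(100)
)
CLUSTER warehouse_cluster (article_no);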

Even though hash clusters aren’t very commonly used in Oracle, the underlying idea is very important, because every table effectively becomes a hash table when it’s used as the “build” rowsource in a hash join – one of the two most common join methods in Oracle.

Physical separation (partitioning)

This is probably the simplest data storage strategy to grasp. For example, if we have a table FRUIT with a column FRUIT_TYPE IN (‘APPLE’, ‘ORANGE’), then we can choose to partition on this column: i.e. keep apples separate from oranges.

So if you are only interested in apples, you’ll be dealing with a much smaller pile, which makes most operations, including searching (querying), much faster.
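Here is a minimal sketch of the FRUIT example as a list-partitioned table (the extra columns are hypothetical):

-- Apples and oranges are stored in separate segments; a query filtering
-- on FRUIT_TYPE only has to touch its own partition (partition pruning).
CREATE TABLE fruit (
  fruit_id   NUMBER NOT NULL,
  fruit_type VARCHAR2(10) NOT NULL,
  picked_on  DATE
)
PARTITION BY LIST (fruit_type) (
  PARTITION p_apple  VALUES ('APPLE'),
  PARTITION p_orange VALUES ('ORANGE')
);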

One major downside of partitioning is that it doesn’t always mix well with indexes. A natural choice for a partitioned table is to partition its indexes as well, using the same criteria (local partitioning), so that each partition has its own index segment. This becomes a problem when a query doesn’t contain the partitioning key – in that case each index segment needs to be probed separately. It’s generally much faster to look up a row in one big index than in 10 smaller indexes (for the same reason it’s faster to look up a word once in a 1,000-page dictionary than to look it up in each of ten 100-page dictionaries). Global unpartitioned indexes don’t have that problem, but leaving some of the indexes unpartitioned can be less than ideal from the maintenance point of view.
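Continuing the hypothetical FRUIT sketch, the two kinds of indexes look like this:

-- A local index is partitioned along with the table: one index segment
-- per table partition.
CREATE INDEX fruit_picked_local_ix ON fruit (picked_on) LOCAL;

-- A global (unpartitioned) index is a single segment that covers all
-- partitions of the table.
CREATE INDEX fruit_id_global_ix ON fruit (fruit_id);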

That concludes the “blah-blah” part of this mini-series – from this point on, I’ll be using demos to illustrate basic data storage and search strategies, which should make things more interesting. Stay tuned!
