“4k bug” is not a bug

15 Dec

A couple of weeks back I received an update on my “4k bug” SR (slow multirow DML performance for small block sizes). As it turns out, the observed behavior is not a bug. It’s actually more interesting than that: it is an undocumented feature that chooses the insert mode based on the estimated maximum row size (the sum of the declared column sizes). If the estimated row size exceeds the block size, Oracle switches to row-by-row processing.

This can be illustrated with the example below (in a 4k tablespace):

select * from v$version;
  
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
"CORE	11.2.0.3.0	Production"
TNS for Solaris: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production  

set autotrace on statistics

create table t4k(x number, padding varchar2(4000));

insert into t4k 
select level, rpad(' ', 100, ' ')
from dual
connect by level <= 10000;

Statistics
----------------------------------------------------------
          1  recursive calls
      11200  db block gets
        313  consistent gets
        109  physical reads
    3733140  redo size
        830  bytes sent via SQL*Net to client
        857  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10000  rows processed


create table t4k2(x number, padding varchar2(2000));

insert into t4k2
select level, rpad(' ', 100, ' ')
from dual
connect by level <= 10000;


Statistics
----------------------------------------------------------
          1  recursive calls
       1255  db block gets
        616  consistent gets
         19  physical reads
    1317000  redo size
        838  bytes sent via SQL*Net to client
        858  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      10000  rows processed

As you can see, it suffices to decrease the column length so that the maximum row length falls below the block size to improve performance significantly (e.g. the redo generated drops by almost a factor of 3!).

This means that specifying large column sizes “just in case” can get you into trouble if it brings the total declared column size over the block size. Obviously, the smaller the block size, the greater the risk.

The query below identifies the tables whose bulk DML will be slow because of a high maximum row length:

  select c.owner, 
         c.table_name, 
         sum(data_length) est_row_length, 
         ts.block_size
  from dba_tab_columns c,
       (select owner, table_name, tablespace_name 
        from dba_tables 
        union select table_owner owner, table_name, tablespace_name 
        from dba_tab_partitions
       ) t,
       dba_tablespaces ts
  where c.owner = t.owner
  and c.table_name = t.table_name
  and ts.tablespace_name = t.tablespace_name
  group by ts.block_size, c.owner, c.table_name
  having sum(data_length)>=ts.block_size
  order by 1, 2

For the tables returned by this query, bulk DML will internally be performed row-by-row, decreasing performance and increasing redo generation. If you want to change this, you’ll need to either shrink the column sizes, or migrate the table to a tablespace with a larger block size.
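For example, either approach could look like this for the t4k table from the test above (a minimal sketch: the 8k tablespace name users_8k is hypothetical, and a VARCHAR2 column can only be shrunk if all existing values fit into the new length):

-- option 1: shrink the declared column size so that the maximum
-- row length fits within the 4k block
alter table t4k modify (padding varchar2(2000));

-- option 2: move the table to a tablespace with a larger block size
-- (users_8k is a hypothetical 8k tablespace); any indexes on the
-- table must be rebuilt afterwards, since the move changes rowids
alter table t4k move tablespace users_8k;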

Making sense out of optimizer trace files

9 Dec

Sometimes the optimizer makes really odd decisions. We can use optimizer (“event 10053”) tracing to obtain clues as to why such decisions were taken, but it’s not simple. First, such trace files tend to be big. Second, they don’t always contain the necessary information. There is a simple trick that can be useful in such cases.

First, I use the “optimizer_features_enable” hint to try and find the Oracle version where the logic changes. Of course, I don’t need to gather optimizer trace files for that purpose: it suffices to look at the explain plan, like this:

explain plan for
select /*+ optimizer_features_enable('x.x.x.x') */ ...

select * from table(dbms_xplan.display)

changing the version one by one.

If that odd behavior is present in all versions, we’re out of luck and the method won’t work. But if the behavior changes, then I take two optimizer trace files for the versions immediately before and after the change, and run a diff (e.g. using WinMerge or a similar tool) on the sections that list parameter values:

[Screenshot: side-by-side diff of the parameter sections from the two optimizer trace files]
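To produce each of the two trace files, something along these lines can be used (a sketch: the tracefile_identifier value is arbitrary, and the version in the hint is just an example of one of the two versions being compared):

alter session set tracefile_identifier = 'ofe_test';
alter session set events '10053 trace name context forever, level 1';

explain plan for
select /*+ optimizer_features_enable('11.1.0.7') */ ...

alter session set events '10053 trace name context off';

-- the trace file location can be found via v$diag_info
select value from v$diag_info where name = 'Default Trace File';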

Then I go through the list of parameters that have different values, and plug them into the query text with the opt_param hint one by one to see if any of them controls the behavior in question:

explain plan for
select /*+ opt_param('_optimizer_transitivity_retain', 'false') */ ...

 

It doesn’t work 100% of the time, but it has a decent shot at working, because weird optimizer behavior often comes from advanced features introduced at some point, and such features normally come with a parameter (as a rule, a hidden one) that allows them to be switched off.

But if even that doesn’t work, there is something else you can do: look further down in the optimizer trace file, at the list of bug fixes, and go through them one by one, searching MOS to see if the bug symptoms are similar to your case. But since the list of bug fixes is quite long, I am too lazy to ever do that myself.
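The same list of fixes, together with short descriptions, can also be queried from v$system_fix_control, and a candidate fix can be toggled for a single statement via “_fix_control”. A minimal sketch (the bug number 9999999 is a made-up placeholder):

-- list the optimizer bug fixes known to this version, with descriptions
select bugno, value, description, optimizer_feature_enable
from v$system_fix_control
order by bugno;

-- try disabling a candidate fix for one statement only
explain plan for
select /*+ opt_param('_fix_control' '9999999:0') */ ...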

Hope that helps someone.

Transactional integrity

4 Dec

In the database world (especially among the database developers) a commit is often viewed as some sort of a virtual “save” button — i.e. it’s something that you need to do to make your changes permanent. This is one of the reasons why developers are often happy to commit work as soon as they get a chance (even if it’s not properly finished), so that they wouldn’t lose their changes in case of an error. Unfortunately, they don’t always think the whole thing through — restarting a process interrupted half-way may be much more difficult than re-doing everything from scratch.

Continue reading

Excessive commits?

27 Nov

Let’s consider a hypothetical scenario. Suppose you have a process A that you want to run faster. This process doesn’t commit (much), so it doesn’t wait on log file sync. However, there is another multi-threaded process, B, that commits very frequently and spends a lot of time on “log file sync”. You don’t care about process B; your only goal is to make A go faster. After exhausting your tuning arsenal (to no avail), you go to a production DBA. He looks at the AWR report and says: “You’re committing too much. Look how much time the system spends on log file sync.” You tell him that process A doesn’t commit much and doesn’t suffer from log file sync waits, but the DBA insists: “Even so, reducing commits would improve the database’s performance health in general, and by doing so it would benefit your process as well. Besides, getting rid of all that log file sync noise would help us see the problem with process A more clearly.” You are convinced, and after spending some time with the code, you find a bunch of unnecessary commits inside loops. You remove them. You reduce the commit rate per second by several orders of magnitude! Your database is much healthier now! And your process A will now run… almost four times slower than it did before.

If the end of this story surprises you, it really shouldn’t. Let me show you AWR excerpts that simulate this situation:

Continue reading

Followup on the 4k DML bug

17 Nov

Another piece of good news: Oracle has opened a bug for yet another anomaly I reported earlier in my blog: row-by-row processing of bulk DML when the block size of the target table is less than the default 8k. So it’s now officially bug 20039770, “DML SLOW WITH 4K BLOCK SIZE VS 8K BLOCK SIZE”. Their bug description seems a bit off (2K shows the same behavior as 4K, and 16/32K the same as 8K, so it’s not really a case of “4K vs 8K”), but I’m sure they will update it accordingly in the course of the development team’s investigation. Unlike the log parallelism bug, this one is not yet open to the public (maybe because they don’t have anything to put into the bug note yet, not even a workaround); I’ll post an update when that changes.

If you ever work with small block sizes, I highly recommend familiarizing yourself with this bug, because its impact on DML performance is quite big (I observed a 3x-7x effect, but it could be even larger in a different setup). It makes rewrites of row-by-row logic into bulk DML pretty much useless, so if I had known about it earlier, it would have saved me a couple of weeks!
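A quick way to see whether you are exposed at all is to look for tablespaces with a small block size (a sketch, assuming the usual 8k default block size):

select tablespace_name, block_size
from dba_tablespaces
where block_size < 8192
order by block_size, tablespace_name;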

I will post another blog when Oracle concludes their investigation.

Log parallelism bug now has a number

14 Nov

A few weeks ago I wrote a post about log parallelism causing excessive log file sync waits. Ever since, I have been finding more and more examples of how this bug affects OLTP and hybrid databases (and even some data warehouses)! For example, my current employer is a large organization with several thousand databases (set up at different times by different teams), and according to the studies I conducted on a sample of a few dozen databases, no less than 15-20% of the total Oracle database real estate has this problem. In several cases the scale of the problem is simply scary: e.g. I found a database that spends 43% of its time on log file sync (!) on average, reaching up to 78% (!!) during peak workloads. Based on the feedback to my posts here and on LinkedIn, the situation in other organizations is no better. It looks like “log file sync” is a very widespread problem, and log parallelism is one of its main causes (if not the main cause).

This is why I am happy to announce that Oracle development has created a bug and assigned it a number (19959089). The bug note doesn’t have much at this stage, but at least it does list setting “_log_parallelism_max” to 1 as a workaround, which should encourage more people to test and apply this solution. It would probably still be necessary to get Oracle Support to okay changing the underscore parameter, but a reference to this bug should make that approval much simpler to obtain. Hopefully, when the bug investigation is concluded, there will be a detailed official note and/or a patch available that eliminates the need to raise an SR. I’ll post a blog about it when this happens.
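For reference, the current value of the hidden parameter can be checked as SYS, and the workaround applied roughly as follows (a sketch: the parameter is static, so the change only takes effect after an instance restart, and it should only be done with Oracle Support’s approval):

-- check the current value (run as SYS)
select p.ksppinm parameter, v.ksppstvl value
from x$ksppi p, x$ksppsv v
where p.indx = v.indx
and p.ksppinm = '_log_parallelism_max';

-- apply the workaround; takes effect after a restart
alter system set "_log_parallelism_max" = 1 scope = spfile;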

Log buffer space

14 Oct

Introduction

Log buffer space is a simple, yet frequently misunderstood wait event. The main reason for that is probably its name. It sounds as if it points immediately to the answer: if space in the log buffer is the issue, then increasing it surely should resolve it. Well, unfortunately even though log buffer space is simple, it’s not that simple.
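As a starting point, one might check how much time the instance actually spends on this wait, and what the current log buffer size is; a minimal sketch:

select event, total_waits, round(time_waited_micro/1e6) seconds
from v$system_event
where event = 'log buffer space';

select name, value from v$parameter where name = 'log_buffer';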
Continue reading

Log file sync from neither I/O nor CPU

8 Oct

The two most popular causes of log file sync waits are redo filesystem I/O and CPU (or CPU scheduling). I’d like to share an example of a third scenario.

A few days ago, I noticed a weird spike in log file sync average duration on a production 11.2 database:

select time,
       snap_id,
       round(time_delta/1e3/nullif(waits_delta, 0), 1) avg_wait_ms,
       waits_delta num_of_waits,
       round(time_delta/1e6) total_seconds
from
(
  select sn.snap_id,
         sn.begin_interval_time time,
         e.total_waits - lag(e.total_waits) over (partition by e.event_name order by e.snap_id) waits_delta,
         e.time_waited_micro - lag(e.time_waited_micro) over (partition by e.event_name order by e.snap_id) time_delta
  from dba_hist_system_event e,
       dba_hist_snapshot sn
  where e.snap_id = sn.snap_id
  and e.event_name = 'log file sync'
) ev
where ev.time_delta > 0
order by time desc

Continue reading

Lies, damned lies and non production-like performance testing

24 Sep

Chasing cost efficiency, businesses often cut back on money spent on UAT boxes used for performance testing. More often than not, this is a bad decision, because the only thing worse than not having a UAT environment is having a UAT environment that is nothing like production. It gives a false sense of security while exposing your application to all sorts of nasty surprises. In this post I tried to summarize a few typical configuration differences between UAT and production that can affect performance test results in a major way.
Continue reading

High log file sync waits? Check log parallelism!

22 Sep

Log parallelism is an optimization introduced in 9.2 that reduces latch contention when copying redo into the log buffer by maintaining multiple public redo buffers (or “strands”). In many cases, however, it can cause a massive degradation of commit performance (an increase in “log file sync” waits). In this blog post, I will describe the mechanism, illustrate it with test results, and discuss solutions. The tests were performed on several different 11gR2 databases on Solaris servers.
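To get a rough feel for how much redo-related latch contention an instance actually experiences (which is what multiple strands are meant to relieve), one could start with a simple check like this:

select name, gets, misses, sleeps, immediate_gets, immediate_misses
from v$latch
where name in ('redo allocation', 'redo copy', 'redo writing');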

Continue reading
