AWR reports: interpreting CPU usage


Let’s start with some basic concepts. AWR reports deal with several kinds of time. The simplest is elapsed time: the interval between the start and end snapshots. Another important quantity is DB time, defined as time spent in user calls during that period. It can be (and for a busy system typically is) greater than the elapsed time. However, the reason is not the number of CPUs, as some experts incorrectly state (apparently confusing it with CPU time, which we’ll discuss below, e.g. here); it’s that DB time is summed over all active user sessions, whether they are on CPU or waiting for something. Note that it only counts time spent in user calls, i.e. background processes are not included.

Another important quantity is database CPU time. It can also exceed the elapsed time, because the database can use more than one CPU. Unfortunately, AWR reports use up to 3 different names for it: CPU time, DB CPU, and CPU used by this session. Normally, they should have close values, and differences can probably be attributed to connection management (e.g. establishing or tearing down a session). And of course “CPU used by this session” is an odd name for an instance-level metric, but that’s understandable: it’s just a sum of a session-level metric over all sessions.

CPU time represents time spent on CPU and does not include time waiting for CPU. Unfortunately, the latter quantity is not accessible via AWR (but there are indirect ways of extracting it via ASH, see here).

Finally, CPU consumption in the host operating system can also be important for troubleshooting high CPU usage. AWR provides these numbers in the “Operating System Statistics” section (as “BUSY” and “IDLE”; the units are centiseconds).

DB time and DB CPU define two important timescales: wait times should be measured against the former, while CPU consumption during a certain activity (e.g. CPU time spent parsing) should be measured against the latter.

High CPU time

CPU usage is described by the “CPU time” (or “DB CPU”) statistic. Somewhat counterintuitively, an AWR report showing CPU time close to 100% in the top timed events section does not necessarily indicate a problem. It simply means that the database is busy using CPU to do work for its users. However, if CPU time (expressed in CPU seconds) becomes commensurate with the total CPU power available on the host (or shows a consistent growth pattern), then it becomes a problem, and a serious one: at best, Oracle processes will spend a lot of time in the run queue waiting to get on CPU. In the worst-case scenario, the host OS won’t have adequate resources to run and may eventually hang.

Unfortunately, AWR reports only provide CPU time either in absolute units or as a percentage of DB time, but not in terms of the overall capacity. That’s not wrong: you need to know what percentage of user-call time falls on CPU to see whether it contributes appreciably to response times. But it’s not complete, because when talking about resource usage you also need to know what percentage of the total available resource is being used. Fortunately, it’s quite simple to calculate that:

DB CPU usage (% of total CPU capacity) = CPU time / (NUM_CPUS × elapsed time)

where NUM_CPUS is found in the Operating System Statistics section. Of course, if there are other major CPU consumers on the system, the formula must be adjusted accordingly. To check for that, look at OS CPU usage either directly in the OS (using sar or another utility available on the host) or by computing BUSY_TIME/(BUSY_TIME + IDLE_TIME) from the Operating System Statistics section and comparing it to the number above.
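As a sanity check, here is this calculation as a small Python sketch (the function and variable names are mine; the sample numbers are the ones from Example 1 below):

```python
# Sketch of the DB CPU usage calculation from AWR numbers.
# "CPU time" comes from the Top Timed Events section (in seconds),
# NUM_CPUS, BUSY_TIME and IDLE_TIME from Operating System Statistics.

def db_cpu_usage(cpu_time_s, num_cpus, elapsed_s):
    """Fraction of the host's total CPU capacity consumed by the database."""
    return cpu_time_s / (num_cpus * elapsed_s)

def os_cpu_usage(busy_cs, idle_cs):
    """Fraction of host CPU busy per OS stats (inputs are in centiseconds)."""
    return busy_cs / (busy_cs + idle_cs)

# 47,879 s of DB CPU on a 32-CPU host over a 30-minute snapshot:
print(f"{db_cpu_usage(47_879, 32, 30 * 60):.1%}")   # prints "83.1%"
```

Comparing the two fractions tells you whether the database is the main CPU consumer on the host, or whether something else is competing for CPU.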

If DB CPU usage is at 80-90% of capacity (or at 70-80% and growing), then you should try to reduce CPU usage or, if that’s not possible, buy more CPU power before the system freezes.

To reduce high CPU usage one needs to find its source within the database. The first thing to check is parsing, not only because it is a CPU-intensive activity, but also because high parsing means a lack of cursor sharing, which makes diagnostics very difficult: each statement is parsed to its own sql_id, spreading the database workload over thousands of statements that differ only in parameter values. Of course, this renders all the “SQL ordered by” lists in the AWR report useless.

If parsing is reasonable, then one needs to look at SQL statements consuming most CPU (“SQL ordered by CPU time” in the CPU section of the report) to see if there is excessive logical I/O that could be reduced by tuning, or some expensive sorts that could be avoided, etc. It could also be useful to check “segments by logical reads” to see if partitioning or a different indexing strategy would help.

Unaccounted CPU time

Occasionally, CPU time may underestimate the actual CPU usage because of errors and holes in database and OS kernel code instrumentation — then one needs to rely on OS statistics to figure out how much of the OS CPU capacity the database is using.

In this case, when looking for the source of high CPU usage within the database, in addition to OS tools (top, sar, vmstat etc.) one can use indirect indications of high CPU consumption, such as:

– missing time in the “timed events” section (sum of percentages in top-5 significantly below 100%)
– high parsing (ideally CPU usage during parsing should be accounted for in “CPU time”, but that’s not always the case)
– mutex-related waits, such as “cursor: pin S wait on X” etc. (caused by high parsing, bugs, or both)
– logon storms (high number of logons in short time)
– resource manager events (“resmgr: cpu quantum”),

or look in ASH for sessions with the “ON CPU” state and see what they are doing.
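The first of those checks (missing time in the top events) is easy to automate; here is a minimal Python sketch, with hypothetical percentages and a threshold of my own choosing:

```python
# Sketch: detecting unaccounted DB time in the Top-5 Timed Events section.
# The percentages below are hypothetical "% Total Call Time" values.
top5_pct = [44.5, 20.0, 5.3, 1.3, 0.9]

unaccounted = 100.0 - sum(top5_pct)
if unaccounted > 10:    # the threshold is a judgment call, not an Oracle rule
    print(f"~{unaccounted:.0f}% of DB time unaccounted for; "
          "check CPU metrics and OS stats")     # prints "~28% ..."
```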


Let’s consider a few examples.
Example 1


DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
xxxx         xxxxxxxxx   xxxx               1  NO  xxxxxxxxx

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:     66607 02-Mar-12 12:00:52       648      19.6
  End Snap:     66608 02-Mar-12 12:30:54       639      21.4
   Elapsed:               30.04 (mins)
   DB Time:            3,436.49 (mins)

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
resmgr: cpu quantum                 475,956     152,959    321   74.2  Scheduler
CPU time                                         47,879          23.2
db file sequential read           3,174,880      15,866      5    7.6 User I/O
db file scattered read              196,255       4,078     21    2.0 User I/O
log file sync                       157,730       4,579     29    4.4 Commit
-> Total time in database user-calls (DB Time): 104720.3s
Operating System Statistics           DB/Inst: ****/****  Snaps: 66607/66608

Statistic                                       Total
-------------------------------- --------------------
BUSY_TIME                                   5,707,941
IDLE_TIME                                           1
NUM_CPUS                                           32

This is a simple case: the report has “CPU starvation” written all over it. CPU time (47,879 s), even though not the largest timed event in the database, is close to the maximum capacity (32 × 30 min × 60 s/min = 57,600 s). The top wait event (resmgr: cpu quantum) indicates that database user calls spent most of their time waiting for the Resource Manager to allocate CPU to them; that’s another symptom of extreme CPU starvation. And finally, the OS stats confirm that the CPU was completely maxed out: 1 centisecond of idle time versus 5,707,941 busy!

Fortunately, “SQL ordered by CPU time” was just as unambiguous: it showed one SQL statement responsible for 60.99% of DB time, and fixing it (it was a bad plan with poor table ordering and millions of context switches caused by PL/SQL function calls) fixed the entire database.

Now let’s consider something less trivial.

Example 2


DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
xxxx         xxxxxxxxx   xxxx               1  NO  xxxxxxxxx
Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:     38338 08-Mar-12 02:00:40       673       6.7
  End Snap:     38339 08-Mar-12 04:29:22       760       5.6
   Elapsed:              148.70 (mins)
   DB Time:           77,585.95 (mins)
Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
cursor: pin S                  ############   2,072,643      2   44.5      Other
cursor: pin S wait on X          76,424,627     929,303     12   20.0 Concurrenc
latch free                            1,958     246,702 ######    5.3      Other
CPU time                                         58,596           1.3
log file sync                       746,839      44,076     59    0.9     Commit
-> Total time in database user-calls (DB Time): 4655157.1s
Operating System Statistics
Statistic                                       Total
-------------------------------- --------------------
BUSY_TIME                                   6,327,511
IDLE_TIME                                      24,053
NUM_CPUS                                            7

There are quite a few remarkable things in this report. And there is a good story to it, too, but I’m hoping to make a separate post about it, so let’s focus on CPU stuff here.

The time period of the report spans 148.7 min, but DB time is 77,586 min, which means there were ~522 active sessions on average. Comparing that to the number of sessions (673/760 at the beginning/end), we can see that either the database was terribly busy or, more likely, most of the sessions were waiting on something. The list of timed events confirms this: it shows massive mutex contention in the library cache.
Now let’s look at the CPU time here. It’s 58,596 s, or just 1.3% of DB time… negligible! Or is it?… Let’s compare it to the total CPU time available: 148.7 minutes × 7 CPUs × 60 seconds per minute ≈ 62,454 s. In other words, the database alone was responsible for about 94% of the host’s CPU capacity during a 2.5-hour interval! More likely, it started off at a moderate level and then, for a good portion of the interval, stayed close to 100%, which averaged out to ~94%.
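A quick sketch of this arithmetic (values copied from the report header above; variable names are mine):

```python
# Example 2 arithmetic, straight from the AWR header.
elapsed_min = 148.70
db_time_min = 77_585.95
num_cpus = 7
db_cpu_s = 58_596

avg_active_sessions = db_time_min / elapsed_min     # ~522 sessions on average
cpu_capacity_s = num_cpus * elapsed_min * 60        # ~62,454 CPU-seconds
db_cpu_fraction = db_cpu_s / cpu_capacity_s         # ~0.94 of host capacity
```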

If we look again at the wait events, we don’t find any mention of CPU time at all! However, if we do the math, we can find an indirect indication: 44.5 + 20 + 5.3 + 1.3 + 0.9 = 72, so where did the remaining 28% go?… Also, “cursor: pin S wait on X” and “cursor: pin S” are both mutex waits, which can burn CPU at a very high rate (see here for details). This gives us a good idea of where the CPU is going (and if one looks in ASH, one can find exactly where it happens, but that’s beyond the scope of this post).

In this case, “SQL ordered by CPU time” was useless for finding the source of the high CPU usage, because many SQL statements were not using binds. The culprit was found by looking in ASH (that requires a bit of work, too, but I’m hoping to make a separate post about it), and fixing it fixed the problem.

Let’s consider another case.

Example 3


DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ------------
xxxx         xxxxxxxxx   xxxx               1  NO  xxxxxxxxx
              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:     33013 02-Apr-12 10:00:00       439      27.1
  End Snap:     33014 02-Apr-12 11:00:12       472      24.4
   Elapsed:               60.20 (mins)
   DB Time:              520.72 (mins)
Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
CPU time                                         15,087          48.3
db file sequential read          28,442,386       8,758      0   28.0   User I/O
enq: TX - row lock contention         1,459       3,633   2490   11.6 Applicatio
log file sync                        89,026       2,922     33    9.4     Commit
db file parallel write              169,289       2,783     16    8.9 System I/O
Operating System Statistics
Statistic                                       Total
-------------------------------- --------------------
BUSY_TIME                                   5,707,941
NUM_CPUS                                           64

Here, CPU time is responsible for almost half of DB time. This looks big. Does this mean we should rush to buy more (or faster) CPUs? Probably not, since the CPU time (15,087 s) is only a small fraction of the available CPU resource (64 CPUs × 60 min × 60 s ≈ 230,400 s). OS stats also show that CPU is not a scarce resource on this system (211,335 s idle vs 19,831 s busy).
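The contrast between the two denominators is the whole point of this example; as a sketch (values from the report above, variable names mine):

```python
# Example 3: CPU time is large relative to DB time, small relative to capacity.
db_time_s = 520.72 * 60       # DB time from the header
cpu_time_s = 15_087
num_cpus = 64
elapsed_s = 60.20 * 60        # snapshot interval

share_of_db_time = cpu_time_s / db_time_s               # ~0.48: looks big
share_of_capacity = cpu_time_s / (num_cpus * elapsed_s) # ~0.065: actually small
```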

Of course, this doesn’t mean that tuning SQL to reduce CPU consumption won’t help here. It will; it just won’t have a global effect. Therefore, it would make sense to prioritize tuning by business importance rather than by the amount of CPU consumed.


Troubleshooting high CPU usage with AWR reports can be tricky and may require other tools (such as ASH). While most waits are measured against DB time, CPU time should also be compared to the total CPU capacity of the host.


References

1. Kyle Hailey, Oracle CPU time
2. Tanel Poder, cursor: pin S waits, sporadic CPU spikes and systematic troubleshooting
3. Andrey Nikolaev, Mutex waits. Part 1. “Cursor: Pin S” in Oracle 10.2-11.1. Invisible and aggressive

Many thanks to Alexander Anokhin for very useful discussions and to Howard Guess for reviewing the post.


23 thoughts on “AWR reports: interpreting CPU usage”


    1. Hi Yasir,

      I’m not an expert on RAC, but as far as I know with the exception of the overhead due to the global cache and some gc events, performance tuning of RACs is pretty much the same as that of standalone instances. So yes, I think it applies to RAC as well.

      1. Hi Savvinov,
        What I wanted to know is: let’s suppose it’s a 2-node RAC database with 23 dual cores = 46 NUM_CPUS. For a 30-minute AWR report interval, we have 46 × 30 × 60 = 82,800 seconds of CPU time.
        Now, as there are two nodes, to calculate total CPU usage do we have to calculate the CPU usage of each node and add them?
        Does each node have 82,800 s available?
        If I want to calculate the CPU consumption of a single node, how should I proceed?

        1. You can call me Nikolay (my first name).

          In your example, you have 2 nodes, each with 23 CPUs. Over a 30-min (1800s) interval, this corresponds to 1800 x 23 = 41400 CPU-seconds per node, or 82800 CPU-seconds per the entire cluster.

          If you are analyzing an AWR report for a single node, then you should be comparing your CPU usage to the total CPU power of the node (41400 CPU-seconds). When looking at the global report, the natural scale is the total CPU power of the entire cluster (82800 CPU-seconds).

          You can cross-check these numbers against the numbers in the OS stats section of the report (for a single node, IDLE_TIME + BUSY_TIME should be close to 4,140,000 centiseconds, for the entire cluster, to 8,280,000 centiseconds).

          Hope this helps.

  2. Hello Nikolay,

    If a session waits on a particular event, will it increment the CPU Time ?
    We ‘ve a resource mgnr in place which should cancel any SQL running(elapsed time) more than 120 seconds. Since 11gR2, seems the resource manager cancels it based on the cpu time and not elapsed time and due to this, these long running SQLs aren’t cancelled. Let say if i want to fetch some data with some sort. Will the session be on the CPU all throughout ? Or, is it that if the session is “ACTIVE” it is always on the CPU ?


    1. Hi John,

      1) an ACTIVE session is either on CPU or waiting for something, i.e. waiting does not increase CPU time (here I use “waiting” in a limited sense, i.e. waiting for an OWI event)

      2) I cannot say why the Resource Manager wouldn’t cancel long-running queries without knowing how exactly you’re imposing the 120s limit and what exactly the long-running session is doing. It is also not clear to me what is the expected behavior in the following case:

      open cursor for query;
      fetch 1 row;

      (120 seconds later)
      fetch remaining rows;

      Should resource manager cancel the query or not?

      An interesting issue though, would be happy to take a look if you send me a reproducible test case (or as close approximation to it as you can get) to nsavvinv at gmail dot com.

  3. Hi Nikolay,

    >>1) an ACTIVE session is either on CPU or waiting for something, i.e. waiting does not >>increase CPU time (here I use “waiting” in a limited sense, i.e. waiting for an OWI event)

    I guess even though log file sync is an OWI event, its value is exaggerated on a starved CPU. Do we have any definite list or logic that dictates which wait event times are inflated on a starved CPU?

    1. Hi Vijay,

      indeed, when a system is extremely CPU-starved, wait statistics may be inaccurate in one way or another. I don’t think it’s possible to predict which events will be high on a CPU-starved system; it depends on too many factors.

      Best regards,

  4. Hi Nikolay,
    I am surprised that the sum of the top-5 events’ % Total Call Time is not 100%.
    Could you tell me the possible causes of this situation?

    Top 5 Timed Events

    Event                         Waits      Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
    gc buffer busy                1,460,312  93,722   64            6.1                Cluster
    CPU time                                 72,210                 4.7
    latch: cache buffers chains   104,590    25,189   241           1.6                Concurrency
    gc cr multi block request     532,411    15,469   29            1.0                Cluster
    gc cr block 2-way             162,342    10,660   66            .7                 Cluster

    Elapsed: 29.64 (mins)
    DB Time: 25,580.93 (mins)

    we have 48 core

    and via vmstat we found cpu used rate was about 90%-100%

    thanks very much

    1. Hi Lee,

      you can’t expect AWR stats to be accurate when the server is experiencing extreme CPU starvation. In such cases wait time percentages often sum to a number higher than 100%, or to a very small number close to zero (I think it’s even possible to see negative numbers, although I’m not 100% sure).

      Besides, since you know from vmstat that your problem is CPU, you don’t need to look at your wait stats. Nothing will work properly until you resolve your number 1 problem, lack of sufficient CPU resources.

      Best regards,

      1. Thanks very much. During the same AWR period,
        we found that many sessions were active waiting on the “SQL*Net message to client” event: about 500 sessions waiting on it, and more and more over time. Could you tell me why so many active sessions would wait on “SQL*Net message to client”?


        1. Hi Lee,

          this is not normal behavior. SQL*Net message is an idle wait, so a session waiting on it shouldn’t be active. I suggest that you raise a service request with Oracle support to figure out the root cause of this anomaly.

          Next time you have a question, please use the “ask me” link at the top, or the wizpert link provided there. That would help keep the blog tidy. Thanks!

  5. Sorry about the format; I edited it, but after posting I found the format is not easy to read.
    The sum of % Total Call Time for the top 5 is just 14%.

  6. Hi Nikolay,

    You mentioned that CPU time, DB CPU, and CPU used by this session should have similar values.
    I have checked with a single instance and with a 5-node RAC here.
    Apparently, in a single instance your statement is true.
    However, in RAC the value of “CPU used by this session” is not even close to the values of CPU time and DB CPU.

    on the 5 node RAC,
    DB CPU and CPU time = 9,950 s
    CPU used by this session = 397,391

    it was taken in a 59.91 minutes period.

    have you checked this in RAC environment?
    Can you shed some lights here?

    Thank you,

    1. Hi Adhika,

      first of all, “CPU used by this session” is measured in centiseconds, so if you convert it to seconds, it becomes closer to the DB CPU/CPU time values. Of course, the remaining difference is still large. There could be different explanations for this; I cannot tell you which it is without having the entire picture.

      If Active Session History data is still available, use it to check which of the metrics is closer to reality. Also, look at the OS statistics section of the report; it may give some additional information.

      Best regards,

  7. Hi Savvinov,
    Very informative article. However, I am not able to understand the point you make in Example 2 (with reference to unaccounted CPU time, if I understand right).
    1. You see that the percentages add up to only 72% of the total time and 28% is unaccounted. Do you mean that the 28% must be part of the CPU time metric under the top 5?
    2. Also, the percentages use DB time as the denominator, so how can this relate to CPU time, where we take total CPU capacity as the denominator (no. of CPUs × elapsed time × 60)?

    You say that it shows how CPU is wasted, but I really don’t see where you get that from the numbers. Please clear my doubt.

    1. Hi Kathrik,

      in the 2nd example all the relevant information is found in the CPU metrics. The discussion about unaccounted wait time is “extra” (because there could be cases where CPU stats are more ambiguous, and missing time in the AWR top events may indicate a CPU issue). So if 28% of DB time is unaccounted for, this means that it’s spent neither in waits nor on CPU, but doing something else, e.g. running uninstrumented database kernel code paths, or waiting to get on CPU. It could also mean a bug affecting stats calculation, of course. In any case, missing time in the top events is not enough to draw definitive conclusions, but it should be enough to alert the investigating DBA that something is wrong (and prompt a double-check of the metrics, including the CPU-related ones).

      Hope this helps.

      Best regards,

    1. Hi Asker,

      I’m not 100% sure, in most cases they are close. One reason why there could be two metrics instead of just one is background CPU time: DB CPU doesn’t include it. For CPU time, I haven’t found a definitive answer, so it might be that “CPU time” = “DB CPU” + “background CPU time”.

      It shouldn’t be hard to test this theory if you have some AWR reports at hand (or can generate some).

      Best regards,
