A rather odd performance issue happened at work a few days ago. All of a sudden, one of the databases started to work very slowly, and a quick look in ASH data showed that it was spending over 70% of its time waiting on “row cache lock” (not to be confused with “latch: row cache lock”).
It was a test database on an overloaded server with a non-standard configuration, so my initial reaction was to bounce the database and hope that the symptoms would go away, but that didn’t work. So there was no choice but to investigate the root cause. The next simplest thing to “turning it off and back on” was finding the blocking session in ASH (the V$ACTIVE_SESSION_HISTORY view has the columns BLOCKING_SESSION and BLOCKING_SESSION_SERIAL#). Most of the entries for this wait event had NULLs in these columns, but among the NULLs were the SIDs and serial#s of two sessions. Unfortunately, both sessions belonged to background processes, MMON and SMON, so killing them obviously wasn’t a tempting choice. MMON is responsible for collecting AWR data and the like, so the instance could probably survive if it were killed, but SMON must always be running or the instance would crash.
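A minimal sketch of the kind of ASH query used here, assuming only the standard V$ACTIVE_SESSION_HISTORY columns mentioned above (the event filter is the one from this incident):

```sql
-- Find who is blocking the "row cache lock" waiters in recent ASH samples.
-- Rows with NULL blockers are ignored; the rest are aggregated per blocker.
SELECT blocking_session,
       blocking_session_serial#,
       COUNT(*) AS samples
FROM   v$active_session_history
WHERE  event = 'row cache lock'
AND    blocking_session IS NOT NULL
GROUP  BY blocking_session, blocking_session_serial#
ORDER  BY samples DESC;
```

Joining the resulting SID/serial# pairs to V$SESSION (PROGRAM column) is what reveals that the blockers are background processes such as MMON and SMON.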
So I looked at the parameters of this event. The first parameter, P1, was always 3. According to the documentation, this parameter is the cache id, and the name of the cache can be found in V$ROWCACHE. It turned out to be DC_ROLLBACK_SEGMENTS. With this information at hand, I searched MOS for bugs with matching symptoms, but all the matches were for other Oracle versions. I took a system state dump hoping to find additional information, but all I got were the same session IDs that I had already seen in ASH. I tried a few other things, but to no avail.
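Mapping the P1 value to a cache name can be sketched with a simple lookup against V$ROWCACHE (CACHE# and PARAMETER are standard columns of that view; 3 is the P1 value observed in this incident):

```sql
-- Translate the "row cache lock" P1 value (cache id) into a cache name.
-- For this incident, cache# 3 resolved to dc_rollback_segments.
SELECT DISTINCT parameter
FROM   v$rowcache
WHERE  cache# = 3;
```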
A colleague suggested re-creating the rollback segments. At first I didn’t consider this approach promising, but since the things I deemed more promising failed to work, I decided to try it as well. Of course, it was simpler to re-create the whole UNDO tablespace rather than deal with individual rollback segments, so I created a new UNDO tablespace, updated the UNDO_TABLESPACE parameter, and tried to drop the old one. The DROP command failed because there was still an active rollback segment. I bounced the database and tried again. The rollback segment was still active (and “row cache lock” waits were still eating up most of the database time). I set the _offline_rollback_segments parameter and bounced the database once more. The old UNDO tablespace still refused to drop, and the “row cache lock” waits continued to rampage. I checked the rollback segments associated with the old UNDO tablespace and their statuses, and found that the problem rollback segment was in the NEEDS RECOVERY state. Then I tried to drop that rollback segment manually, and it worked, although only on the second attempt (the first ended with an ORA-30025 error)! The “row cache lock” waits disappeared as suddenly as they had come, and a few minutes later the DROP command for the old UNDO tablespace finally completed.
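The sequence of steps above can be sketched roughly as follows. This is an illustration, not a transcript: the tablespace names, datafile path, size, and the rollback segment name ("_SYSSMU8$") are all hypothetical placeholders; DBA_ROLLBACK_SEGS is the standard view for checking segment status:

```sql
-- Create a replacement undo tablespace (name, path, and size are illustrative)
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/DB/undotbs2_01.dbf' SIZE 2G;

-- Switch the instance to the new undo tablespace
ALTER SYSTEM SET undo_tablespace = undotbs2;

-- Check the states of the old segments; in this incident the stuck
-- segment showed up as NEEDS RECOVERY
SELECT segment_name, status
FROM   dba_rollback_segs
WHERE  tablespace_name = 'UNDOTBS1';

-- Drop the problem segment manually (segment name is a placeholder);
-- this is the step that finally cleared the "row cache lock" waits
DROP ROLLBACK SEGMENT "_SYSSMU8$";

-- After that, the old tablespace can be dropped
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
```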
4 thoughts on “Row cache lock”
So the root cause is that some of the rollback segments needed recovery? If so, why?
Thanks for visiting my blog and for your question.
I don’t know the exact mechanism that led from a rollback segment needing recovery to the (almost) entire database hanging on “row cache lock” waits. This being a test database, I don’t have enough resources to dig into the internals of this issue.
I can only assume that a background process performing the recovery of this rollback segment somehow deadlocked either itself or another session accessing the same rollback segment. I’ve seen some deadlock-related “row cache lock” bugs in the MOS database. Their versions didn’t match, so they’re probably not relevant, but there could be other unpublished bugs of the same nature.
Did you see any undo segment with the status “NEEDS RECOVERY”, or was it online but still “needed recovery”, and you found out only while dropping the tablespace? I have a similar scenario: “row cache lock” causing a DB hang with dc_rollback_segments as the culprit, but no undo segment needs recovery per DBA_ROLLBACK_SEGS.
Thanks for your question. Unfortunately, I don’t remember whether I even checked this. Like I said, this being a test database, it was not possible for me to spend much time on this issue.