In the database world (especially among database developers) a commit is often viewed as a sort of virtual “save” button: something you press to make your changes permanent. This is one of the reasons why developers are often happy to commit work as soon as they get a chance (even if it isn’t properly finished), so that they won’t lose their changes in case of an error. Unfortunately, they don’t always think this through: restarting a process that was interrupted half-way can be much harder than redoing everything from scratch.
Another aspect that is often overlooked in database development is that committing does more than make changes permanent. It also changes their visibility: it publishes them. Until a transaction is committed, its work is visible only to its own session. After that, it is visible to all sessions (well, except those whose isolation level doesn’t allow it), including those that might pick it up and pass it downstream. So a “commit” is not just a “save” button; it is also a virtual “send” button, as in an email client. If you have ever pressed “send” accidentally while working on an important email, you can appreciate that it can cause anything from mild annoyance to serious trouble.
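The “publishing” effect is easy to see with two concurrent sessions. Here is a minimal sketch using Python’s sqlite3 standing in for any transactional database (the `accounts` table is invented for illustration; in Oracle you would watch the same thing from two SQL*Plus sessions):

```python
import os
import sqlite3
import tempfile

# One shared on-disk database, two independent sessions.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
session1 = sqlite3.connect(path)
session2 = sqlite3.connect(path)

session1.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")

def count(conn):
    return conn.execute("SELECT COUNT(*) FROM accounts").fetchall()[0][0]

# Session 1 implicitly opens a transaction with its first change ...
session1.execute("INSERT INTO accounts VALUES (1, 100)")

print(count(session1))  # 1 -- visible inside the transaction's own session
print(count(session2))  # 0 -- not yet published to anyone else

session1.commit()       # the virtual "send" button

print(count(session2))  # 1 -- now every session can see it
```

Until the commit, session 2 simply cannot observe the new row; the commit is the single point where the change becomes everyone’s business.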
Transactions have a deep meaning. They are a way of grouping together several changes that make sense only as a unit, like subtracting an amount from one account and adding it to another in the classic money-transfer example. Without them, the database could only evolve from one consistent state to another via a series of publicly visible inconsistent states, leading to all sorts of issues.
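The money-transfer example can be sketched in a few lines (again with sqlite3 as a stand-in; the `transfer` function and table are illustrative assumptions, not a prescribed implementation). The point is that both updates commit together or not at all:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    # The two updates are one logical unit of work.
    try:
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
        cur = conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))
        if cur.rowcount != 1:
            raise ValueError("no such destination account")
        conn.commit()       # the transaction ends here, and only here
    except Exception:
        conn.rollback()     # undo the half-done transfer
        raise

transfer(conn, 1, 2, 30)         # ok: balances become 70 and 80
try:
    transfer(conn, 1, 99, 10)    # fails half-way: account 99 does not exist
except ValueError:
    pass

# Rollback undid the debit, so no money vanished:
print(conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall())
# [(70,), (80,)]
```

Had the code committed after the first UPDATE “to be safe”, the failed second half would have left the money subtracted but never added anywhere: exactly the publicly visible inconsistent state described above.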
Since a transaction has a meaning of its own, dictated by the business logic, it cannot be arbitrarily redefined by a developer to suit some low-level technical goal, like improving performance or reducing the risk of ORA-1555. A transaction ends when all the actions that are a logical part of it are completed, no sooner and no later. Ignoring that throws away something of fundamental importance for a very doubtful benefit (actually, more often than not, for no benefit at all, as most recommendations to twist transactional logic are based on myths).
Of course, not every database transaction is equally sacred. Any database application carries tons of technical, bookkeeping and logging information that doesn’t need to meet the high requirements of transactional integrity. Here a developer can afford less stringent rules (and use autonomous transactions, batch information from logically different transactions together, commit intermediate states, etc.). But when the same is done with the core application data, it’s bad. It means either that bad things are about to happen to that application, or that it doesn’t really need transactional integrity. In the latter case, investing thousands of dollars per core in license costs for the best transactional database ever was probably not a very wise decision for this business (so the bad thing already happened!).
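The idea behind autonomous transactions for such bookkeeping data can be sketched with a separate session that commits on its own schedule (in Oracle this lives inside one database via PRAGMA AUTONOMOUS_TRANSACTION; here two sqlite3 connections with an invented `audit_log` table merely illustrate the principle):

```python
import sqlite3

business = sqlite3.connect(":memory:")   # core application data
logger = sqlite3.connect(":memory:")     # bookkeeping data, committed independently

business.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
logger.execute("CREATE TABLE audit_log (msg TEXT)")

business.execute("INSERT INTO orders VALUES (1, 'widget')")   # business txn open
logger.execute("INSERT INTO audit_log VALUES ('order 1 attempted')")
logger.commit()          # the log entry is permanent regardless of the order's fate

business.rollback()      # the business transaction fails ...
orders = business.execute("SELECT COUNT(*) FROM orders").fetchall()[0][0]
entries = logger.execute("SELECT COUNT(*) FROM audit_log").fetchall()[0][0]
print(orders, entries)   # 0 1 -- the audit trail survived the rollback
```

This is exactly the kind of data where looser rules are fine: a log entry about a failed attempt is useful precisely because it outlives the rolled-back work.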
So I would suggest taking the following popular recommendations with skepticism:
– “commit less frequently because it reduces log file sync waits” (reduce the average log file sync wait instead; besides, unnecessary commits should be removed regardless of performance considerations)
– “commit more frequently because this protects you from losing data” (losing an unfinished transaction is better than having the database in an inconsistent state and then cleaning it up manually)
– “commit more frequently because this helps against ORA-1555” (no, it doesn’t, so you’d be throwing transactional integrity out the window for no good reason at all).
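To see why the second recommendation backfires, consider a batch job that commits after every row “so no work is lost”. A minimal sketch (sqlite3 again as a stand-in; the `prices` table and the repricing job are invented for illustration): a mid-run failure leaves a half-processed state that is already published, and rollback has nothing left to undo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (id INTEGER PRIMARY KEY, amount INTEGER)")
conn.executemany("INSERT INTO prices VALUES (?, ?)", [(i, 100) for i in range(4)])
conn.commit()

def reprice_commit_per_row(conn):
    # Anti-pattern: commit inside the loop "to avoid losing work".
    for (row_id,) in conn.execute("SELECT id FROM prices ORDER BY id").fetchall():
        if row_id == 2:
            raise RuntimeError("process died half-way")
        conn.execute("UPDATE prices SET amount = amount + 10 WHERE id = ?", (row_id,))
        conn.commit()   # publishes an intermediate, inconsistent state

try:
    reprice_commit_per_row(conn)
except RuntimeError:
    pass

conn.rollback()  # too late: the intermediate commits are already permanent
print(conn.execute("SELECT amount FROM prices ORDER BY id").fetchall())
# [(110,), (110,), (100,), (100,)]
```

With a single commit at the end, the failure would have cost only a re-run; with a commit per row, it costs a manual cleanup of a half-repriced table.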
The best approach is to stick to the natural definition of a transaction according to the business logic, and to resolve all technical problems by other means. Oracle offers a rich variety of tools for that.