Transactions

Chris Kelleher

Administrator
Staff member
I would highly recommend that you review the Progress Programming Handbook (either the printed documentation or E-DOC), chapter 12 on Transactions. Here is a quick summary:

How does Progress know where to start the transaction and how much work to undo or back out? The following transaction blocks start a transaction if one is not already active:

· Any block that uses the TRANSACTION keyword on the block statement (DO, FOR EACH, or REPEAT).

· A procedure block, trigger block, and each iteration of a DO ON ERROR, FOR EACH, or REPEAT block that directly updates the database or directly reads records with EXCLUSIVE-LOCK. You use EXCLUSIVE-LOCK to read records in multi-user applications. See Chapter 13, "Locks," for more information on locking.

Directly updating the database means that the block contains at least one statement that can change the database. CREATE, DELETE, and UPDATE are examples of such statements.

If a block contains FIND or FOR EACH statements that specify EXCLUSIVE-LOCK, and at least one of the FIND or FOR EACH statements is not embedded within inner transaction blocks, then the block is directly reading records with EXCLUSIVE-LOCK.

Note that DO blocks do not automatically have the transaction property. Also, if the procedure or transaction you are looking at is run by another procedure, you must check the calling procedure to determine whether it starts a transaction before the RUN statement.
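To see the difference in practice, here is a small sketch of my own (not from the handbook; it assumes the sports demo database and the built-in TRANSACTION function, which returns TRUE while a transaction is active). A plain DO block leaves the transaction scoped to the enclosing procedure block, while the TRANSACTION keyword makes the DO block itself the transaction block:

[code]
/* p-do1.p (hypothetical name) - plain DO block: the procedure block
   becomes the transaction block, so the transaction is still active
   after the DO block ends. */
DO:
    FIND FIRST customer EXCLUSIVE-LOCK.
    customer.postal-code = "01730".
END.
MESSAGE "In a transaction?" TRANSACTION.   /* displays yes */
[/code]

[code]
/* p-do2.p (hypothetical name) - DO TRANSACTION: the DO block is the
   transaction block, so the transaction ends at its END statement. */
DO TRANSACTION:
    FIND FIRST customer EXCLUSIVE-LOCK.
    customer.postal-code = "01730".
END.
MESSAGE "In a transaction?" TRANSACTION.   /* displays no */
[/code]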

Once a transaction is started, all database changes are part of that transaction, until it ends. Each user of the database can have just one active transaction at a time.

[code]
/* p-txn2.p */
REPEAT:
    INSERT order WITH 2 COLUMNS.
END.
[/code]

This procedure has two blocks: the procedure block and the REPEAT block. The procedure block contains nothing outside the REPEAT block.

The REPEAT block contains an INSERT statement that lets you add order records to the database. Because the REPEAT block is the outermost block that contains direct updates to the database, it is the transaction block.

At the start of an iteration of the REPEAT block, Progress starts a transaction. If any errors occur before the END statement, Progress backs out any work done during that transaction.
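For example, an explicit UNDO has the same effect as an error (this is a sketch of my own, not from the handbook; it assumes the sports demo order and customer tables and a hypothetical procedure name):

[code]
/* p-txn2a.p (hypothetical) - back out the current iteration's work. */
REPEAT:
    CREATE order.
    UPDATE order.order-num order.cust-num.
    IF NOT CAN-FIND(customer WHERE customer.cust-num = order.cust-num) THEN DO:
        MESSAGE "No such customer - the new order is backed out.".
        UNDO, RETRY.   /* undoes the CREATE and UPDATE for this iteration */
    END.
END.
[/code]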

Note that data-handling statements that cause Progress to automatically start a transaction for a regular table will not cause Progress to automatically start a transaction for a work table or temporary table.
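To illustrate (again my own sketch, using the built-in TRANSACTION function and the sports demo order table): creating a temp-table record by itself leaves TRANSACTION at FALSE, while a database CREATE runs inside a transaction:

[code]
DEFINE TEMP-TABLE ttOrder NO-UNDO
    FIELD order-num AS INTEGER.

CREATE ttOrder.                              /* temp-table only */
MESSAGE "In a transaction?" TRANSACTION.     /* displays no     */

DO TRANSACTION:
    CREATE order.                            /* database table  */
    MESSAGE "In a transaction?" TRANSACTION. /* displays yes    */
    UNDO, LEAVE.                             /* throw the demo record away */
END.
[/code]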

Consider another example:

[code]
/* p-txn3.p */
REPEAT:
    INSERT order WITH 2 COLUMNS.
    FIND customer OF order.
    REPEAT:
        CREATE order-line.
        order-line.order-num = order.order-num.
        DISPLAY order-line.order-num.
        UPDATE line-num order-line.item-num qty price.
    END.
END.
FOR EACH salesrep:
    DISPLAY sales-rep rep-name.
    UPDATE region.
END.
[/code]

This procedure has four blocks:

· Procedure block - It contains no statements that directly update the database (all of the updates are inside the REPEAT and FOR EACH blocks), so Progress does not start a transaction at the start of the procedure.

· Outer REPEAT block - The outermost block that directly updates the database (INSERT order WITH 2 COLUMNS). Therefore, it is a transaction block. On each iteration of this block, Progress starts a transaction. If an error occurs before the end of the block, all work done in that iteration is undone.

· Inner REPEAT block - Directly updates the database but it is not the outermost block to do so. Therefore, it is not a transaction block. It is, however, a subtransaction block. Subtransactions are discussed later in this chapter.

· FOR EACH block - An outermost block that directly updates the database (UPDATE region). Therefore, it is a transaction block. On each iteration of this block, Progress starts a transaction. If an error occurs before the end of the block, all work done in that iteration is undone.

Hope this helps.

 

Chris Kelleher

Administrator
Staff member
I was just re-reading my post and forgot one very important thing: Transactions should NEVER (well, almost never) span user interaction. Consider the user who hits an "Update" button, then goes out for a long lunch. That open transaction will not only lock that record (or records), but will also cause the BI file to grow, since those clusters cannot be reused until the transaction is finished.

This is an example of a BAD programming technique:
[code]
FIND FIRST customer EXCLUSIVE-LOCK.
UPDATE customer.name.
[/code]

This transaction will stay open until the user is finished updating the customer's name.

A better approach might be something like:
[code]
FIND FIRST customer NO-LOCK.
PROMPT-FOR customer.name.
DO TRANSACTION:
    FIND CURRENT customer EXCLUSIVE-LOCK.
    ASSIGN customer.name.
END.
[/code]

This way the transaction occurs almost instantly, after the user is finished making changes. Note these are just simple examples.
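If the record might already be locked by someone else, a NO-WAIT variation keeps the client from blocking (again just a sketch of my own against the sports demo customer table; the message text is made up):

[code]
FIND FIRST customer NO-LOCK NO-ERROR.
IF NOT AVAILABLE customer THEN RETURN.

PROMPT-FOR customer.name.

DO TRANSACTION:
    FIND CURRENT customer EXCLUSIVE-LOCK NO-WAIT NO-ERROR.
    IF LOCKED customer THEN DO:
        MESSAGE "Customer is being updated by someone else - try again later.".
        UNDO, LEAVE.
    END.
    ASSIGN customer.name.
END.

FIND CURRENT customer NO-LOCK NO-ERROR.   /* drop the remaining lock */
[/code]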

Hope this helps.

-Chris
 
Hello Everyone, hope you all are well. :)

While working with transactions, I always keep in mind that my transaction scope should be as small as possible. But I am unable to envision the practical impact of that. For example: if the record scope extends outside the block, the record is still in the record buffer; likewise, what happens if the transaction scope is long?

Please give your views on this.

Thanks & Regards!
Rajat.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
While working with transactions, I always keep in mind that my transaction scope should be as small as possible. But I am unable to envision the practical impact of that. For example: if the record scope extends outside the block, the record is still in the record buffer; likewise, what happens if the transaction scope is long?

For a start I suggest you download the PDF documentation set for your release. In the ABL Essentials manual, read the chapters on Managing Transactions, Handling Data and Locking Records, Record Buffers and Record Scope, and Procedure Blocks and Data Access. They should answer your questions.

In short, long transactions reduce concurrency. The records you update remain locked until the end of your transaction. The longer it is, the longer you prevent other users from locking those records. Also, holding many locks concurrently due to a large transaction scope greatly increases the risk of using up all of the entries in the lock table.
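As a rough illustration (my own sketch against the sports demo customer table, not a recommendation to bulk-update credit limits): the first form holds a lock, and a lock-table entry, for every customer until the outer block ends; in the second, each iteration of the FOR EACH is its own transaction and releases its lock as soon as it commits.

[code]
/* One big transaction: every customer stays locked (and uses a lock-table
   entry) until the END of the outer DO TRANSACTION. */
DO TRANSACTION:
    FOR EACH customer EXCLUSIVE-LOCK:
        customer.credit-limit = customer.credit-limit * 1.1.
    END.
END.

/* Per-iteration transactions: each record is committed and unlocked
   before the next one is read. */
FOR EACH customer EXCLUSIVE-LOCK:
    customer.credit-limit = customer.credit-limit * 1.1.
END.
[/code]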
 

GregTomkins

Active Member
They (crazily long transactions) also prevent .bi files from being recycled, which can lead to DB crashes if you run out of disk space or exceed the 2GB limit.

I think the notion of keeping transactions as small/short as possible is one of the (few?) things that every experienced Progress developer would agree on.

Disclaimer: the 2GB limit might not still apply, but it did a few decades ago when I dealt with DB crashes personally.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
The 2 GB limit may still apply if your OS doesn't support large files or if your DB isn't enabled for large files, e.g. if it's Workgroup or if it's Enterprise without the Large Files flag set. The limit is per-extent, not for the whole area. But you're right, BI growth is another important reason to limit transaction scope.
 

RealHeavyDude

Well-Known Member
As Rob said: long transactions don't go well with concurrency or with using server resources as efficiently as possible. While you might not even notice it in your development environment, it can bite you where it really hurts in a production environment where "everybody" competes for server resources:
  • You might prevent interactive users from doing their work because they are getting messages that the record they are attempting to save to the database is in use by some other process.
  • Procedures running in batch mode might even hit the lock wait time-out (default is 30 minutes) and end with a stop condition when records stay locked.
  • You use much more system resources (memory and disk space) and you might even blow a limit (lock table, before-image space) so that either the client process crashes or the database server shuts itself down abnormally when it can no longer write to the before-image file (this is to ensure consistency). That might be as bad as having to restore the database from your last good backup and roll forward your after-image files (which, in such a scenario, you will really, really, really appreciate having).
  • The longer a transaction stays open, the better the chance that something happens in between that causes it to be undone.
These are just some very relevant points. This is my opinion: you should always code cleanly and take responsibility for performance and for not wasting resources. Today your application might just be used by that handful of forgiving users, but tomorrow - you'll never know - your application might need to scale to some hundred extra users and much more data because your company just acquired another one ...

From my experience: most performance issues I have seen were caused by bad software. If that's the case and you are tuning the database server for better performance, you are only treating the symptoms.

Heavy Regards, RealHeavyDude.
 