Limit transaction to N records

pinne65

Member
We are running 10.2B and are prepping for a migration. As part of that we are purging a large number of records.

The way the purge currently goes is:

DEF VAR min-id AS INT NO-UNDO.
DEF VAR max-id AS INT NO-UNDO.

DEF VAR num-records AS INT INITIAL 0 NO-UNDO.
DEF VAR trans-active AS LOGICAL INITIAL no NO-UNDO.

DISABLE TRIGGERS FOR LOAD OF big-table.
DEFINE BUFFER buf-4-big-table FOR big-table.

FOR EACH big-table
    WHERE min-id <= big-table.id AND big-table.id <= max-id NO-LOCK:
    DO TRANSACTION:
        FIND buf-4-big-table WHERE
            RECID(buf-4-big-table) = RECID(big-table) EXCLUSIVE-LOCK.
        DELETE big-table.
    END.
END.

I'd like it not to start a new transaction for every delete, but rather one for every 10, 100, ... deletes.

Something in the spirit of:

START WISHFUL CODE >>>>
FOR EACH big-table
    WHERE min-id <= big-table.id AND big-table.id <= max-id NO-LOCK:
    num-records = num-records + 1.

    IF num-records MODULO 10 = 0 THEN
    DO:
        IF trans-active = yes THEN
            END TRANSACTION.
        DO TRANSACTION.
    END.

    FIND buf-4-big-table WHERE
        RECID(buf-4-big-table) = RECID(big-table) EXCLUSIVE-LOCK.
    DELETE big-table.
END.

IF trans-active = yes THEN
END TRANSACTION.

<<<<< END WISHFUL CODE.

The reason is that we only have a small amount of time to run this purge in the early AM, when nobody is on the system. Sometimes the purge runs long and risks interfering with business, and we want to be able to clobber the process without a MEGA rollback taking place. But we also don't want to BEGIN/END a transaction for every single record.

Anyone have any ideas?

Cheers /Pinne
 

D.Cook

Member
Yep, you can do this with an outer loop and a logical flag. For example (I've just mocked this up and don't know if it'll compile, but hopefully it gives the idea):

Code:
DEF VAR min-id AS INT NO-UNDO.
DEF VAR max-id AS INT NO-UNDO.

DEF VAR num-records AS INT INITIAL 0 NO-UNDO.
DEF VAR keep-going AS LOG INITIAL no NO-UNDO. /* REPEAT is a reserved keyword, so the flag needs another name */
DEF VAR trans-limit AS INT INITIAL 10 NO-UNDO. /* adjust as necessary */

DISABLE TRIGGERS FOR LOAD OF big-table.

REPEAT:
    keep-going = no.
    DO TRANSACTION:
        FOR EACH big-table
            WHERE min-id <= big-table.id AND big-table.id <= max-id EXCLUSIVE-LOCK:
            num-records = num-records + 1.
            DELETE big-table.
            IF num-records MODULO trans-limit = 0 THEN
            DO:
                keep-going = yes.
                LEAVE. /* leaves the FOR EACH; the transaction block then ends and commits */
            END.
        END.
    END.
    IF NOT keep-going THEN
        LEAVE.
END.

Also, the purpose of your second buffer is unclear. If you look closely at your code, you are deleting the default buffer (big-table) by implicitly upgrading a SHARE-LOCK to an EXCLUSIVE-LOCK. The other buffer (buf-4-big-table) is never actually used as far as I can see.
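If it helps, here's roughly what I'd expect that loop to look like with the buffer actually doing the work (untested sketch, using ROWID rather than RECID; still one transaction per record, just to show the buffer usage):

Code:
FOR EACH big-table
    WHERE min-id <= big-table.id AND big-table.id <= max-id NO-LOCK:
    DO TRANSACTION:
        /* lock and delete through the same buffer; the NO-LOCK
           default buffer only drives the scan */
        FIND buf-4-big-table WHERE
            ROWID(buf-4-big-table) = ROWID(big-table) EXCLUSIVE-LOCK.
        DELETE buf-4-big-table.
    END.
END.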
 

rzr

Member
Code:
DEFINE VARIABLE min-id AS INTEGER NO-UNDO.
DEFINE VARIABLE max-id AS INTEGER NO-UNDO.
DEFINE VARIABLE iCnt   AS INTEGER NO-UNDO.

DISABLE TRIGGERS FOR LOAD OF big-table.

DEFINE BUFFER buf-4-big-table FOR big-table.

Start_Transaction:
DO FOR buf-4-big-table TRANSACTION WHILE TRUE:

    FOR EACH big-table
        WHERE big-table.id GE min-id
          AND big-table.id LE max-id NO-LOCK:

        FIND buf-4-big-table
            WHERE ROWID(buf-4-big-table) = ROWID(big-table)
            EXCLUSIVE-LOCK NO-WAIT NO-ERROR.

        /* skip records somebody else has locked */
        IF NOT AVAILABLE buf-4-big-table THEN NEXT.

        DELETE buf-4-big-table.
        iCnt = iCnt + 1.

        /* every 100 deletes, commit and start a fresh transaction */
        IF iCnt MODULO 100 = 0 THEN NEXT Start_Transaction.
    END.

    LEAVE Start_Transaction.
END.
 

TomBascom

Curmudgeon
Not to rain on the parade but...

This probably won't help.

In my testing I have found that grouping transactions is quite useful for creates and updates.

But it doesn't have much impact on deletes.
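If you want to see how it behaves on your own data, a rough ETIME harness along these lines (untested sketch; assumes a throwaway copy of big-table you can reload between runs) will show the difference, or lack of one, as you vary the group size:

Code:
DEFINE VARIABLE grp    AS INTEGER INITIAL 100 NO-UNDO. /* group size under test; try 1, 10, 100, ... */
DEFINE VARIABLE n      AS INTEGER NO-UNDO.
DEFINE VARIABLE iStart AS INTEGER NO-UNDO.

DISABLE TRIGGERS FOR LOAD OF big-table.

iStart = ETIME(YES). /* reset the millisecond timer */

Purge_Loop:
DO TRANSACTION WHILE TRUE:
    FOR EACH big-table EXCLUSIVE-LOCK:
        DELETE big-table.
        n = n + 1.
        /* commit and open a new transaction every grp deletes */
        IF n MODULO grp = 0 THEN NEXT Purge_Loop.
    END.
    LEAVE Purge_Loop.
END.

MESSAGE n "deletes in" ETIME "milliseconds".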
 

D.Cook

Member
Tom, I'm guessing you're referring to performance, which is interesting to know because I would have assumed it would make a difference.

But for Pinne's situation, where he wants to be able to stop the process without rolling everything back, I think grouping would still help. It also helps avoid the lock table overflow error (every uncommitted delete holds an entry in the lock table, whose size is set by the -L startup parameter), which is why we group when purging large numbers of records.
 

TomBascom

Curmudgeon
Yes, you could mitigate that by grouping. A group size of 1 will give the best performance (for DELETE).
 

pinne65

Member
I guess the title I chose for this thread was a little misleading. I was really after expanding the transaction to encompass more records. But you already figured that out.

My situation has also changed: I now need to run this purge during regular business hours in order to meet a deadline, but without affecting business. I think I will create a new thread about that.
 