Lock table issue - Transaction problem

MSN

New Member
Hi everybody,

I'm encountering a "lock table overflow" problem, but I don't know why and I also don't know how to solve it :confused:

I have to update 500,000 records, but my database's "lock table" setting only allows 400,000 entries, and it always fails :(
I've tried many solutions and my "final try" is the code below. What's wrong with it, please? :confused:

edit: I should add that I'm updating more than 20 tables in the same program, could that be the cause?

(I assume some lines could be deleted, but as I'm totally lost in this problem... I don't want to make another mistake :( )

Code:
FIND FIRST myTable WHERE <....> NO-LOCK NO-ERROR.
DO WHILE AVAILABLE myTable:

    FIND CURRENT myTable EXCLUSIVE-LOCK.
    RUN update-records(<params>).
    FIND CURRENT myTable NO-LOCK.
    FIND NEXT myTable NO-LOCK NO-ERROR.
END.

update-records contains :

Code:
    DEFINE INPUT PARAMETER ph-handle AS HANDLE    NO-UNDO.
    DEFINE INPUT PARAMETER pi-a      AS INTEGER   NO-UNDO.
    DEFINE INPUT PARAMETER pc-b      AS CHARACTER NO-UNDO.
    DEFINE INPUT PARAMETER pd-c      AS DATE      NO-UNDO.

    DO TRANSACTION:
        ph-handle:BUFFER-FIELD("fieldA"):BUFFER-VALUE = pi-a.
        ph-handle:BUFFER-FIELD("fieldB"):BUFFER-VALUE = pc-b.
        ph-handle:BUFFER-FIELD("fieldC"):BUFFER-VALUE = pd-c.
    END.
 

joey.jeremiah

ProgressTalk Moderator
Staff member
First question: is it necessary for the transaction to be that big?

Meaning, must either all half a million records be updated or none at all, in case the transaction fails or is aborted?
 

MSN

New Member
First question: is it necessary for the transaction to be that big?

Meaning, must either all half a million records be updated or none at all, in case the transaction fails or is aborted?

Thanks for your reply.


No, it's not necessary for it to be one TRANSACTION; that's exactly my problem. I don't know how to reduce the transaction size (the total of updated records will still be > 500,000). I've used the TRANSACTION keyword, procedures, and an external program, thinking each could reduce the transaction, but none of that seems to be the right solution :(


If it helps you understand my procedure:

I'm "consolidating" (sorry, I'm not sure it's the right word in English) packages of data provided by several processes. Each of these packages has an identifier (id1, id2, etc...) and I have to update them so they all share one unique id.
 

jdgibson

New Member
The code

do while

causes the entire update to be one transaction.

If you change it to

repeat while

it will be one transaction for each iteration of the loop, and you will no longer have the lock table overflow problem
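
Applied to the loop from the first post, that change would look something like this (a sketch keeping the original `<....>` and `<params>` placeholders):

```abl
FIND FIRST myTable WHERE <....> NO-LOCK NO-ERROR.

/* REPEAT has an implicit transaction block, so each iteration
   commits on its own instead of accumulating half a million locks. */
REPEAT WHILE AVAILABLE myTable:
    FIND CURRENT myTable EXCLUSIVE-LOCK.
    RUN update-records(<params>).
    FIND CURRENT myTable NO-LOCK.
    FIND NEXT myTable NO-LOCK NO-ERROR.
END.
```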
 

Casper

ProgressTalk.com Moderator
Staff member
If you need to go through all the records of a table, why not do something like:
Code:
define buffer b-table for table.
for each table where <   > no-lock:

    casper:
    do transaction
        on error undo casper, retry casper
        on stop  undo casper, retry casper
        on quit  undo casper, retry casper:

        if retry
        then do:
            /* do your error handling stuff */
            leave casper.
        end.

        find b-table where recid(b-table) = recid(table) exclusive-lock
            no-error.
        if available b-table
        then do:
            /* do your stuff */
            release b-table.
        end.
    end. /* transaction */
end. /* for each table */

This way you have one transaction per record update.

HTH,

Casper.
 

Casper

ProgressTalk.com Moderator
Staff member
The code

do while

causes the entire update to be one transaction.

If you change it to

repeat while

it will be one transaction for each iteration of the loop, and you will no longer have the lock table overflow problem

DO doesn't have an implicit transaction, repeat does.

So another option :) would be:

Code:
DO WHILE AVAILABLE myTable TRANSACTION:
 
END.
Which also causes the transaction not to scope to the procedure block. (Which is what happens if you don't use the keyword TRANSACTION...)
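
Spelled out against the loop from the first post, that would be something like (a sketch, same placeholders as the original):

```abl
FIND FIRST myTable WHERE <....> NO-LOCK NO-ERROR.

/* TRANSACTION on the DO block gives one transaction per iteration,
   rather than letting the transaction scope up to the procedure. */
DO WHILE AVAILABLE myTable TRANSACTION:
    FIND CURRENT myTable EXCLUSIVE-LOCK.
    RUN update-records(<params>).
    FIND CURRENT myTable NO-LOCK.
    FIND NEXT myTable NO-LOCK NO-ERROR.
END.
```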

Casper.
 

TomBascom

Curmudgeon
FYI -- if you take any of the good advice above and the compiler warns that there is already a transaction active, then you have a scoping issue that will also need to be dealt with before your problem is completely cured.
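
To illustrate (a hypothetical sketch, not code from this thread): a lock taken on the same buffer before the loop can raise the transaction scope to the whole procedure, so the inner TRANSACTION block ends up nested inside an already-active transaction:

```abl
/* Hypothetical: this EXCLUSIVE-LOCK outside any transaction block
   scopes the transaction up to the procedure block ... */
FIND FIRST myTable WHERE <....> EXCLUSIVE-LOCK NO-ERROR.

/* ... so this block no longer gets its own transaction per
   iteration, and the compiler warns about the active transaction. */
DO WHILE AVAILABLE myTable TRANSACTION:
    RUN update-records(<params>).
    FIND NEXT myTable EXCLUSIVE-LOCK NO-ERROR.
END.
```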
 

bulklodd

Member
If you need to go through all the records of table why not do something like:
Code:
define buffer b-table for table.
For each table where <   > no-lock:
 
  casper:
  do transaction
      on error undo casper, retry casper
      on stop undo casper, retry casper
      on quit undo casper, retry casper:
Casper.

Why's it so complex? You can reduce the number of nested blocks as follows:

Code:
define buffer b-table for table.
casper:
For each table where <   > no-lock transaction
      on error undo casper, retry casper
      on stop undo casper, retry casper
      on quit undo casper, retry casper:
 

Casper

ProgressTalk.com Moderator
Staff member
True,

Just a matter of style, I guess. In this case an operation seems to happen on every record. If that's not the case, I think my method is clearer.

But ok, point taken. :awink:

Casper.
 

Casper

ProgressTalk.com Moderator
Staff member
:D <-- me lucky

Just wandered through the KBs and saw some issues. Most are the result of deeply nested blocks. I always try to avoid 'overnesting' (if that's a word...)

But I must admit, there were more issues than I imagined.....

Tx,

Casper.
 

Huushawdadi

New Member
Not sure if you've already taken care of your issue, but for large updates to a db, generally on a table-by-table basis, I prefer to have more than one update per transaction.

This reduces the amount of writing to the image files, speeds up the updates to the db, and also has a positive effect if you have replication running on your dbs. All in all, it equates to a faster update.

This solution isn't always suitable, and I also use the one Casper provided earlier as a more common method, but generally for smaller updates.

Anyway, see below (I wrote it from memory so I may have forgotten a little bit, but the general idea is there. Go easy, guys.):

a) You can define your own limit on the number of updates per transaction. I generally choose between 1,000 and 5,000, depending on the size of the db, the number of users accessing it, and the number of tables I'm updating.

b) As Tom mentioned, be very careful of record and transaction scoping outside the "MainBlock" repeat block. If this is not your top-level program, or you have found a record outside the MainBlock in anything other than NO-LOCK status, then the transaction will scope to the procedure level or higher. It is best not to refer to any buffers referenced within the MainBlock outside of it.


Code:
def var li-MaxUpdates as integer no-undo initial 1000.
def var li-UpdateCtr  as integer no-undo initial 0.
 
 
MainBlock:
repeat:
 
    li-UpdateCtr = 0.
 
    find first record
        where < >
        exclusive-lock no-error.
 
    UpdateBlock:
    do while available record and li-UpdateCtr < li-MaxUpdates:
 
        <Update record>
 
        li-UpdateCtr = li-UpdateCtr + 1.
        find next record
            where < >
            exclusive-lock no-error.
 
    end. /* UpdateBlock */
 
    if not available record then
        leave MainBlock.
 
end. /* MainBlock */

Enjoy.
 