Empty Temp-table

Kalan

Member
Hi,

I am using Progress version 10.1C.

Is the statement below applicable to version 10.1C? I am sometimes experiencing this issue in 10.1C as well. Please advise.

You cannot use the EMPTY-TEMP-TABLE() method or the EMPTY TEMP-TABLE
statement within a transaction unless that TEMP-TABLE has been defined
as a NO-UNDO TEMP-TABLE.
Doing so results in the following error:
Only NO-UNDO temp-tables may be emptied during an active transaction.
(6485)
 

Cringer

ProgressTalk.com Moderator
Staff member
Yes it's applicable. The solution is there in the message.

Code:
DEFINE TEMP-TABLE ttWork NO-UNDO
  FIELD...
 

Kalan

Member
Thanks. I'm currently analysing a support issue and I suspect this might be the reason.
But when testing this in the OpenEdge 10.1C procedure editor, if we use EMPTY TEMP-TABLE within a transaction on a temp-table defined without NO-UNDO, it still empties the temp-table. Hence I have failed to prove that EMPTY TEMP-TABLE is the cause, because the ProKB says this defect exists in Progress version 9 only. Is there any way to reproduce this issue in the procedure editor?
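
One thing that might be worth trying when attempting to reproduce it: make sure the temp-table actually has pending undoable changes inside the active transaction before the EMPTY. Note this is only an assumption about when 6485 fires, not documented behaviour:

Code:
/* Untested sketch. Assumption: 6485 may only be raised when the
   temp-table has undoable changes pending inside an active transaction. */
DEFINE TEMP-TABLE ttWork /* UNDO by default */
  FIELD ObjID AS INTEGER.

DO TRANSACTION:
  CREATE ttWork.            /* an undoable change inside the transaction */
  ttWork.ObjID = 1.
  EMPTY TEMP-TABLE ttWork.  /* may raise error 6485 on affected versions */
END.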
 

Cringer

ProgressTalk.com Moderator
Staff member
Didn't realise it was v9 only, but you're right: the following works in v10 but not in v9.

Code:
DEFINE TEMP-TABLE ttWork
  FIELD ObjID AS INT.


DO  TRANSACTION:
  EMPTY TEMP-TABLE ttWork. 
END.

Therefore I'm not sure what's going on.
 

Cringer

ProgressTalk.com Moderator
Staff member
Could it be this?
Status: Unverified

SYMPTOM(s):

Error raised when updating the database

The pathing for the database trigger code is incorrect; however, an unexpected error is raised:

Only NO-UNDO temp-tables may be emptied during an active transaction.
(6485)

Using an AppServer / WebServer connection

CAUSE:

If the PROPATH is not set correctly and the trigger code is not compiled, then the triggers will fail if they contain references to, for example, .i files. This is expected, though the 6485 error is unexpected; one would expect something like error 293:
<file.i> was not found (293)


FIX:

No real fix for this. The 6485 was raised because the code was backing out of a transaction while emptying a temp-table after hitting the PROPATH problem. The log files for the Web Server / AppServer should show the 293 error, but the 6485 may be confusing.
 

Kalan

Member
Is there any chance of getting this issue in OpenEdge when the transaction or the chain of internal program calls is large? Based on the data setup, I believe this might be the reason we are getting it in the live application. But somehow I need to reproduce it :(
 

tamhas

ProgressTalk.com Sponsor
What possible use case would you have for emptying a TT in the middle of a transaction?
 

Stefan

Well-Known Member
What possible use case would you have for emptying a TT in the middle of a transaction?

Strange question. I would say there are endless use cases. If, for example, a pricing module performs calculations with temp-tables during processing, the calling code should not need to care whether the pricing module empties its temp-tables between lines.
 

tamhas

ProgressTalk.com Sponsor
Point being that transactions should be as small as possible. They should span the minimum work needed to ensure database and business logic integrity. I have a hard time thinking of a case where I need to empty a temp-table in the middle of such processing. Emptying a temp-table is something one does after completing some processing or before starting new processing.
 

Stefan

Well-Known Member
Agreed that transactions need to be as small as possible, but the smallest transaction in our case of processing a sales invoice comprises:
  1. sales invoice business logic
    1. call sales invoice history business logic
      1. create sales invoice history header
      2. create sales invoice history lines
      3. update item counters
      4. updating revenue statistics
    2. call sales journal business logic
      1. create sales journal entry header
      2. create sales journal entry lines
        1. call inter-company journal entry business logic
        2. call project transaction business logic
      3. create outstanding item
    3. delete sales invoice header
    4. delete sales invoice lines
I really do not want to care about if any of the business logic / calculation modules need temp-tables which need to be emptied to do their work.
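
The convention the earlier answer points at is to make module-private work tables NO-UNDO, so the module can empty them no matter what transaction its caller has open. A minimal sketch (all names here are illustrative, not from any actual code in this thread):

Code:
/* Illustrative sketch: the journal module owns a NO-UNDO work table,
   so emptying it inside the caller's transaction is allowed. */
DEFINE TEMP-TABLE ttJournalWork NO-UNDO
  FIELD EntryId AS INTEGER.

PROCEDURE processJournalEntry:
  EMPTY TEMP-TABLE ttJournalWork.  /* safe: NO-UNDO */
  /* ... validation and calculation using ttJournalWork ... */
END PROCEDURE.

DO TRANSACTION:
  /* ... invoice history, journal header and lines, etc. ... */
  RUN processJournalEntry.
END.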
 

tamhas

ProgressTalk.com Sponsor
The key item here is the last line. Yes, I have many examples in my old ERP system where lots of stuff needs to happen to multiple things as part of a business transaction. I'm not entirely sure that it all needs to be part of an actual transaction, see below, but for the moment let's assume that it does.

You don't say where the info for all of this comes from, but let's assume that there is some set of temp-tables, perhaps a PDS, that you fill and validate either based on user input or interaction with a data stream or whatever. Or, perhaps this is a previously persisted order that came from the database, was modified, and now needs to be put back. The logic to do that is going to be walking through, either using something like save-changes or explicitly creating buffers and doing buffer copy. OK, where in there do you need to empty any temp-tables? After you are done, when the transaction is over, then you may want to empty the TT to get a fresh start, but why during the transaction itself? I don't see anything in that logic which implies the need to empty a TT.

As for the transaction scope, there is a tendency for ABLers to equate a business transaction with a DB transaction. After all, this seems a natural way to ensure atomic purity. The problem with this assumption in modern systems is that the parts of the business transaction may be on different servers. E.g., the basic transaction could be happening in an OP service, but updating the item information could be in an Inventory service and the AR parts might be in an AR service. There are technologies for distributed transactions, but that is just as bad an idea as stateful AppServers and locked agents. One may avoid inconsistencies in the data, but at the expense of driving the system to its knees. This leads to an architecture where there are async messages, exception messages, and rollback logic instead of relying on DB transactions. Point being, with reliable messaging, it is going to work as planned nearly all of the time, so the exception handling will be very rarely invoked. It is not quite so tidy as wrapping it all in a DB transaction, but the advantages can be substantial.

I am reminded of a customer some years back who used to get a check once a month that would pay off as many as 7000 invoices (many branch stores). Because each invoice touched multiple tables, this meant a single transaction that touched something near 30,000 records ... all locked, of course ... which blew my idea of reasonable levels of -L. I handled this at the time by just bumping -L for that site, but in retrospect I realize that one could have restructured it into a restartable process and kept the transaction to a single invoice. In your logic above, a few flags would tell you whether or not you had completed a given step and one could then make each step its own DB transaction, but the process would be restartable if aborted in the middle.
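
The restartable shape described above might look roughly like this; the table, field, and flag names are all hypothetical:

Code:
/* Hypothetical sketch of a restartable payoff run: one DB transaction
   per invoice, guarded by a completion flag. */
FOR EACH Invoice WHERE NOT Invoice.PaidOff EXCLUSIVE-LOCK:
  /* the FOR EACH ... EXCLUSIVE-LOCK block scopes one transaction
     per iteration instead of one giant transaction for the run */
  /* ... apply the payment to this invoice's related records ... */
  Invoice.PaidOff = TRUE.
END.
/* If the run aborts midway, rerunning it skips the flagged invoices. */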
 

Stefan

Well-Known Member
The key item here is the last line. Yes, I have many examples in my old ERP system where lots of stuff needs to happen to multiple things as part of a business transaction. I'm not entirely sure that it all needs to be part of an actual transaction, see below, but for the moment let's assume that it does.

You don't say where the info for all of this comes from, but let's assume that there is some set of temp-tables, perhaps a PDS, that you fill and validate either based on user input or interaction with a data stream or whatever. Or, perhaps this is a previously persisted order that came from the database, was modified, and now needs to be put back. The logic to do that is going to be walking through, either using something like save-changes or explicitly creating buffers and doing buffer copy. OK, where in there do you need to empty any temp-tables? After you are done, when the transaction is over, then you may want to empty the TT to get a fresh start, but why during the transaction itself? I don't see anything in that logic which implies the need to empty a TT.

Yes, the sales invoices have earlier entered the database via the business logic called via whatever source (user entry, import, web service, sales order processing). Processing then, among the other things listed above, transforms the sales invoice to the sales journal entry temp-tables to be passed to the sales journal entry business logic.

Sales invoice processing determines transaction control (one grouped sales invoice is a transaction), but has no control over what happens in sales journal entry business logic. Sales journal entry business logic has one responsibility and that is to validate a sales journal entry before writing them to the database. Sales journal entry business logic has various backup temp-tables to aid in calculation / provide multi-user checks / do what ever else. These temp-tables are only known to sales journal entry business logic and are cleaned out after every sales journal entry, therefore they must be emptied in sales journal entry. Sales journal entry does not care who is calling it or whether it is in a transaction.

As for the transaction scope, there is a tendency for ABLers to equate a business transaction with a DB transaction. After all, this seems a natural way to ensure atomic purity. The problem with this assumption in modern systems is that the parts of the business transaction may be on different servers. E.g., the basic transaction could be happening in an OP service, but updating the item information could be in an Inventory service and the AR parts might be in an AR service. There are technologies for distributed transactions, but that is just as bad an idea as stateful AppServers and locked agents. One may avoid inconsistencies in the data, but at the expense of driving the system to its knees. This leads to an architecture where there are async messages, exception messages, and rollback logic instead of relying on DB transactions. Point being, with reliable messaging, it is going to work as planned nearly all of the time, so the exception handling will be very rarely invoked. It is not quite so tidy as wrapping it all in a DB transaction, but the advantages can be substantial.

Reliable messaging and functional rollback do allow the business transaction to be chopped up into smaller pieces, but this involves work. If your business transaction is on one server, that work will mostly just introduce bugs; its only benefits are a smaller lock table and being ready for something which may be useful in the future.

I am reminded of a customer some years back who used to get a check once a month that would pay off as many as 7000 invoices (many branch stores). Because each invoice touched multiple tables, this meant a single transaction that touched something near 30,000 records ... all locked, of course ... which blew my idea of reasonable levels of -L. I handled this at the time by just bumping -L for that site, but in retrospect I realize that one could have restructured it into a restartable process and kept the transaction to a single invoice. In your logic above, a few flags would tell you whether or not you had completed a given step and one could then make each step its own DB transaction, but the process would be restartable if aborted in the middle.

Adding flags complicates the process. If only the sales invoice (with lines) and the sales journal entry (with lines) were involved, it would already be a mess:
  1. add flag to sales invoice indicating 'processed'
  2. add flag to sales invoice lines indicating 'processed'
  3. add flag to sales journal header indicating 'incomplete, being processed'
  4. add flag to sales journal lines indicating 'incomplete, being processed'
Sales invoice business logic needs a restart-processing function, and processing needs to be aware that when handling a partially processed sales invoice it must continue with the existing sales journal entry.
Sales journal business logic needs to block users from processing or doing anything else with this sales journal entry.
Reports need to exclude these 'incomplete' journal entries.

I say let the database do its job so that we do not have to compensate with extra lines of code.
 

tamhas

ProgressTalk.com Sponsor
These temp-tables are only known to sales journal entry business logic and are cleaned out after every sales journal entry, therefore they must be emptied in sales journal entry. Sales journal entry does not care who is calling it or whether it is in a transaction.

OK, but the question is, why is there a transaction open when the call is made. That is asking for transaction creep. I understand that there is a unit of work involving the sales journal which makes a sensible transaction, but emptying the TT is not a part of that unit of work. It is post transaction clean up.

Some of the vocabulary you are using fits nicely with OO. In OO terms, it is very questionable that one would want a transaction to span an object boundary. A business transaction, yes, but not a database transaction.

Yes, bunging everything into one huge DB transaction is easier than parceling it out into pieces and providing restart and rollback logic, but easier is not the same thing as better. It may work today, when everything is on one machine, but one is facing a rewrite if there is more than one machine in the future. Or, one may be facing a rewrite if other aspects change, e.g., suddenly finding a customer who was getting checks that paid off 7000 invoices. I wouldn't have thought of that as a design criterion before experiencing it.

And, easier is relative. Yes, banging out the first version might be faster if one just lumps everything into one big DB transaction, but it also means that if it fails, one has no idea where or why it failed. To know where and why, one has to instrument it ... at which point one has done a similar amount of work to cutting it up into pieces and providing tracking. If it fails, one then knows which particular piece failed and one can address that piece directly.

Moreover, subdividing into small units of work makes those pieces reusable. Sales journal entries are not only created from invoices; they can be created from AR transactions or from other subsystems. Package that piece as a small unit of work and it can be used everywhere without concern about transaction scope, because the transaction is within the object.

Note that, if one follows an OERA model, the actual DB transaction occurs only in the Data Access layer, but the business transaction occurs in the Business Logic layer. The BL knows nothing about where the DA is going to persist the data or in what form. Therefore, it can't have a DB transaction open across all the updates. Yes, that means that it must be prepared to handle failures gracefully, but isn't that a good attribute of an application anyway? Wouldn't you rather have an application come back and tell you about a problem and provide a mechanism for fixing it than to simply have a giant transaction fail and roll back?
 

Stefan

Well-Known Member
First of all thanks for more food for thought.

I was looking for an example of temp-table clean-up where it really is not important (which was the original eyebrow-raiser) and thought a business entity calling another business entity would be a good clean example. It's turned out not to be, but it is very interesting nonetheless. I was thinking of another use case in which the validation of each field can call a function to set filters (communication between functional logic and the system layer / framework logic); these filters are reset per business entity line, and the business entity lines are part of the business entity header. But I think your reply would be that the database transaction comprises one sales invoice line, so the temp-table clean-up can be done nicely after that.

All functional errors and database errors that are trapped (record created / modified / deleted by another user) are put on an error queue (persistent procedure with no-undo temp-table to store errors), fatal errors will result in rollback, but the cause of the error is known since the error queue survives rollback. So the user knows what went wrong where.
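
That "survives rollback" behaviour comes from NO-UNDO temp-tables not being rolled back with the transaction. A minimal illustration, with made-up names:

Code:
/* Sketch: rows in a NO-UNDO temp-table survive an UNDO, which is
   what lets an error queue outlive a rollback. */
DEFINE TEMP-TABLE ttError NO-UNDO
  FIELD ErrText AS CHARACTER.

DO TRANSACTION:
  CREATE ttError.
  ttError.ErrText = "record modified by another user".
  UNDO, LEAVE.  /* rolls back the transaction, but not ttError */
END.

FOR EACH ttError:
  MESSAGE ttError.ErrText VIEW-AS ALERT-BOX.
END.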

We do not follow OERA as such, our business logic per entity contains validation and data access - in theory it may be wonderful to be able to plug in another data access layer, in practice, we have found no use for it yet.
 

tamhas

ProgressTalk.com Sponsor
It isn't that I have a religious objection to emptying temp-tables in transactions ... it frankly had never occurred to me before. But, having had the question come up, it seems to me that it is the sort of action which shouldn't be part of a transaction, and I couldn't and can't think of a case where it should be. Admittedly, I'm also not sure if it should be a problem, and, were I the one with the issue, I would be asking Progress tech support if there is a good reason it works that way.

Which said, I do think it is true that we as ABL programmers have gotten lazy about wrapping big balls of logic in a single transaction. 99.99% of the time it works fine because everything commits anyway. Periodically, one might notice side effects like large lock table requirements or growing BI files, but those tend to be rare rather than common experiences. However, distributed deployment and cleanly separated data access layers tend to push us in the direction of small transactions, with the logic of what to do in the event of failure becoming a business process. Undoing a big ball of transactions may seem like the right thing because it ensures atomic relational integrity, but it may not be the optimum thing from a business process perspective.

It is perhaps unfortunate that so much emphasis is often placed on the advantage of a DA layer in terms of changing data sources or locations ... which is a big plus, especially in distributed deployments ... because the key reason for a separate DA layer is maintainability.
 

kolonuk

Member
Is it just me, or could you just do
Code:
FOR EACH ttWork:
  DELETE ttWork.
END.

What's the difference between EMPTY TEMP-TABLE and just deleting the records?
 

RealHeavyDude

Well-Known Member
There is a huge difference in performance when the TEMP-TABLE contains a lot of records. Instead of iterating through the records in the TEMP-TABLE ( FOR EACH ) the data is removed from memory in one fast operation.
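
A quick, unscientific way to compare the two yourself is with ETIME; timings will vary by machine and version, so no numbers are claimed here:

Code:
/* Rough timing sketch: FOR EACH / DELETE vs EMPTY TEMP-TABLE. */
DEFINE TEMP-TABLE ttWork NO-UNDO
  FIELD ObjID AS INTEGER.
DEFINE VARIABLE i AS INTEGER NO-UNDO.

DO i = 1 TO 100000:
  CREATE ttWork.
  ttWork.ObjID = i.
END.
i = ETIME(TRUE).  /* reset the millisecond timer */
FOR EACH ttWork:
  DELETE ttWork.
END.
MESSAGE "FOR EACH / DELETE:" ETIME "ms".

DO i = 1 TO 100000:
  CREATE ttWork.
  ttWork.ObjID = i.
END.
i = ETIME(TRUE).
EMPTY TEMP-TABLE ttWork.
MESSAGE "EMPTY TEMP-TABLE:" ETIME "ms".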

Heavy Regards, RealHeavyDude.
 

philipped

New Member
Nice topic.

I was hired 5 years ago as a student in a Progress 9 environment. I had to learn Progress almost by myself. All of our custom programs use the FOR EACH / DELETE method, not the EMPTY TEMP-TABLE method, when temp-tables are used.

I think I will take some time to check those programs.
 

Cringer

ProgressTalk.com Moderator
Staff member
AFAIK EMPTY TEMP-TABLE was introduced in a relatively recent version of Progress, which would explain why you're finding that.
 

philipped

New Member
I think everything was developed under v9, where the statement is supported. But when consultants bring in code you can't be sure which version was used to develop the program in the first place...
 