If you are thinking that grouping deletions into a block and committing the set of deletions as one large transaction, rather than 1000 small transactions, will perform better, then I have bad news for you. That technique works well for CREATE and UPDATE, but it does not improve DELETE performance. Or at least it didn't the last few dozen times that I tested it.
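For reference, the "one big transaction" pattern that I am talking about looks something like this. This is just a minimal sketch, reusing the hypothetical order table from the purge example below and skipping the orderLine children for brevity; the batch size of 1000 is arbitrary:
Code:
define variable numDeleted as integer no-undo.

repeat:
  numDeleted = 0.
  /* group up to 1000 deletions into a single transaction */
  do transaction:
    for each order exclusive-lock
        where order.orderNum > 0 and order.orderNum < 1000000:
      delete order.
      numDeleted = numDeleted + 1.
      if numDeleted >= 1000 then leave.  /* end this batch */
    end.
  end.
  /* a short batch means the bracket is empty */
  if numDeleted < 1000 then leave.
end.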
Having said that, my default approach to deleting large numbers of records goes something like this:
1) Identify a starting point and an ending point for the deletion. Ensure that there is an index to efficiently support querying the target set of data (see the sanity-check sketch just after this list).
2) Verify that this process can be done piecemeal. In other words, it is ok to do it in 2 or 3 or 400 passes. It does not need to be "all or nothing" (like a single giant db transaction), and it is ok to restart the purge later if something goes wrong or you have to abort it because it did not finish in the expected time.
3) If the business says "no restarts" you will probably need to get an emergency budget requisition approved to upgrade your hardware in order to meet their demands.
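Step 1 is cheap to verify before you start. A quick sanity check, again using the hypothetical order table and orderNum bracket from the example below (compiling with the XREF option will tell you which index the WHERE clause selects):
Code:
define variable numRows as integer no-undo.

/* count the purge bracket without taking any locks */
for each order fields( orderNum ) no-lock
    where order.orderNum > 0 and order.orderNum < 1000000:
  numRows = numRows + 1.
end.
display numRows label "rows to purge".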
Now it is (relatively) easy:
Code:
/* create the control file; remove it to stop the purge gracefully */
output to value( "purge.flg" ).
output close.

for each order exclusive-lock
    where order.orderNum > 0 and order.orderNum < 1000000:

  /* delete the child records first, then the parent */
  for each orderLine exclusive-lock
      where orderLine.orderNum = order.orderNum:
    delete orderLine.
  end.
  delete order.

  /* each iteration of the outer for each is one small transaction;
     quit cleanly if someone has removed the control file */
  file-info:file-name = "purge.flg".
  if file-info:full-pathname = ? then quit.
end.

message "finished!".
pause.
You can run this until it completes. Stop it easily by removing "purge.flg" and restart it any time you'd like.
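If you need it to run unattended, a batch-mode client works nicely. Something like "_progres mydb -b -p purge.p > purge.log 2>&1 &" (the database and procedure names are just placeholders for whatever yours are called) starts it in the background, and "rm purge.flg" from another shell shuts it down gracefully.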
But that may not be a good fit for your unknown requirements.