Question [SOLVED] Database .lg file - how and when to truncate.

Hello,

I would like to ask about the best way to truncate the .lg file.
My Progress database is on version 11.6, and I stumbled across threads about the 2 GB size limit, where truncating could fail if the file exceeds that limit.

Since I am on a version before 11.7, there is only the OFFLINE option to do it.
When the database is cleanly shut down the file keeps whatever size it had, so is that the best time to truncate it, with this command?
$ > prolog *.lg


Workaround: OFFLINE
a. Use OS utilities to archive the current database lg file for later reference
b. Instead of running PROLOG, delete the database lg file

The Progress article does not explicitly say I should shut down the database, but I assume that's what OFFLINE means?
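If so, I am guessing the OFFLINE workaround would amount to something like this (just my assumption - the database name, archive path, and shutdown/startup commands are placeholders for whatever our scripts actually use):

Code:
# my guess at the OFFLINE workaround - db name and archive path are placeholders
proshut package -by                     # cleanly shut the database down
mv package.lg /archive/package.lg.old   # archive the old log; it is no longer in the db directory
proserve package                        # restart; presumably a fresh package.lg gets created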

Our system was configured by a third party, and they said nothing about this.

We only run "online" backups:
probkup online <database> backups/<database>

The problem is we only shut the database down for DR scenarios, and it has been running like this for months.

This is the size of our transaction log (package is the name of the database):
[Server]$ ls -l *.lg
-rw-r--r--. 1 root root 1332097162 Feb 16 14:25 package.lg


Thanks,
Richard
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Since I am on a version before 11.7, there is only the OFFLINE option to do it.
I suggest you re-read the article. That wasn't my takeaway.

It says:
Upgrade to OpenEdge 10.2B and later, OpenEdge Service Packs 10.1C04 and 10.2A01 where database lg files exceeding 2GB can now be truncated online or offline with the PROLOG utility which has been made aware of 2 GB database lg files.
You can truncate a large db log online or offline in 10.2B or later; you are on 11.6.

There is however a regression in OpenEdge 11.7 with truncating 2GB log files with PROLOG, the workaround is listed in this Article 000029201 below.
There is a problem with doing so in 11.7, which doesn't affect you (yet).

This is the size of our transaction log (package is the name of the database):
[Server]$ ls -l *.lg
-rw-r--r--. 1 root root 1332097162 Feb 16 14:25 package.lg
That's 1.24 GB. Working with a log file that large is challenging, e.g. with a text editor or command line text tools. I suggest you do the following:
  • Create a log archive location, if you don't already have one.
  • Copy the current log to that location.
  • Truncate the log with prolog; you should be able to do it online. If that doesn't work for some reason, fall back to the offline workaround quoted from the article above.
  • Split your archived log into separate files of a more manageable size, say 100-200 MB each (a quick sketch follows this list).
  • Create an automated process for archiving and truncating the database log so this problem doesn't happen in future.
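For the splitting step, the standard split utility is plenty; a rough sketch (the archived file name, output prefix, and piece size are just examples):

Code:
# example only: carve an archived log into ~100 MB pieces for easier viewing/searching
split -b 100M /logs/archive/package.lg.old /logs/archive/package.lg.old.part_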
Note that if you upgrade to 11.7 or later, you will have the benefit of not being on a retired release of OpenEdge, and also having database log archiving built into the platform, eliminating the need for a scripted process and the worry of truncating very large log files.
 

TomBascom

Curmudgeon
Situations vary but my baseline suggestion is to truncate the log file (online) weekly. There is rarely any reason to look back beyond last week and multi-year log files are a major pain in the neck to deal with. If you feel compelled to keep archived logs you can always make a copy just prior to truncation.

Run something like this once a week:

Code:
#!/bin/sh

DB=package

LGARCDIR=/logs
DT=`date +%W`

# change directory to where the database lives
#

cd /db

cp ${DB}.lg ${LGARCDIR}/${DB}.lg.${DT}
# make sure the .lic file is writable so it can be emptied below
chmod 666 ${DB}.lic

# the .lic file grows slowly but it should be kept in sync with the .lg file
#

cp ${DB}.lic ${LGARCDIR}/${DB}.lic.${DT}
> ${DB}.lic

prolog ${DB} -online

The %W substitutes the week number - that way you always have a rolling 1 year history. I've never found a reason to go back more than a year.
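If you want it to run unattended, a weekly crontab entry along these lines does the job (the script path, output path, and schedule are just examples):

Code:
# run the archive-and-truncate script above every Sunday at 02:00 (paths are placeholders)
0 2 * * 0 /db/scripts/trim_lg.sh >> /logs/trim_lg.out 2>&1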

[PedanticMode=On] these are not "transaction logs". These are message logs. They contain information about db startup, shutdowns, crashes, connections, logins, logouts, errors, warnings and so forth.

Transaction logs are the bi file and the ai files. The bi file is the "undo log", ai files are the "redo logs".
[PedanticMode=Off]
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
[PedanticMode=On] these are not "transaction logs". These are message logs. They contain information about db startup, shutdowns, crashes, connections, logins, logouts, errors, warnings and so forth.

Transaction logs are the bi file and the ai files. The bi file is the "undo log", ai files are the "redo logs".
[PedanticMode=Off]
[SuperPedanticMode=On]
"Transaction log" in an OpenEdge database refers to something other than BI and AI. Also, BI is the undo/redo log; it's used for both.
From Database Essentials:
Transaction log area
The transaction log area is required if two-phase commit is used. This area contains one or more fixed-length extents with the .tn filename extension; variable-length extents are not allowed.
[SuperPedanticMode=Off]

Ducking for cover... :p
 
Hi All,

this all very useful, thank you.
So the .lg file is a simple text file unrelated to the database function? That's the best news yet.
And I don't have to shut the database down. The fact that the Progress article says "offline" is misleading in this case.
Thanks Rob.

I have a VM test box, so I will try this out.
prolog ${DB} -online

Thank you Tom for the ready-made script; I would probably have written one myself, but I appreciate it. It helps me understand the concept.

And another thing I learned today :cool:


R.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
So the .lg file is a simple text file unrelated to the database function?
Yes. If it doesn't exist when the database starts, it will be created. If it is replaced with an empty log while the database is up, it will start writing to that new file. As the KB article said, just don't move the current log while the database is up.
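If you want to see that behaviour for yourself, a quick test against a sandbox database illustrates it (the database name and paths are placeholders):

Code:
# "sandbox" and the archive path are placeholders
proserve sandbox               # if sandbox.lg does not exist, the broker creates it at startup
cp sandbox.lg /logs/archive/   # copying the live log is fine
prolog sandbox -online         # truncating it online is fine; the broker keeps writing to it
# mv sandbox.lg /logs/archive/ # moving the live log out from under a running database is what to avoid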
 

lkeller

New Member
I know this is an old post, but I wanted to ask: is it possible to keep X number of archived log files via conmgr.properties, or is the only way to manage the archived log files through a script like the one Tom Bascom provided earlier in this thread?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
I know this is an old post, but I wanted to ask: is it possible to keep X number of archived log files via conmgr.properties, or is the only way to manage the archived log files through a script like the one Tom Bascom provided earlier in this thread?
That depends on your OpenEdge version.

If you are on 11.7+ then database log file archiving is built into the platform. Look at the -lg* primary broker configuration parameters. If you are on Windows then those parameters can be added to your database configuration in conmgr.properties. (If you are not on Windows then I don't see a need to use conmgr.properties.)
https://docs.progress.com/bundle/openedge-new-info-1173/page/Log-file-archive-enable-lgArchiveEnable.html

If you are on an older release then write a script. And then upgrade. ;)
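As a rough illustration of the 11.7+ route (only -lgArchiveEnable comes from the page linked above; the exact invocation and the companion -lg* parameters for the archive directory and size threshold are assumptions on my part, so verify them against the documentation):

Code:
# sketch only: starting a broker with built-in .lg archiving enabled (11.7+)
# -lgArchiveEnable is named on the docs page linked above; check the documentation for
# its value syntax and for the companion -lg* parameters (archive directory, size threshold)
proserve package -lgArchiveEnable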
 