How to separate the gst_audit table into a new database

mhtan88

Member
We have a problem: the gst_audit table keeps growing and the database has become very big. We want to keep our database safe from this unpredictable growth.
Can anyone tell me how to separate our gst_audit table into a new, blank database?
 

ron

Member
1. Create new GST DB.
2. Dump gst_audit table from existing DB.
3. Load the table into "GST" DB.
4. Re-index the table.
5. Delete the table from the existing DB.

You also have to create a new schema for the new DB.
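A command-line sketch of those steps, assuming OpenEdge's binary dump/load utilities; the DB names and paths below are hypothetical, so adjust them for your environment:

```shell
# 1. Create the new DB from the empty template:
prodb /db/icfdbgst empty

# 2. Binary dump of gst_audit from the existing DB
#    (run offline, or against a served DB):
proutil /db/icfdb -C dump gst_audit /tmp/dump

# 3. Load the resulting .bd file into the new DB
#    (the table's schema definition must already exist there):
proutil /db/icfdbgst -C load /tmp/dump/gst_audit.bd

# 4. Rebuild the indexes (the new DB holds only this table,
#    so rebuilding all of them is fine):
proutil /db/icfdbgst -C idxbuild all

# 5. Delete the table from the original DB via the Data Dictionary.
```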

However -- are you sure you want to do this? It means you have two DBs to administer and if you're using After Imaging (which, of course, you should) -- you have an issue with synchronisation.

Are you using Progress 9 or 10? If you are, then why not just move the table to a separate Storage Area?

Ron.
 

mhtan88

Member
Mm... because I found that gst_audit might corrupt the database. Therefore I want to separate it out, so that if corruption really happens I can re-create a blank gst_audit table without icfdb being affected.

I'm using OpenEdge 10B03 and Dynamics 2.1A03 (with 9.1D09).
 

ron

Member
I understand what you're saying -- but, personally, I'd look for other ways to solve the problem.

What platform are you using? AIX? Solaris? Many platforms let you turn on a "Large Files" option. If you enable the same in Progress then you "can't" fill up the database. The worst that'll happen is the last DB extent (variable) will keep getting bigger. But, so what?
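Enabling that on the Progress side is a one-off proutil command; a sketch, with a hypothetical DB path:

```shell
# Enable large (>2 GB) file support for the database's extents.
# The DB must be offline, the OS/filesystem must support large
# files, and the change cannot be undone afterwards.
proutil /db/icfdb -C EnableLargeFiles
```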

Like I said, I'm not keen on the idea of having a second DB. I have a feeling that you'll just create more work/problems for yourself.

However, if you decide to go ahead and create another DB, let me know if you need any help with it.

Ron.
 

mhtan88

Member
Thanks.

Currently, what I have set up is this: I separated out the schema area, meaning /dev/sdb stores the non-GST areas and /dev/sdc stores the GST area, because as far as I can see icfdb's transaction volume is very high and needs higher speed.

I'm using Red Hat 8.0 and also Red Hat Advanced Server 4.0.
 

ron

Member
Sorry, I'm not sure what you're asking me.

As I said before, I think you're better-off putting your 'problem' table in a separate Area, rather than in a separate DB.

If you're planning on putting separate DBs on separate discs -- I see no reason why that will help if you want better performance. Is performance a problem? If it is and you want some advice from the forum -- you have to provide every small detail about your hardware and how you have Progress configured.

BTW, I have no idea whether Linux has a "Large Files" option. Maybe someone else can help?

Regards,
Ron.
 

Daved0331

New Member
I use a custom Audit Trail package for Windows-based Progress databases. The package was recently enhanced to also work with Unix-based Progress databases. It allows you to track changes to any Progress table and field, and provides the table and field name, audit date/time, terminal ID/PC name, network user ID, application user ID, program stack, old/new values and more.

Let me know if you want more info...
 

mhtan88

Member
I have done everything you said, but when I start my icfdb login it prompts me that the gst_audit table was not found, even though I have already set up another database connection below the <service> icfdb. Maybe the source code compiled by Progress is using the icfdb.gst_audit prefix? Even so, I tried changing the source code to point to icfdbgst.gst_audit, but it still doesn't work.
 

TomBascom

Curmudgeon
You might want to try using the -ld startup parameter to give your new db the same logical name as the db that the table used to be in.
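For illustration, -ld on the client startup line presents the new physical DB under the old logical name, so compiled r-code that references icfdb.gst_audit can still resolve it (paths here are hypothetical):

```shell
# Connect to the new physical DB, but give it the logical
# name "icfdb" that the r-code was compiled against:
pro /db/icfdbgst -ld icfdb
```

Note this only works if no other connected database is already using that logical name in the same session.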

But, really, moving the table out of the original database was probably a very bad idea. You should have taken Ron's advice and put it into a storage area rather than a distinct database.
 

mhtan88

Member
Hi TomBascom, I think that can't work, because icfdb is already connected using the logical name "icfdb", and the OpenEdge r-code (.r files) was already compiled with icfdb as the logical name. Therefore I tried changing the source code to point to icfdbgst instead, and it works, but some of the static objects need to be recompiled, and I also need to remove the records from the ryc_relationship table that contain gstad. I'm still testing whether there are any side effects.
 

mhtan88

Member
OK. Actually, in my environment I did use different disks: the other areas are on /dev/sda1 and the GST area is stored on /dev/sdb1. But our concern is that if the gst_audit table gets too huge, it will be easy to corrupt and slow. Mmm... headache.
 

TomBascom

Curmudgeon
I don't see how putting the table in a separate database addresses your concerns in any way.

Nor do I understand why you think that there is some sort of risk of "corruption" (whatever that means -- the term "corruption" is very vague and means different things to different people).

As for slowness... If something gets slower as it gets bigger that means that it does not have an appropriate index to support the data access that you are requesting (or it means that you are not doing a very good job of requesting the right data). Putting it in a separate database isn't going to improve that situation.

/dev/sda1 and /dev/sdb1 are not storage areas. They are disks. A storage area is a Progress database construct. You create them by defining them in the .st file and assigning tables and indexes to them. You may need to do a dump & load or a tablemove if the table already exists.
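A sketch of that, with a hypothetical area name, area number, and extent path:

```shell
# add.st (structure file) contents -- one new data area with a
# variable-length extent on whichever disk you choose:
#
#   d "GST_Audit_Area":8,64 /db/extents/icfdb_gst.d1
#
# Add the area to the existing database:
prostrct add /db/icfdb add.st

# Move the table into the new area; append a second area name
# if you also want its indexes moved:
proutil /db/icfdb -C tablemove gst_audit "GST_Audit_Area"
```

Tablemove can run against a live database, but it is transaction-heavy, so schedule it for a quiet period.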
 

mhtan88

Member
I have managed to separate the table out of icfdb into another database,
and needed to modify some code. Basically, everything was tested in both 2-tier and N-tier environments, using WebClient.
 