Which tables are in a storage area?

Cringer

ProgressTalk.com Moderator
Staff member
Just wondering what I need to query in order to see which tables reside in each storage area? We've got one storage area that is growing very quickly, and I want to determine which tables are in it so we can move them to a new area.
 

RealHeavyDude

Well-Known Member
For one, you could run a database analysis, which produces such a report.

That would be:
proutil path_to_your_database -C dbanalys > dbanalys.log
Note that you can run this offline or online. When run online, the report will contain a warning that it was run online and that the statistics are only estimates. Furthermore, you can use the report to see how much space each table is using, and if you compare it with previous reports you can identify the growth of each table over a given period of time.

Does that make sense to you?

Heavy Regards, RealHeavyDude.
 

RealHeavyDude

Well-Known Member
If you are looking for a procedure that parses that file into temp-tables, I might be able to help you - but I would rather not post it here.

Heavy Regards, RealHeavyDude.
 

Cringer

ProgressTalk.com Moderator
Staff member
Thanks RHD :) I'll have a look at that report and see what I can do.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
I think you're better off querying _Area, _StorageObject, and _File to build a list of tables in areas; I find dbanalys output a bit cumbersome for that. I'll dig out some code if you're interested.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Tables in areas:
(credit where due: based on code from George Potemkin, via Gus Björklund)

Code:
FOR EACH _Area NO-LOCK
  WHERE _Area._Area-type EQ 6,                            /* data area */
    EACH _StorageObject NO-LOCK
    WHERE _StorageObject._Area-number EQ _Area._Area-number
    AND   _StorageObject._Object-type EQ 1:               /* it's a table */
  FIND FIRST _File NO-LOCK
    WHERE _File._File-Number EQ _StorageObject._Object-number
    AND   _File._Tbl-type    EQ "T" NO-ERROR.             /* appl. table */
  IF AVAILABLE( _File ) THEN
    DISPLAY _Area._Area-name COLUMN-LABEL "Area name"
            _File._File-Name COLUMN-LABEL "Table name".
END.

Sample output:

Code:
Area name                        Table name
-------------------------------- -------------------
Info Area                        item
Info Area                        monthly
Info Area                        salesrep
Info Area                        shipping
Info Area                        state
Info Area                        syscontrol
Customer/Order Area              agedar
Customer/Order Area              customer
Customer/Order Area              order
Customer/Order Area              order-line
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
After re-reading your post, another thing occurs to me. If you have an area that is growing very quickly, take a look at the structure file (ensuring first that it's up to date). If the records-per-block (RPB) setting of the area in question is too low, you'll chew through blocks more quickly than you should, as they'll be partly or even mostly empty. Worst-case scenario: put a history/transaction table in an index area with RPB=1 in an 8K DB and every record will consume an 8K block!
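To make the waste concrete, here is a back-of-the-envelope sketch of that worst case (the record count and mean record size are invented for illustration):

```python
# Hypothetical illustration of wasted space when RPB is far too low.
# With RPB=1, each record occupies an entire block regardless of its size.
BLOCK_SIZE = 8192          # 8K database block
RPB = 1                    # records per block for the (misconfigured) area
MEAN_RECORD_SIZE = 60      # assumed mean record size, in bytes
NUM_RECORDS = 1_000_000

blocks_used = NUM_RECORDS // RPB            # one record per block
space_used = blocks_used * BLOCK_SIZE       # bytes actually consumed on disk
space_needed = NUM_RECORDS * MEAN_RECORD_SIZE  # bytes of real record data

print(space_used)    # 8192000000 bytes (~7.6 GiB) of blocks...
print(space_needed)  # 60000000 bytes (~57 MiB) of actual data
```

So roughly 99% of the area is empty space in this scenario.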

As RHD said, this is where regular dbanalys/tabanalys outputs are very useful for noting growth over time.
 

Cringer

ProgressTalk.com Moderator
Staff member
Thanks for all the input. George Potemkin's code looks like the sort of thing I'm after initially.
 

Cringer

ProgressTalk.com Moderator
Staff member
How easy would it be to find out how many blocks each table is using in that area?
 

RealHeavyDude

Well-Known Member
AFAIK, from the VSTs you only get the blocks used within the storage area as a whole, not per table. For that you need to process the output of dbanalys or tabanalys from the proutil command. Unfortunately that file is not designed to be processed automatically, but it is possible; and if you have OpenEdge Management (formerly known as Fathom), it includes a procedure that parses that file.

Heavy Regards, RealHeavyDude.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Given that the concern is finding the fast-growing table(s) in the area, compare the record counts from two successive dbanalys/tabanalys runs. Then you can focus on a particular table, or perhaps a few. Another way to approach this problem is to zero your stats and then use the _TableStat VST to look at record creates. The fast growers will bubble up to the top over time. This assumes you have first set the -tablerangesize DB startup parameter to a value large enough to show stats for all tables; if it is not set, you only get stats for the first 50 tables.

Once you know which table it is, you still can't get its exact size on disk from tabanalys, but you can roughly estimate it from the table's record count, mean record size, and the area's RPB. The main point of the exercise, though, is to identify which table it is so you can move it to its own area and manage it better. At that point you can use dbanalys to get its size in blocks.
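That rough estimate can be sketched as follows (the figures here are invented; the real inputs would come from your tabanalys report and structure file):

```python
import math

# Rough on-disk size estimate for a table: assume the table fills blocks
# at the area's RPB. This ignores fragmentation, block overhead, and how
# full each block actually is, so treat it as an order-of-magnitude guess.
record_count = 2_500_000   # from tabanalys (invented here)
rpb = 64                   # records per block for the area (from the .st file)
block_size = 8192          # 8K DB

blocks = math.ceil(record_count / rpb)
approx_bytes = blocks * block_size
print(blocks, approx_bytes)
```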
 

Cringer

ProgressTalk.com Moderator
Staff member
Thanks Rob - good idea to look at record creates. That might be the easiest way to get a handle on things :)
 

Cringer

ProgressTalk.com Moderator
Staff member
Thanks rzr - I already have an application that monitors reads and so on - I hadn't thought to use it that way.

Sadly our DBs were restarted this morning and the -tablerangesize parameter was removed so now I'm going to have to wait until we can get that back on there!! Grr.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
I find the -tablerangesize and -indexrangesize startup params to be very useful in locating application hotspots and indexing mistakes, so I put them on any production database I touch if they're not already there. The shared memory cost of doing so is pretty small.

It's a good idea to leave a little room for growth, i.e. set them higher than the actual table/index counts, so you don't have to keep updating them as you make schema changes (if you add tables or indexes often). Also remember to set the index count to at least (application index count + 7), as the first seven indexes in the database belong to the meta-schema.

In the meantime, you could still check the stats and see if any numbers for creates are very high. Without -tablerangesize or -indexrangesize specified you still get stats for your first 50 tables and indexes, so you might get lucky.

And back to RHD's suggestion: until you get those startup params back on and restart, your best bet is to run a couple of proutil tabanalys reports, say a day or two apart if you think that will reflect meaningful growth, and crunch the numbers in Excel to find which tables are growing. Since you know which area it is, you don't have to run the report for the whole DB; you can run it for that one area:
Code:
proutil dbname -C tabanalys area "area name" > output_file.txt
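If you'd rather script the comparison than use Excel, here is a minimal sketch of the number-crunching step. It assumes you have already extracted table record counts from each report (the parsing itself is left out, and the tables and counts below are made up):

```python
# Record counts per table from two tabanalys runs a day or two apart.
# In practice these would be parsed out of the two report files.
run1 = {"customer": 100_000, "order": 250_000, "history": 1_200_000}
run2 = {"customer": 100_500, "order": 252_000, "history": 1_950_000}

# Growth per table; tables absent from the first run count from zero.
growth = {t: run2[t] - run1.get(t, 0) for t in run2}

# Print tables ranked by growth, fastest growers first.
for table, delta in sorted(growth.items(), key=lambda kv: -kv[1]):
    print(table, delta)
```

Here "history" would float to the top with 750,000 new records, making it the obvious candidate to move to its own area.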
 

Cringer

ProgressTalk.com Moderator
Staff member
Cheers Rob... I might just stick the params back on the startup script so that they're back the next time we restart ;)
 

TomBascom

Curmudgeon
I am hesitant to say anything that might discourage anyone from using these parameters -- they are extremely useful and, like Rob, I enable them everywhere.

Nonetheless, there is a memory consideration to take into account: with the introduction of _UserTableStat and _UserIndexStat (10.1C?), memory use becomes ( -n * ( -tablerangesize + -indexrangesize ) ) * X, where X is an estimate of how much a single entry in the *Stat* tables takes (probably around a hundred bytes). So on a 1,000-user system with 1,000 or so tables and indexes, you are probably looking at a hundred megabytes or so. Prior to _User*Stat the memory cost was indeed "small", but since then it is a bit more notable.
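Plugging Tom's rough figures into that formula (X ≈ 100 bytes is his ballpark estimate, not a documented value):

```python
# Back-of-the-envelope memory cost of per-user table/index stats.
n_users = 1000           # -n
table_range = 500        # -tablerangesize  (500 + 500 = ~1,000 objects)
index_range = 500        # -indexrangesize
bytes_per_entry = 100    # X: rough per-entry cost, per Tom's estimate

mem_bytes = n_users * (table_range + index_range) * bytes_per_entry
print(mem_bytes / 1024 ** 2)  # roughly 95 MB
```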
 