DB performance issue with table

Shanmugam T

New Member
Hello All,

Users are facing performance issues during month-end with a Progress database whose storage areas are all Type I.
So I am planning maintenance for the tr_hist table (a binary dump and load followed by an index rebuild).

Table tr_hist is in storage area 25 and its indexes are in area 26.

Kindly advise: is it good to migrate only these two areas to Type II storage areas during the maintenance period?

Will it improve the overall performance of the database? The rest of the tables will remain in Type I storage areas.

Are there any other parameters I should consider before moving to Type II storage areas?

Also, please advise what records per block I should choose for tr_hist (and for other tables, if any).

Thanks in advance.


Background information:
-----------------------

Progress Version: 10.1B03
OS: IBM AIX 6.1

DB size: 100GB
DB block size: 8192

Some DB Parameters:

-B 50000
-L 100000
-c 350
-directio


Table analysis report:

RECORD BLOCK SUMMARY FOR AREA "TRHIST" : 25
-------------------------------------------------------
                                  -Record Size (B)-    ---Fragments---   Scatter
Table          Records     Size    Min   Max   Mean      Count   Factor   Factor
PUB.tr_hist   49931806    14.7G    223   405    316   50160485      1.0      4.7

DB Structure file(Few lines):
-------------------------------------

d "TRHIST":25,64;1 /db1/db_25.d1 f 819200
d "TRHIST":25,64;1 /db1/db_25.d2 f 819200
d "TRHIST":25,64;1 /db1/db_25.d3 f 819200
d "TRHIST":25,64;1 /db1/db_25.d4 f 819200

d "TRHIST_IDX":26,32;1 /db/db_26.d1 f 400000
d "TRHIST_IDX":26,32;1 /db/db_26.d2 f 400000
d "TRHIST_IDX":26,32;1 /db/db_26.d3 f 400000
d "TRHIST_IDX":26,32;1 /db/db_26.d4 f 400000
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Hi Shanmugam,

You are on a very old version of OpenEdge. Version 10.1B shipped 8 years ago and is retired. Upgrading to at least 10.2B08 would provide you with performance benefits, bug fixes, and tuning options that you don't have today. It is also hugely faster at index rebuild than 10.1B. This change would also allow you to run the same r-code you use today. You could also consider upgrading to a modern release like 11.4, but that would be a more involved transition. Certainly possible and beneficial; there are just more things to consider and test when making that large a jump.

Are you on 32-bit or 64-bit OpenEdge?

Based on the sizes it appears you have not shown us all the extents in these areas. So does the lack of variable extents mean they are in the structure but not shown here, or do you not have any?

Is it good to migrate only these two areas to Type 2 storage area during maintenance period?
That depends entirely on the length of your maintenance period. If it's a couple of hours, you may not be able to dump and load the entire database. If it's 12 hours, you definitely can, even on old hardware and old OpenEdge. If you can't do it all at once, dump and load more tables in your next maintenance period. Repeat until all tables, indexes, and LOBs (if any) are in Type II storage. You should not have any application objects stored in Type I areas (and therefore, you shouldn't have any in the schema area).

You seem to have decided that tr_hist is your performance bottleneck. Do you have run-time statistics to tell you that this is where your application does most of its logical I/O? If you can't dump and load your entire database into Type II storage areas all at once, focus first on the tables and indexes you access the most. Set the -tablerangesize and -indexrangesize startup parameters and then query _TableStat and _IndexStat to find out.

Your lock table is much larger than the default. Whether this is an appropriate value depends on your application and your user count. A number that large may indicate that you have had lock table overflows in the past and that it was increased "temporarily" until the code could be fixed. Developers who write code like that often tend to be responsible for code that does far too much logical I/O and results in performance woes like the ones you describe.

Your buffer pool is very small (400 MB). It is a very small fraction of the size of your database. The smaller your buffer pool, the lower your ratio of logical to physical I/O. In other words, your application is probably spending a lot of time waiting on disk I/O. It is possible you could alleviate some of your performance issues by increasing it substantially, provided you are on 64-bit Enterprise RDBMS and have a decent amount of available RAM. Can you provide some hardware specifications (CPU/RAM/storage)?

You are using -directio. That might be good and it might be bad. Do you know why you are using it? Do you have test results with and without it that show that it is beneficial on your system with your application?

One of the keys to successful and efficient dump and load is testing and documentation. Practice on a similar system with a copy of your production database and the same OpenEdge version. Document your process, script it, log your output, document your results, and be sure to grep your logs for success messages (in the correct quantity), error messages, and lack of success messages. Repeat the testing until you get dump and load performance you are happy with. Be sure to run table analysis before and after on the source and target to ensure all record counts match. Don't use Data Dictionary ASCII dump/load or proutil bulkoad. Use binary dump and load with a separate proutil idxbuild afterwards.
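As a sketch of the kind of log verification described above (the log contents, message strings, and expected counts below are placeholders for illustration; substitute the actual success and error messages your OpenEdge version emits, which you can learn from your test runs):

```shell
# Hypothetical post-load log check: count success messages and flag
# any errors. "completed" and "error" are placeholder strings, not
# the real OpenEdge message text.
EXPECTED_TABLES=3
cat > load.log <<'EOF'
Binary load of table one completed.
Binary load of table two completed.
Binary load of table three completed.
EOF

SUCCESSES=$(grep -c "completed" load.log)
ERRORS=$(grep -ci "error" load.log || true)

if [ "$SUCCESSES" -eq "$EXPECTED_TABLES" ] && [ "$ERRORS" -eq 0 ]; then
  echo "load OK: $SUCCESSES tables"
else
  echo "load FAILED: $SUCCESSES/$EXPECTED_TABLES ok, $ERRORS errors"
fi
rm -f load.log
```

Checking for the right *number* of success messages matters as much as checking for errors: a missing success line means a step silently never ran.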

There are different schools of thought on maximizing dump and load performance. Some people prefer to do it offline, sequentially, and others do it online with many tables running in parallel. Before you go too far down that road, see how your performance compares to the time you have available. If a simple approach completes easily within your maintenance window then you don't have to spend time on further tuning.

For index rebuild: it won't be blazing fast on 10.1B. I can't stress this enough: get to 10.2B08 or 11.3.3 and you will have many more idxbuild tuning options and much more efficient runs, perhaps 5x to 12x faster than 10.1B.

Also, please advise what records per block I should choose for tr_hist (and for other tables, if any).
Given the mean record size of this table and an 8 KB block size, 64 RPB is more than enough. If this is your largest, fastest-growing table, be sure to use a cluster size of 512 for this area. In general, use 512 blocks per cluster for areas containing fast-growing tables and indexes. Use 64 for other areas containing indexes. Use 8 for areas containing small, static, or empty tables. For a single-table area, use a high enough RPB to ensure your blocks are optimally filled. For a multi-table area, RPB of 128 is a good choice with 8 KB blocks.
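A quick back-of-the-envelope check of that RPB suggestion, using the mean record size from the table analysis above (316 bytes). The per-record overhead and block header allowance below are rough assumptions for illustration, not exact OpenEdge internals:

```shell
# Approximate how many tr_hist records fit in one 8 KB block.
# BLOCK_HEADER and RECORD_OVERHEAD are rough estimates (assumptions),
# not exact figures from the storage engine.
BLOCK_SIZE=8192
BLOCK_HEADER=100        # rough allowance for block header/free space
MEAN_RECORD=316         # mean record size from dbanalys
RECORD_OVERHEAD=20      # rough per-record overhead estimate

PER_RECORD=$((MEAN_RECORD + RECORD_OVERHEAD))
FIT=$(( (BLOCK_SIZE - BLOCK_HEADER) / PER_RECORD ))
echo "Approx. records per 8 KB block: $FIT"
```

Only about two dozen records of this size actually fit in a block, so an RPB of 32 already covers it; 64 simply guarantees no block slots are wasted even if records shrink over time.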

Other basic storage design advice:
- Create an area for each large table.
- For each such table, create an area for its indexes.
- Create an area for each LOB column, if any.
- Create an area for tables that have define word indexes, if any (this will help you later on with being able to multi-thread index rebuilds).
- For a database of this size, it isn't beneficial to have many small fixed-size extents. You could just define one variable extent per area and it would perform fine. Although I prefer to have one fixed extent and one variable for the BI.
- Use variable-size AI extents, say 8 to 12 of them. (You have after imaging enabled in production, right?)

Other good storage area design guidance:
Storage Optimization Strategies, Tom Bascom
video: http://download.psdn.com/media/exchange_online_2010/1004TomBascom.wmv
slides: http://dbappraise.com/ppt/sos.pptx

So, will all this fix your application performance issues at month-end? Based on the information provided so far I have no idea. Maybe your other unmentioned parameters are badly set. Maybe your database storage is saturated. Maybe your clients are starved for CPU cycles and the database isn't the bottleneck at all. Maybe your BI cluster size is way too small and the database is spending all its time checkpointing. Maybe your queries are poorly bracketed. Maybe your code creates enormous disk-resident temp-tables. There are lots of ways an application can be slow and heroic database tuning can't make them all go away.

What I do know with certainty is that your database can be better structured than it is and better tuned than it is, and fixing these issues will reduce your physical I/O and make your remaining disk I/O and your caching more efficient. It will also prepare you better for the future. That's never a bad thing.
 

Cringer

ProgressTalk.com Moderator
Staff member
Another point is to make sure you set a cluster size of 64 for your index areas. I can point at a discussion about the reasons why if necessary, but suffice it to say it is advice from Rich Banville and Gus Björklund ;)
 

cj_brandt

Active Member
I'm guessing disk io is the cause of performance issues since the memory is so low and the db uses Type I storage areas. Do you have stats on what tables are read the most ?

Usually there are 5 to 10 tables that account for a large percentage of the total reads in the DB. If some of those tables are in a Type I storage area with other tables, I would look at moving them to a Type II storage area as my first action. Gradually, tables in a Type I storage area have their records scattered across the blocks in the area. This requires more disk I/O to read the records, which is probably a bottleneck on your DB.
 

TheMadDBA

Active Member
tr_hist is a common performance issue with QAD installs. It is basically the center of the universe and most queries written against it are very poor. Especially in older versions like this. QAD also has issues with locking, hence the larger than normal -L size. I wouldn't decrease that until you are 100% sure it isn't going to cause overflows.

Since you already have tr_hist in its own area and the indexes in their own area I would not expect a major improvement moving to Type 2 areas. But since you need a dump and reload for tr_hist there is really no reason not to use Type 2.

Like CJ said I would expect you have bigger issues with the other tables in Type 1 areas since each database block contains records from multiple tables. Upgrading to 10.2B08 should be near the top of your list for all the reasons given before.

Fastest way to dump will most likely be a multi-threaded binary dump, followed by a single threaded binary load (one file at a time) and an idxbuild. On your version the idxbuild is going to stink because of all of the indexes on tr_hist.

Also read what Rob posted and take it to heart... he gave you a lot of information that will be very useful for you.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Also, on idxbuild (at least on a fast version like 10.2B08 or 11.3): sometimes the fastest way to rebuild indexes on a large table is to run it more than once. The key is to make as much fit in your -TF buffer as possible, i.e. do as little sort/merge block I/O on disk as possible.

Once I had a situation where I had to rebuild indexes on a table with 12 indexes of varying sizes; I think this was in 10.2B06 or maybe 07. Tweaking as much as possible I got it down to about 50 minutes. With experimentation (testing a build of each index individually and determining their sort file utilization) I got it down to about 35 minutes by running 6 separate idxbuilds in a script: four for individual indexes and two others for four indexes each. Even though I was repeating the data scan/key build phase of idxbuild six times, I more than made up for that by being able to do sort/merge phases with no disk I/O.
 

TomBascom

Curmudgeon
As an objective you want to eventually get *all* of your data to type 2 storage areas. You may not be able to do that in one dump & load but that should be your target.
 

Shanmugam T

New Member
Hi Rob,

Thanks for your immediate response.

I have provided more information below, which may help us understand the problem in more detail.

1. Architecture

Progress: 64 bit OpenEdge 10.1B
OS : IBM AIX 6.1
CPU : 14 nos (Virtual)
RAM : 50 GB
Storage : IBM SVC

When the performance issue occurred (month-end), I could see that user processes were using 95% of the total 14 CPUs, the kernel 4%, and wait 0.1%. On a normal day CPU utilization is below 60% for user processes (max 6 CPUs utilized). The above information is derived from the topas command.

2. Storage area structure for tr_hist table and indexes

Last month, I moved the TRHIST storage area to a different hard disk to reduce I/O on one particular disk; the complete DB was residing on a single disk before.
I also increased -B to 76800; still not sufficient :( .

Attached in a file.
---------------------------------------------------

3. Dump and Load

I performed a test binary dump & load and index rebuild for the tr_hist table; it took almost 10 hours with the below parameters.

$DLC/bin/proutil db -C dump tr_hist <somepath>
$DLC/bin/proutil db -C truncate area TRHIST
$DLC/bin/proutil db -C load <somepath>/tr_hist.bd
$DLC/bin/proutil db -C truncate bi
$DLC/bin/proutil db -C idxbuild table tr_hist -B 10240 -TB 31 -TM 32

The dump and load were carried out in multi-user mode with -B 76800, 1 BIW, and 2 APWs.
The index rebuild was done in single-user mode (DB offline).

Please suggest parameters to improve the performance of the D & L activity.

---------------------------------------------------

4. Lock parameter value (-L): not sure why and when it was set to 100000.

5. Buffer pool


How much should I set -B? I know that ideally it should be 10% of the DB size.

But promon shows buffer hits at 99%; hence we didn't increase -B. Is that OK?

Pasted PROMON DB activity summary

---------------------------------------------------------------------------------------
Activity - Sampled at XX/XX/14 XX:XX for 101:16:42.

Event                   Total   Per Sec   Event                   Total   Per Sec
Commits              26410242      72.4   Undos                  120389       0.3
Record Updates       15450866      42.4   Record Reads      52053329488  142767.5
Record Creates        5291981      14.5   Record Deletes        3359282       9.2
DB Writes             4027504      11.0   DB Reads            890774508    2443.1
BI Writes             1693205       4.6   BI Reads               651900       1.8
AI Writes             1152434       3.2
Record Locks        155508677     426.5   Record Waits              796      0.00
Checkpoints              1033      0.00   Buffs Flushed            6061       0.0

Rec Lock Waits 0 % BI Buf Waits 0 % AI Buf Waits 0 %
Writes by APW 100 % Writes by BIW 49 % Writes by AIW 92 %
Buffer Hits 99 %
DB Size 122 GB BI Size 5468 MB AI Size 39 MB
FR chain 10269 blocks RM chain 878921 blocks
Shared Memory 694092K Segments 1

4 Servers, 405 Users (404 Local, 1 Remote, 0 Batch),5 Apws
---------------------------------------------------------------------------------------

6. Don't have test results for -directio.

7. AI and BI area.

BI extent 14 fixed size 400MB (.b1 to .b14) and one variable BI extent (.b15)

AI extent with fixed size 50MB - 40 nos (.a1 to .a40)

---------------------------------------------------------------------------------------
8. PROMON


Shared Resources:
Busy After Image Extent : /ai/db.a8
Number of database buffers (-B): 76800
Number of before image buffers (-bibufs): 75
Number of after image buffers (-aibufs): 150
Before-image truncate interval (-G): 0
No crash protection (-i): Not enabled
Maximum private buffers per user (-Bpmax): 64
Current size of locking table (-L): 100000
Locking table entries in use: 803
Locking table high water mark: 14583
Maximum number of clients per server (-Ma): 10
Max number of JTA transactions (-maxxids): 0
Delay of before-image flush (-Mf): 3
Maximum number of servers (-Mn): 10
Maximum number of users (-n): 1301
Before-image file I/O (-r -R): Raw
Shared memory version number: 64010174
Number of semaphores used: 1319
Broker status: Executing
BI Writer status: Executing
AI Writer status: Executing
Watchdog status: Executing
Number of page writers: 5
Number of self-service users: 410
Number of remote users: 0
Number of servers: 4
Number of shut-downs: 0
Number of monitors: 1


DB Status:
-----------
Database Status:
Database version number: 8342
Database state: Open (1)
Database damaged flags: None (0)
Integrity flags: None (1536)
Database block size (bytes): 8192
Total number of database blocks: 16044535
Database blocks high water mark: 11167500
Free blocks below highwater mark: 10243
Record blocks with free space: 879005
Before image block size (bytes): 16384
Before image cluster size (kb): 16384
After image block size (bytes): 16384
Last transaction number: 1486630094
Highest file number defined: 0
Database created (multi-volume): 09/xx/12 19:44
Most recent database open: 12/xx/14 02:24
Previous database open: 12/xx/14 02:24
Local schema cache time stamp: 07/xx/14 11:41
Most recent .bi file open: 12/xx/14 02:24

We don't have the -tablerangesize and -indexrangesize startup parameters set for the database. I pasted below the output from the _TableStat VST. The DB was restarted four days ago.
 

Attachments

  • structure.txt (3.6 KB)

Shanmugam T

New Member
Unable to find tr_hist stats information in table stats VST :(
 

Attachments

  • FILE_IO_12.txt (41.3 KB)
  • tableio.txt (75.8 KB)
  • indexio.txt (5.1 KB)

Cringer

ProgressTalk.com Moderator
Staff member
That would suggest -indexrangesize and -tablerangesize parameters are set incorrectly for the number of indexes/tables you actually have.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
When the performance issue occurred (month-end), I could see that user processes were using 95% of the total 14 CPUs, the kernel 4%, and wait 0.1%.
Your first step in that case is to determine which processes are using CPU. If they are client processes, use proGetStack or Client Statement Caching (or just ask the user) to determine what code they are running.
3. Dump and Load
I performed a test binary dump & load and index rebuild for the tr_hist table; it took almost 10 hours with the below parameters.
$DLC/bin/proutil db -C dump tr_hist <somepath>
$DLC/bin/proutil db -C truncate area TRHIST
$DLC/bin/proutil db -C load <somepath>/tr_hist.bd
$DLC/bin/proutil db -C truncate bi
$DLC/bin/proutil db -C idxbuild table tr_hist -B 10240 -TB 31 -TM 32
Dump and Load carried out in multi user mode with value -B 76800 with 1 BIW and 2 APWS
Index rebuild in single user mode(DB offline).
Please suggest parameters to improve the performance of the D & L activity.
You really will get much much better performance by upgrading your licenses to 10.2B08. Is this possible? If not, talk to your vendor and get a set of 10.2B or 11.3 30-day evaluation licenses for this testing.

I assume you aren't doing this testing on your prod box. What are the specs of the test machine? Do they match prod? Does it have any other workload?

You appear to be dumping and loading in-place with the same database. This is going to slow you down, because all the I/O is on the same disks, and it introduces risk. If you can, create a copy of the database to be your load target. Dump your files to a separate disk so your reads and writes aren't happening in the same place. E.g.: dump from DB on disk A to dump dir on disk Z. Then load from disk Z to DB on disk A (or spread across A & B, etc.).

Dump:
Option 1: try a multithreaded dump of tr_hist
Option 2: try proutil dump with -RO
Option 3: try dumping with a different index (note this changes the order in which data is loaded to the target)
Also, try running proutil idxcompact on your source DB before you dump. It may help performance.

Load:
Option 1: try an offline load
Option 2: try an online load with something like:
rfutil db -C aimage end
proserve db -B 500000 -spin 5000 -bibufs 500 -i
probiw db
proapw db (x 4)
proutil db -C load tr_hist.bd (if you dumped multi-threaded, use the -dumplist option)
After the load, shut down immediately because you started the DB with no-integrity. You must take a full backup before running idxbuild.

Idxbuild:
How many indexes are there on tr_hist?

10.1B:
You don't have many tuning options. Set -TB and -TM to max. Set -SG at least as high as the number of indexes on your table. Default is 48; it doesn't matter if you set it too high. Use -T to write the temporary files to a non-database disk if at all possible. Better, use -SS with a sort specification file to specify multiple directories on different disks to spread out the I/O load.

10.2B08/11.3.3:
proutil db -C idxbuild table tr_hist -B 1024 -TB 64 -TM 32 -SG 64 -TF 80 -SS sortfile.srt -threadnum 2 -mergethreads 4 -pfactor 80 -rusage > idxbuild.log
Answer "y" when asked if you have enough room for sorting.

Depending on the number of indexes you're building, you can experiment with different values of mergethreads and threadnum to see how they impact performance. If you go too high remember you will use a very large number of file handles as every thread opens every file in the database. You will want to run this as root, or adjust ulimit accordingly. If idxbuild crashes, there is no crash recovery as it does not use the BI file. You must restore from your backup.

As I said before, if you get to use a modern release of OE, experiment with multiple consecutive runs of idxbuild instead of one run that rebuilds all the indexes. The key is getting your sort/merge blocks entirely memory-resident. The -rusage option adds messages like the following to the output at the end of each area:
Code:
Temporary sort file at /temp1/dir1/ used up 0K of disk space. (11480)
Temporary sort file at /temp2/dir1/ used up 0K of disk space. (11480)
A total of 0K of temporary sort disk space was used for area 6. (11483)

Temporary sort file at /temp1/dir1/ used up 8724096K of disk space. (11480)  
Temporary sort file at /temp2/dir1/ used up 7943552K of disk space. (11480)  
A total of 16667648K of temporary sort disk space was used for area 9. (11483)
In this example, the first area looks good: no writes to the temp files on disk. For the rebuilds in the second area, we did write 16 GB of data to the two directories specified in the -SS file, so there is room for optimization.
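A rough sanity check of whether a rebuild like that second one could have stayed in memory on this system, assuming the 50 GB of RAM reported earlier in the thread. This treats -TF as a percentage of system memory and ignores memory used by other processes, so it is an estimate only:

```shell
# The second-area rebuild above spilled ~16 GB to the sort directories.
# Estimate whether a -TF 80 sort buffer on a 50 GB machine could have
# held it. (Illustrative arithmetic; real usable memory is lower.)
RAM_KB=$((50 * 1024 * 1024))     # 50 GB expressed in KB
TF_PCT=80
SPILLED_KB=16667648              # total from the -rusage output above

TF_BUFFER_KB=$(( RAM_KB * TF_PCT / 100 ))
echo "sort buffer: ${TF_BUFFER_KB}K, spilled: ${SPILLED_KB}K"
if [ "$TF_BUFFER_KB" -gt "$SPILLED_KB" ]; then
  echo "spill would plausibly fit in memory"
fi
```

If the estimate comes out close, that's exactly the case where splitting the rebuild into multiple runs (so each run's sort data fits under the -TF buffer) pays off.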

Remember to re-enable after imaging after the index rebuild! Then take another backup.
4. Lock parameter value (-L): not sure why and when it was set to 100000.
Promon shows us the why: your lock table HWM is currently 14583, above the default of 8192. In past starts of the DB it may have been even higher.
Progress: 64 bit OpenEdge 10.1B
OS : IBM AIX 6.1
CPU : 14 nos (Virtual)
RAM : 50 GB
Storage : IBM SVC
Also increased -B to 76800, still not sufficient :( .
With 64-bit OpenEdge and 50 GB of RAM there is no reason to be using such a tiny buffer pool, unless this is a shared server with some workload other than this database and its clients. Small changes in -B are unlikely to produce large changes in application performance. Do you know how much RAM you have available once you factor in the memory used by your servers and clients?
5. Buffer pool
How much should I set -B? I know that ideally it should be 10% of the DB size.
I disagree. The job of -B is to cache your most frequently-read data so it can be accessed quickly, avoiding disk I/O. 10% of DB size would be ideal only if the subset of data you read frequently happens to be 1/10 the size of your DB. You want to size -B so that it is large enough to contain the "working set" of your application, the data it reads most often. This can be challenging to determine, as there are no metrics that measure it exactly, and it can and will change over time based on changes in your data, application code, and your users' behaviour.

You don't want to set -B so high that you start swapping, as that will kill performance. You don't want to set it higher than the size of your DB as that's a guaranteed waste of RAM. In your case that's not a worry as your DB is larger than total RAM. In short, you want to set it as high as you can without causing memory starvation in other processes. Then let the application run for a while and keep an eye on empty buffers (promon R&D 1 7). If, after an extended period of time you still have empty buffers, that means no block read into the buffer pool since the DB started has ever been evicted. So any data that is being re-read is being read from memory, not disk.
But promon shows buffer hits at 99%; hence we didn't increase -B. Is that OK?
No. If a lousy programmer writes a bad program that scans a 100-record reference table 10 billion times per day, guess what happens to your buffer hits? 100%! Does that mean your application is performing well? No. A low buffer hit ratio generally means bad performance. But the converse is not true: a high buffer hit ratio is not an indicator of good performance. It really doesn't tell you much, except that your ratio isn't low.
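To put rough numbers on that, here is the same point worked through with the disk-read rate from the promon output earlier in the thread (2443.1 DB reads/sec) and an assumed 99% hit ratio. Promon rounds the ratio, so the true access rate may differ; this is illustrative arithmetic only:

```shell
# If 2443 disk reads/sec are the 1% of buffer accesses that miss,
# the database is doing ~244,300 buffer accesses/sec, and still
# hitting disk 2443 times every second. A 99% hit ratio can hide
# an enormous amount of physical I/O. (Numbers are illustrative.)
DISK_READS_PER_SEC=2443
MISS_PCT=1

ACCESSES_PER_SEC=$(( DISK_READS_PER_SEC * 100 / MISS_PCT ))
DISK_READS_PER_HOUR=$(( DISK_READS_PER_SEC * 3600 ))
echo "~$ACCESSES_PER_SEC buffer accesses/sec"
echo "~$DISK_READS_PER_HOUR physical reads/hour"
```

Nearly nine million physical reads per hour is a lot of disk time to be hiding behind a "good" ratio, which is why the ratio alone tells you so little.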
6. Don't have test results for -directio.
Then I think you should seriously question why you are using it. If you have a test machine whose specs match production, and if you are able to reproduce your month-end problems in test, do so with and without -directio and see if you can measure a difference (assuming you can suitably reproduce the conditions in prod).
7. AI and BI area.
BI extent 14 fixed size 400MB (.b1 to .b14) and one variable BI extent (.b15)
AI extent with fixed size 50MB - 40 nos (.a1 to .a40)
You're using AI; that's good. You have lots of fixed-size extents and if they all fill your DB will stall or crash, so I hope you have good monitoring in place for AI extent switches and empty AI extents.

You have lots of tiny extents in your data and BI areas. This means you are consuming a lot more file handles than you need to, and it doesn't help your performance.
8. PROMON
Shared Resources:
Your -bibufs is low, given your database size and user count. It isn't expensive to increase it. If promon shows empty BI buffer waits, increase -bibufs, say to 300. Set -aibufs to the same value.

You are configured for 100 remote users. One screen above shows one remote user connected, the other shows none, and only 4 servers running. If -Mi is at the default of 1 that means that at most 4 remote users connected since the DB started. Do you really need to be able to connect 100 remote users? This won't necessarily help with performance; just curious.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Unable to find tr_hist stats information in table stats VST :(
That's because -tablerangesize is set too low for your schema. And -indexrangesize isn't set at all, as you only have stats for 50 indexes. To determine appropriate parameter values, try this:
Code:
find last _file no-lock where _file._tbl-type = "T" use-index _file-number.
find last _index no-lock where not _index._index-name begins "_" use-index _index-number.
display _file-number _idx-num.
Once you restart the DB you will have access to CRUD stats for all of your tables and indexes. Then you will be able to sort those stats by reads and see which tables and indexes your application accesses most.
 

TheMadDBA

Active Member
What everybody else said about -tablerangesize and -indexrangesize... also make sure you do this for every database if you are running QAD in multiple databases.

Try the binary dump like this:
$DLC/bin/proutil db -C dump tr_hist <somepath> -thread 1 -threadnum 16

That will run up to 16 threads to dump the tr_hist table... how many depends on the actual index being used and some internal Progress logic. It will create X number of files and you just load those in order.
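One small gotcha with those numbered dump files, sketched below: a plain lexical sort puts tr_hist.bd10 before tr_hist.bd2, so order them numerically when scripting the loads. The filenames are illustrative, and GNU sort's -V flag is shown; AIX's native sort may need a numeric key sort instead:

```shell
# Multi-threaded dump produces numbered files; load them in numeric
# order, not lexical order. (Filenames illustrative; sort -V is GNU,
# so on AIX you may need something like sort -t. -k3n instead.)
FILES="tr_hist.bd tr_hist.bd10 tr_hist.bd2"
LEXICAL=$(printf '%s\n' $FILES | sort | xargs)
NUMERIC=$(printf '%s\n' $FILES | sort -V | xargs)
echo "lexical: $LEXICAL"
echo "numeric: $NUMERIC"
```

The -dumplist option mentioned earlier sidesteps this by recording the file list for you, but any hand-rolled load loop needs to get the ordering right itself.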

I have never been able to get a single threaded load to run as fast as a properly tuned multi-user mode either using -i and lots of APWs or without -i and lots of APWs and the BI cluster size set pretty high.

Like Rob said idxbuild is going to be a problem in your version... you are probably better off building fewer indexes at a time since there are sooooo many on tr_hist.

You should also look into upgrading QAD. They have made improvements over the years on some of the core functions/reports. It isn't perfect but it does get better.

If you can't update and have source code... start fixing those ugly where clauses :)
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
And about the load: I should also have mentioned truncating the BI file. Use the maximum cluster size (262128 KB), otherwise the BI could be a bottleneck. Set it back to your 16 MB value when your load is done.
 

Shanmugam T

New Member
No problem. Let us know how it goes with your dump and load and your month-end performance.
I could not get downtime. However, I increased the -B value to 250000. Can you please tell me why modified buffers are showing a negative value?

Total buffers: 250002
Hash table size: 70289
Used buffers: 250002
Empty buffers: 0
On lru chain: 250001
On apw queue: 0
On ckp queue: 318
Modified buffers: -343
Marked for ckp: 318
Last checkpoint number: 281
 

Shanmugam T

New Member
Should I increase the -B value further?
I checked in promon -> R&D -> 1 -> 14.
There were only 10 MB of free segments.
I checked with users; a report they ran took the same time before and after modifying the -B value (50000 to 250000).

Please suggest some options to improve DB performance, especially for running the month-end reporting programs.
 