Please give me some advice about DB startup parameters.

rainylsh

Member
Recently, users told me the ERP system runs slowly sometimes.
Please give me some advice about DB startup parameters.
DB size is about 12G, OS: WIN2003, MEM: 2G, CPU: 3G.
pf file:
-L 80000
-TB 4
-TM 15
-inp 5000
-rand 2
-B 5000
 

RealHeavyDude

Well-Known Member
AFAIK, -inp and -rand are not database startup parameters; they are only relevant for client runtime sessions.

  • What do you mean by runs slowly sometimes?
  • Are there specific processes which have performance problems?
  • Are there specific users who complain about performance problems - what are they complaining about?
  • Are queries slow or do you have performance problems processing large amounts of data?
There is no way to give you "magic" advice on database startup parameters - these are usually the results of performance tuning.

  • What else is running on the machine (virus scanner) - is it also a file server?
  • Do you have a workgroup or enterprise license?
  • If you are running an enterprise license, are the background writers (AIW, BIW and APWs) running?
  • If you are running an enterprise license, is your machine a single-CPU machine - what does CPU: 3G mean?
BUT:
  • Your -B (buffer pool) seems way too low. Even if the block size of your database is 8K (you give no information), that's only about 40 MB.
  • Maybe you should have a look at your checkpoints - bi cluster size.
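As a rough sanity check, the buffer pool's memory footprint is just -B multiplied by the database block size. A minimal sketch of that arithmetic (plain Python, using the values from this thread):

```python
# Approximate buffer pool memory: -B (number of buffers) x block size.
def buffer_pool_mb(b_param: int, block_size: int) -> float:
    """Return the approximate buffer pool size in megabytes."""
    return b_param * block_size / (1024 * 1024)

# -B 5000 with an 8K block size is only about 40 MB:
print(buffer_pool_mb(5000, 8192))
# ... and with 4K blocks it is only about 20 MB:
print(buffer_pool_mb(5000, 4096))
```

Either way, only a tiny fraction of the 2 GB of RAM is being used for database buffers.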
Just a few thoughts, RealHeavyDude.
 

Casper

ProgressTalk.com Moderator
Staff member
First of all: Always state your Progress version and any other relevant information regarding your issue.

The number 1 cause of most performance problems is bad coding.

As RHD already pointed out very clearly, you need to give us a lot more information than you have so far.

You should always ask yourself the questions RHD asked you. The answers will guide you to the root cause of the problem. Right now you just say it is slow, and for some reason you suspect that the parameters are the problem. That is, in my opinion, a bit too hasty a conclusion.

Although it is obvious your parameters need tweaking too.

Casper.
 

rainylsh

Member
Thanks RHD and Casper.
1. It runs slowly when opening an interface or posting a transaction, once in a while.
2. CPU:3G means 3.0 GHz.
3. The machine is also a file server. I've suspected this, but can't be sure.
4. The Progress help document gives the multi-user default (-B) size as (8 * users), where "The users value is specified by the Number of Users (-n) parameter". -n was set to 40, and 5000 is larger than 8 * 40.
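For comparison, the documented default works out to a tiny value, which is why "larger than the default" says very little. A quick check of the numbers (plain Python; the 8K block size is an assumption):

```python
# The multi-user default for -B is 8 blocks per user (-n).
n_users = 40                      # -n from this thread
default_b = 8 * n_users           # the documented default: 320 buffers
current_b = 5000                  # the value in the posted .pf file
block_size = 8192                 # assumed 8K database block size

print(default_b)                          # the bare-minimum default, under 3 MB
print(current_b * block_size / 2**20)     # still only about 39 MB of 2 GB RAM
```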
 

RealHeavyDude

Well-Known Member
Please don't get mad at me, BUT:
  1. "Runs slowly when opening an interface or posting a transaction, once in a while" is way too unspecific. It could be anything or nothing.
  2. What's interesting here: is it a single- or multi-CPU machine, and do you have an enterprise or workgroup license (an enterprise license supports multiple CPUs whereas workgroup does not)?
  3. I know it is common BAD, BAD, BAD practice to have "everything" running on a Windows server just because you can. If the ERP system has any value to the company, it should be worth having its own dedicated server.
  4. Most default values for Progress startup parameters are set extremely conservatively low. My impression is that the reason is so Progress could run on any possible minimalistic hardware configuration underneath a supported operating system. Some of the startup parameters are good candidates for tweaking; most of them you'll probably never need to touch. Which ones are good candidates and which you should leave alone is very specific to your system. The -B (buffer pool) is definitely one for which the default value is way too low, and it's very relevant for performance. My advice here: make as much memory available as possible for the database, but keep in mind that there are other things running on the machine which will also compete for memory. If you have 2 GB of RAM in the machine, first check how much of it is available and then use most of that for your database. Just one note: the buffer pool is not the only part of a Progress database that consumes memory.
HTH, RealHeavyDude.
 

TomBascom

Curmudgeon
You still haven't mentioned what version of Progress this is.

It would also be interesting to know whether you have a "Workgroup" or an "Enterprise" Progress license.

Which "ERP" system is this? Some of them have well known performance issues which might be specifically addressed.

Your system has 2 GB of RAM. -B is merely 5000. With 4K db blocks that means you are using just 20 MB of the 2 GB. Start your tuning efforts by setting -B to something more reasonable, like 100000.

There are many, many more possibilities but without more information there really isn't any way to say what the real problems are and how you might resolve them.
 

rainylsh

Member
I hadn't thought of that.
1. Runs slowly... I thought it may be because the machine is also running as a file server. When a user copies or writes a big file, the machine may slow down...
2. CPU INFO: 121401.JPG
3. We have an enterprise license.
4. Having talked with you, I want to do 2 things. 1: dedicate the machine to being the DB server. 2: set the -B parameter to a big number, for example 50000 with 8K db blocks.
5. Progress version is 9.1
6. ERP system: Syteline 6
Thank you two again.
 

RealHeavyDude

Well-Known Member
Since you have an enterprise license:
  • I would set -B to at least 100,000 (as Tom already suggested); with an 8K block size you will consume 800 MB for your buffer pool, leaving 1.2 GB of memory for the OS and other applications.
  • Make sure you have the background writers started against your database:
    • The Before Image Writer (you can start it with probiw <db-name>)
    • The After Image Writer (you can start it with proaiw <db-name>) - you do have after-imaging switched on, don't you?
    • At least one Asynchronous Page Writer (you can start it with proapw <db-name>)
  • If your machine is a multi-CPU machine, you should switch on the spinlock mechanism for allocating CPU resources. You can switch it on by supplying -spin n (where n could be 20,000 for a start) as a startup parameter when starting your database.
  • Since you are going to increase the buffer pool massively, you should also keep an eye on your checkpoint interval. If it becomes too short (less than 2 minutes), then you might increase your bi cluster size too.
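Put together, the checklist above might look something like this on the command line (the database name erpdb is a placeholder, and the -B/-spin values are starting points, not tuned figures):

```
proserve erpdb -B 100000 -spin 20000    (start the broker with a bigger buffer pool and spinlocks)
probiw erpdb                            (start the Before Image Writer)
proaiw erpdb                            (start the After Image Writer - only if after-imaging is enabled)
proapw erpdb                            (start an Asynchronous Page Writer)
```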
HTH, RealHeavyDude.
 

Casper

ProgressTalk.com Moderator
Staff member
It is still unclear to me what your performance problem is. Does it happen with only one program, or consistently?
Since copying large files slows down the application, you might want to reconsider your disk layout and/or the usage/purpose of the server.

Although changing parameters might improve performance, I think you should first pinpoint your actual problem, then look for the reason why it happens, and then act on that. Increasing -B is obvious and will most likely improve overall performance.

What I normally do is make sure I have enough disks to be able to manage the expected activity. (The activity can be measured through promon and system performance tools.)

Dbanalys and promon output help me find the (most likely) optimal database structure. If needed, I do a reload of the database to get the data into this structure. I have some rules of thumb for initial parameter settings which turn out to work quite well. (Important for measurement at a later stage is setting the -tablebase, -tablerangesize, -indexbase and -indexrangesize parameters.)

After this I keep monitoring the database and server for activity and performance. If a problem occurs it is quite easy to see what program is causing the issue, and I can then focus on that.

Regards,

Casper.
 

TomBascom

Curmudgeon
Casper is quite right -- without a clear description of your problem this isn't likely to be a terribly productive exercise.

OTOH... as described, improving your configuration is something like shooting fish in a barrel ;)

0) Upgrade. 9.1C is ancient, obsolete and unsupported.

1) Dramatically increase -B. 100000 is a good start. 200000 is not unreasonable.

2) Set -spin to something between 5000 and 20000. Personally I recommend the lower end of the spectrum for up to date releases. But you are running 9.1C which is ancient, obsolete and unsupported so a higher value may be ok.

3) Increase your bi cluster size. Start with:

proutil dbname -C truncate bi -bi 8192

4) Start a BIW and at least one APW.

5) You don't mention whether or not you are running after-imaging. You should be. Everyone with data that has value should be.

These are pretty basic. You can safely take these steps without knowing much at all about your application or the specific performance problem.
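In .pf-file form (the style used at the top of this thread), steps 1 and 2 might look like this - starting values only, to be tuned from monitoring:

```
-B 100000
-spin 5000
```

The bi cluster size change (step 3) is done offline with the proutil command in step 3, and the BIW and APW processes (step 4) are started separately with probiw and proapw after the broker is up.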
 

rainylsh

Member
1. We will move the file service to another machine.
2. I've set -B to 50000, but it seems to be bigger than the machine can support. This morning I checked the machine's RAM usage. There is more than one Progress service because -Mn was set to 5. I thought that means 5 * 400 MB = 2 GB, which is bigger than RAM, so I've set it to 3.
 

RealHeavyDude

Well-Known Member
Your assumption is wrong:

When you start the database or database server, you really start the database broker.

The buffer pool is allocated by the database broker only once and only by the database broker. In a previous post I mentioned that the buffer pool is not the only thing that a Progress database will have in memory.

The database broker's job is to take control of the shared memory, to keep the in-memory parts of the database and the files on disk in sync (together with the background writers), and to listen for connection requests from remote users.

The remote users then are served by the servers which are controlled by the broker. The -Mn parameter determines how many servers the broker can start. With the -Ma and -Mi parameters you can determine how many remote users will be served (-Ma maximum, -Mi minimum) by one server.

You need to add roughly 5 to 10 MB per remote user connected to the database to the overall memory consumption your database will need. The worst case would then be -Mn * -Ma * 10 ...
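That rule of thumb can be sketched as follows (the per-user figure is RHD's 5-10 MB estimate, and the -Mn/-Ma values are hypothetical examples, not values from this thread):

```python
# Worst-case memory for remote clients: each of the -Mn servers can
# serve up to -Ma users, at roughly 5-10 MB per connected user.
def remote_client_memory_mb(mn: int, ma: int, mb_per_user: int = 10) -> int:
    return mn * ma * mb_per_user

# Hypothetical example: -Mn 5 servers, -Ma 5 users per server:
print(remote_client_memory_mb(5, 5))   # memory on top of the buffer pool
```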

HTH, RealHeavyDude.
 

TomBascom

Curmudgeon
To underline what RHD said...

-B is shared. There is not 400MB allocated per user (or per server) and you cannot determine memory allocation by summing up the memory usage of each process.

50,000 is far better than 5,000 but it is still quite modest.
 

rainylsh

Member
But I don't understand why there are 5 processes named _mprosrv.exe, each using about 400 MB of memory in Task Manager. Or is it just virtual?
 

TomBascom

Curmudgeon
Task manager only shows the total that each process addresses. It does not distinguish between private memory and shared memory. So the shared portion is being counted 5x.
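The over-counting can be illustrated with plain arithmetic (the 400 MB figure is from Task Manager in this thread; the per-process private memory figure is an illustrative assumption):

```python
# Task Manager reports shared memory inside every process that maps it,
# so summing per-process totals counts the shared buffer pool 5 times.
shared_mb = 400     # one shared-memory segment, mapped by all _mprosrv.exe
private_mb = 5      # assumed private memory per process (illustrative)
processes = 5

naive_total = processes * (shared_mb + private_mb)   # what summing suggests
actual_total = shared_mb + processes * private_mb    # what is really allocated

print(naive_total)    # looks like more than the 2 GB of RAM
print(actual_total)   # the shared 400 MB is counted only once
```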
 

rainylsh

Member
Thank you both. Now I clearly know what I need to do.
I've set -B to 50000, will move the file service to another machine, and want to try setting -B to as big a value as the machine can support.
 