Startup parameters for Progress database -- Optimization

The online backup/user count bug was introduced in 10.2B01 and fixed in SP02. The online backup/_License bug existed from 10.2B SP01 through SP04 inclusive and was fixed in SP05. So when you install 10.2B, make sure you install the latest (and last) service pack: SP08. At present Progress is saying that there will be no more service packs for 10.2B.

As Tom said, Version 10 is almost 5 years old. Version 11 is now almost 3 years old and is quite stable. If I were you, I would seriously consider moving to 11.x (the current version is 11.4) if you are able. And even if you can't move to it in the short term, you should have a plan for getting there sooner rather than later.
Hi Rob,

I am planning to move to 10.2B08 and to test 11.4 early next year.

Thanks a lot.
 
That's the 3rd reported -B value so far...

Not that hit ratio is the be all and end all of tuning - it isn't. But since you mention it:

A hit ratio of 90% is horrifically bad. You almost have to work at it to be that bad. That basically means that 1 in 10 record ops has to go to disk.

95% isn't horrific - it just sucks.

98% is "you aren't trying very hard but at least it isn't completely embarrassing".

99% and you're starting to get somewhere.

Also - the relationship between -B and hit ratio follows an inverse square rule. To move your hit ratio halfway from where it is towards 100% (i.e. to halve the miss rate) you need to increase -B 4x. IOW if -B 35000 gets you a 90% hit ratio for a certain workload, you need to increase it to 140000 to improve to 95%, and roughly another 4x (somewhere around 500000) should get you close to 98%.
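
To put rough numbers on that rule of thumb, here is a small shell/awk sketch. It is only an illustration under the simplified assumption above (each 4x increase in -B halves the miss rate, starting from -B 35000 at a 90% hit ratio); real results depend entirely on your workload:
Code:
# Assumption: each 4x increase in -B halves the miss rate.
# Starting point: -B 35000 with a 90% hit ratio (10% miss).
awk 'BEGIN {
    b = 35000; miss = 0.10
    for (i = 0; i < 3; i++) {
        printf "-B %-7d -> hit ratio ~%.1f%%\n", b, (1 - miss) * 100
        b *= 4; miss /= 2
    }
}'
# -B 35000   -> hit ratio ~90.0%
# -B 140000  -> hit ratio ~95.0%
# -B 560000  -> hit ratio ~97.5%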
Hi Tom,

Well, no performance issues have been reported as of now, which could be due to the SSD disks we are using. But my plan is already to move in gradual steps: -B from 35000 to 50000 to 75000 to 100000 and so on, until the hit ratio is above 97%.

Thanks a lot for your information.
 

Rob Fitzpatrick

Note that in Linux you may have to change the kernel shared memory parameters to permit you to actually use that memory. Some Linux distros set them too low by default.

These are (default) values from a Linux box of mine:
Code:
# ipcs -l
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 32768
max total shared memory (kbytes) = 8388608
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 1024
max semaphores per array = 250
max semaphores system wide = 256000
max ops per semop call = 250
semaphore max value = 32767

------ Messages: Limits --------
max queues system wide = 32768
max size of message (bytes) = 8192
default max size of queue (bytes) = 16384
This means only 8 GB of RAM is available, in chunks of 32 MB, and 4096 of those chunks in total, for database shared memory on a RHEL 6.3 box with 128 GB of RAM. Not very useful.

Upshot: you may need to change SHMMAX, SHMALL, and SHMMNI. See this KB article for more details.
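
For illustration only (the numbers below are hypothetical, sized for a 64 GB shared memory allowance; pick values appropriate to your own box), those settings correspond to these sysctl keys, which you can set in /etc/sysctl.conf and apply with sysctl -p:
Code:
# Illustrative /etc/sysctl.conf entries -- examples, not recommendations.
# SHMMAX: largest single shared memory segment, in bytes (here 64 GB)
kernel.shmmax = 68719476736
# SHMALL: total shared memory, in pages (64 GB / 4 KB page size = 16777216)
kernel.shmall = 16777216
# SHMMNI: maximum number of shared memory segments
kernel.shmmni = 4096

# Apply without rebooting, then verify with ipcs -l:
# sysctl -p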
 
Hi Rob,

Ok. Will consider it also.

Thanks.
 

TomBascom

Curmudgeon
Making small, gradual changes to -B is pointless.

As I said -- the effectiveness of -B follows an inverse square law. To see any noticeable impact you need to make very large changes. Double it, quadruple it. Multiply it by 10. Those are the sorts of changes that you need to make. Piddling little changes like going from 35000 to 50000 will result in any impact being drowned out in the "noise" of ordinary workload variation.
 
Hi Tom,

The gradual change is to first see how it impacts things. We do have some DBs with a 99% hit ratio at the 35000 setting. I will definitely follow your suggestion.

Thanks a lot.
 
Hi Tom,

We increased -B from 35000 to 50000 on one of our 120 GB databases and the buffer cache hit ratio increased from 90% to 95%. We will monitor for some time and then increase it to 75000.

Thanks and Regards
 