Tuning Question

bubbawny

New Member
Hi Folks!

I'm a new guy here! I've recently started a position with a firm that uses Syteline from Mapics. The application currently runs on Progress 9.1C.

I'm IT Manager/Sr. Systems Analyst for the firm. It was brought to my attention that our DB server was running slow. I've since run several weeks of PERFMONs watching memory, disk, processor, and network counters. There's a heavy amount of paging happening which is also causing a high level of disk I/O.

We hired one of the folks from Mapics to perform an assessment of our server. Unfortunately, it seems to have confused us.

We originally had 2GB of RAM on a W2K Server SP4 box. The LOG files and the DB files are on separate RAID 1+0 drives. The system page file (swap) is on its own physical disk. The system originally had (2) processors.

When I ran my PERFMONs, it appeared that disk I/O and/or memory was the problem. Processor utilization never peaked at anything indicating the CPUs were overworked.

The firm has a Progress/Syteline programmer who is also the DBA. He's personally been in charge of this box. Since I started with the firm, I've been given shared responsibility for helping improve performance.

The programmer recommended adding more processors even though the PERFMONs indicated no issues. We now have (4) Intel processors in the box with no appreciable gain in performance.

The system was also upped to 4GB of RAM. And here's where it gets interesting. The Syteline tech who performed an assessment said we have our -B parameter too low. He recommended raising it from where it was (40,000) to 300,000. Prior to making this change, memory allocated to the server DB processes hovered at 1.2 GB of committed RAM. After adding the additional 2 GB of physical RAM, the committed RAM did not change.

We recently altered the -B parameter incrementally over several days. When we hit 175,000, we got an error stating that the Progress DB could not spawn another broker. We've since moved -B down to 125,000.

The O/S tools (perfmon, pslist -M) show that more memory has been allocated, but we see no indication of any real performance gains.

I'm sorry to mention this here, but my personal background does not include tuning of Progress DBs, so I'm truly at a loss as to how I can help improve our DB performance.

By moving the -B parameter from 40k to 125k, we've seen less free RAM left sitting on the table, and the Progress server processes seem to be using considerably more RAM. Yet there are no appreciable gains in performance.

So: From a newbie perspective, what can I look into to determine how to properly tune the DB? What settings would you recommend we look at? Are there any tuning white papers for Progress?

Lastly, are there any tools built into 9.1C that we can use to baseline transactional performance, so that we can monitor the effect of the changes and see whether there are any net real-world gains?

If there's anyone who could help this newbie, I'd be grateful! My email is listed and open for any direct contact.

Warmest regards to you all!
 

jongpau

Member
There is a tool with Progress called promon, which can show you all sorts of information about your database and what's happening.

To start it, just go into a shell, cd to your database directory and type:
promon databasename

There is also information on performance tuning in the following pdf file (installation and configuration guide):

http://www.progress.com/progress_software/products/documentation/start/docs/getstart/icu/icu.pdf

The following is a link to the entire collection of books online:
http://www.progress.com/products/documentation/start/index.ssp


Also, with your Progress product CDs, you *should* have also received one or more electronic documentation disc(s). These contain the entire collection of books belonging to your Progress version, so installing those is highly recommended. Within those edocs you should be able to find what you are after.
 
Promon is also accessible in a more user-friendly Windows version via the Protools directory.

There is also Tom Bascom's (free) Protop (Greenfield Technologies, I think).


These two utilities access the VSTs (virtual system tables) to get their statistics.
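
If you want to poke at the VSTs yourself from the Procedure Editor (connected to the database), a minimal sketch would be something like the following -- the table and field names are as I remember them from the VST docs, so double-check them:

Code:
/* rough sketch: list current connections straight from the _Connect VST.
   Unused user slots have the unknown value (?) in _Connect-Usr.          */
for each _Connect no-lock where _Connect-Usr <> ?:
    display _Connect-Usr _Connect-Name _Connect-Type _Connect-Device.
end.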


The progress docs online at Progress Software will give you a lot of info, as suggested.

There is a book devoted to Progress tuning by Dan Foreman - I've never read it though, so can't comment on it:

http://www.bravepoint.com/products/book_performance.html


The dba forum at www.Peg.com is the best place to ask further questions, as well as here.


Lee
 

ranger101

New Member
Long post, I didn't read it all,

but take a look at -Bp (maybe also -Bpmax)

in your Progress documentation,

Startup Command and Parameter Reference.


see you all in peg.com

the only place to be !
 

balebob

New Member
Tuning the -B parameter alone will only give you a little bit of gain in your DB performance. First of all, you have to know the % of database buffer hits. Here is the little formula that you can use:

% DB buffer hit = (db_requested - db_read) / db_requested x 100

Where,
db_requested and db_read can be found in 'promon', menu 3, menu 1

If your % DB buffer hit is below 20%, your DB buffer is overflowing and you might want to increase -B. If your % DB buffer hit is above 90%, your -B is okay.
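
If you'd rather pull those numbers with a quick 4GL query instead of promon, something along these lines should work -- only a sketch, and the _ActBuffer field names are from memory of the VST docs, so verify them:

Code:
/* rough sketch: buffer hit % computed from the _ActBuffer VST.
   These figures are cumulative since the database was started.  */
find first _ActBuffer no-lock.
if _Buffer-LogicRds > 0 then
    display
        _Buffer-LogicRds label "Logical reads"
        _Buffer-OSRds    label "OS (disk) reads"
        (1 - (_Buffer-OSRds / _Buffer-LogicRds)) * 100
            label "Buffer hit %" format ">>9.99".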

In addition to the -B parameter, you can also optimize buffer management by running APWs (asynchronous page writers). You can also optimize your DB BI I/O by moving the BI file to its own location with its own I/O controller if possible. Take a look at the -bibufs parameter as well. You can also increase the BI block size, which allows Progress to read and write more data at one time. You might also increase the BI cluster size so that checkpoints do not happen as often, which in turn allows your APWs to do their work efficiently. And add a BIW (before-image writer) to your DB.

BTW, did you grow your BI to a specific size? Pre-allocating a certain amount of space for the BI in your DB often helps a lot. Otherwise, when the last BI cluster is filled, Progress stops all activity and grows a new BI cluster, which in turn slows your system down.

On top of that, you can also delay BI writes using the -Mf parameter.
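
For reference, the kinds of commands involved in the BI suggestions above look roughly like this -- the database name and sizes are only examples and you should check the exact syntax in the docs for your version (the truncate/bigrow steps need the database shut down; the writers are started against the running database):

Code:
proutil mydb -C truncate bi -biblocksize 16 -bi 8192
proutil mydb -C bigrow 8
probiw mydb
proapw mydb

The first line sets the BI block size (16 KB here) and BI cluster size (8192 KB here), bigrow pre-allocates extra BI clusters, and probiw/proapw start a before-image writer and an asynchronous page writer.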

Hope these will give you some idea.

Good Luck!
Bob
 

Macery

New Member
I have been dealing with the same type of issues as you. I have done many of the same steps and still could not see much in the way of "noticeable" improvement. That made me take my focus away from "server side" tuning. I am now looking through our app and discovering that the code has not been written in a way that utilizes proper indexes, or we just don't have the indexes set up in a way that matches our business logic.

One thing you can do to see whether you are having the same issue there: in promon, on the Activity screen, get a baseline sample of your average record reads / DB reads, then check your current numbers while the users are experiencing the "slowdowns". In my case I was seeing as many as 50,000 records being read per second for simple procedures, which told me the server couldn't find proper indexes and was instead scanning every record in the tables it was using just to find result sets of fewer than 10 items.
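
If you want to capture that baseline without sitting in promon, a rough sketch against the _ActSummary VST (field names from memory, so verify them in your version's VST docs) could look like:

Code:
/* rough sketch: sample record reads over 60 seconds to get a
   reads-per-second baseline from the _ActSummary VST.          */
define variable iBefore as integer no-undo.

find first _ActSummary no-lock.
iBefore = _Summary-RecReads.

pause 60 no-message.   /* wait, then re-read the VST */

find first _ActSummary no-lock.
display (_Summary-RecReads - iBefore) / 60
    label "Record reads/sec" format ">>>,>>9.9".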
 

balebob

New Member
Totally agree with Macery. Index maintenance should be part of the DBA's weekly work, just in case the DBA is not already doing it... Basically there are two problems with indexes: corruption and underutilization. There are a couple of tools that allow you to analyze, fix, compact and rebuild your indexes. Take a look at the DB Admin book from Progress.
 

TomBascom

Curmudgeon
balebob said:
If your % DB buffer hit is below 20%, your DB buffer is overflowing and you might want to increase -B. If your % DB buffer hit is above 90%, your -B is okay.
As a rule of thumb a hit rate better than 98% is probably "ok". Anything less is "pretty bad".

But tuning to the hit ratio is a mistake. It is very easy to improve your hit ratio -- just read a very small table over and over and over in a loop:

Code:
do while true:
  for each small_table no-lock: end.
end.
Your hit ratio will improve dramatically...

The object of tuning the buffer cache is to reduce disk IO operations to a level that makes sense for your disk subsystem. As a rule of thumb this is probably about 50 IO ops per second. Remember -- these are random IO operations so those lovely sequential IO throughput rates that the manufacturer pitched don't apply.

Tuning -B in small increments is a waste of time. It follows an inverse square rule -- to halve IO operations you need to quadruple -B (roughly, it's more complicated than that and every database is different but this is the general idea). http://www.greenfieldtech.com/articles/protop.shtml has a nice feature called the BigB Guesstimator which shows how changes to your -B might be expected to impact disk IO operations and hit ratios if your logical IO rate stays constant.
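
A back-of-the-envelope version of that rule of thumb, just to show the shape of it (all the numbers below are invented):

Code:
/* rough sketch of the inverse square rule of thumb: disk reads scale
   roughly with 1 / sqrt(-B), so quadrupling -B about halves them.     */
define variable iOldB  as integer no-undo initial 40000.
define variable iNewB  as integer no-undo initial 160000.
define variable dOldIO as decimal no-undo initial 200.   /* disk reads/sec today */

display dOldIO * exp(iOldB / iNewB, 0.5)   /* square root via EXP */
    label "Projected disk reads/sec" format ">>>,>>9.9".   /* roughly 100 */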

But not all performance problems are -B problems. In fact most aren't. Most are coding problems related to poor data selection. A lot of times you can "throw hardware" at them and fix them (actually that's cheaper a lot of times) and sometimes you have to fix the code.
 

BONO

Member
Hello,
You wrote "The LOG files and the DB files are on Separate Raid 1+0 drives."
Does that mean the log files are on one disk and the DB files on another?
RAID 1 is a good configuration for DB performance.
The biggest performance improvement we've achieved came from using specific data and index areas stored on different disk systems. Determining which tables have to sit with which other tables in a specific area is not simple and must be analyzed.
I think the BI file has to be on a separate disk from the other areas. Then
the data and index areas have to be on separate disks from each other.

Data areas can be created by record size, so you can group tables with similar record sizes to optimize data exchange.

And after that you can try tuning the database parameters. If you have to resolve a performance problem, I think you have to begin with the basics:
hardware configuration and installation, and then the tuning parameters.
 
My main job role has always been as a programmer rather than a DBA, so I tend to 'err' towards programming solutions, and the majority of times I've delved in with the brief of optimising the database I've come back to poor indexing (or lack of use of indexes) or poor coding. My thoughts these days are that modern hardware is so powerful that, unless something really 'numb' has been done in the initial config, go for checks on indexes or client programs. Try identifying the areas of code that go the slowest and pick on one. Write records to log files in the areas of the program that appear to be going the slowest.

OK, you should be looking at the database end as well, but having worked with a number of apps that are badly written, this will usually give you much more return on your time.

Sadly, though, you frequently end up giving management the bad news that the application is in need of some serious re-programming. If you do this, though, make sure you have logs and can identify some poor coding.

I also find that the default parameters that come with a database are usually adequate.

If you do start tuning database parameters, then write down what has happened before and after changing a relevant parameter. In most cases it is also advisable to change one parameter at a time.

Just my thoughts.
 

vinod_home

Member
If you have decent hardware and your database was decently configured, then most of the time you would only end up with a maximum 10% gain by going into detailed configuration (if you get it all correct).

But if there is bad code (which will not improve with database tuning) and you can find the bad code, then your returns, as Progress_guy mentioned, are very high. I have seen programs that, after tuning, would run in 10% of the time (and I guess many people have seen that too). One of the best ways/tools to get that going is to run the profiler (not supported by Progress) on the programs that take a long time.

Include a -yx parameter on your client startup to give you a proc.mon file, which will show you the milliseconds each routine takes.
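
For example, the client startup line might end up looking something like this (the executable, procedure and .pf names here are just placeholders):

prowin32 -p mainmenu.p -pf client.pf -yx

When the session ends, the proc.mon file should appear in the working directory with the per-procedure call counts and times.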
 

joey.jeremiah

ProgressTalk Moderator
Staff member
Wow, a lot to read.

Was there a dump & load in recent years?!

Could be significant, especially with Type I storage areas.
 
Just another few thoughts.

Is it possible for you to get your environment onto a test machine? This would allow you to do a dump and reload and rebuild your database(s) using different parameters.

If doing this the first steps I'd do would be to create an empty database.

Create a valid .st file (see the Progress docs) and, IMPORTANT, I'd advise setting the block size the same as the OS block size:

prostrct create dbasename dbasename.st -blocksize 4096

The dbase gurus advise that this is very important (not sure about speed issues).
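
For what it's worth, the .st fed to that command might look something like the sketch below -- the area names, paths and records-per-block values are only placeholders and would need to be sized for your own data:

Code:
b /db/bi
d "Schema Area":6,32 /db/data
d "Data Area":7,64 /db/data
d "Index Area":8,1 /db/index

Here the BI file, the data area and the index area each sit on a separate disk, which ties in with the earlier suggestions about splitting I/O.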

As Joey advised, populate it with dumped data (i.e. a dump and reload) - be careful, as the process of dumping the data from the live database will impact the performance of the live machine.

The chances are, however, that with only one user on, in dev mode, on a separate machine, if there is no performance improvement then it is a coding problem. You can still play in this dev environment (and perhaps pick on a few programs to optimise, or look at programs and then add additional indexes to problem tables).

If the performance in the dev environment is improved, try getting a few users to test it (really hard) and possibly create some dummy jobs which simply run in batch mode reading and writing from some of the key tables. Does performance go down?

I've always had the luxury of a spare machine, but if you haven't then obviously this wouldn't be possible. For the test to be accurate the dev machine should be of the same spec in terms of processor(s) and memory, etc. (this is the difficult bit).

As I said, when changing parameters always write down and note any difference before and after, and try to change one at a time (not always possible, as sometimes you need to change another to compensate).

Sorry, I know we are bombarding you with a lot of info, but I think it's safe to say that most developers have been through the same type of scenario and will all have different thoughts on this.
 

markhope

New Member
Echoing an earlier post, I've often found that DB performance issues and huge amounts of DBA tuning work are caused by either:
- application development issues
- table indexing

Where the application in question is a package you don't have a lot of options for index changes or code optimisation - however - if it's an in-house app it would be worth looking for the routines that cause the most load and refining them (a compile listing is a great tool for this - looking at the indexes used for each "for each" / "find").
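
If it helps, this is the sort of thing I mean (the program name is just an example) -- compile with a listing and/or XREF and look for SEARCH lines flagged WHOLE-INDEX:

Code:
/* rough sketch: produce a compile listing and an XREF for a program.
   In the .xrf file the SEARCH lines show which index each
   FOR EACH / FIND uses; WHOLE-INDEX means no usable bracket was found. */
compile order-report.p listing order-report.lis xref order-report.xrf.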

E.g., working with one client who was considering contracting Progress technical services to investigate a major performance issue that was having commercial impact, we identified the code that was involved, analysed the use of indexes and recommended modifications that eliminated the problem within a matter of hours.

Recommend you balance the DBA tuning effort with application design issues.
 

methyl

Member
Remember that the units of "-B" are database blocks. The post does not state the database block size (try the promon Database Status enquiry). If you have the usual 8K block size, a -B of 300,000 would try (and fail) to maintain roughly 2.3 GB of shared memory. Very large "-B" values can cause the system to swap. Windows by default already takes up to 25% of memory for disk buffering.
 