Poor Performance with Progress on Fujitsu Siemens Solaris

cmm

New Member
Hello,

I have a problem at a customer site that runs an application we developed in Progress.
The system performs extremely slowly even though the hardware should be more than powerful enough.
This is one of the most powerful configurations we have deployed, yet performance is extremely poor.

The hardware/software configuration is:

The Server:
Fujitsu Siemens PrimePower 400
2 x SPARC 64 500 MHz CPUs
2GB RAM
RAID 7 Disk Storage accessible through 1GB FiberCat 4500
1GB LAN Card
SunOS 64-bit OS, version 8

Progress:
Server – Enterprise DB version 9.1D06
Client – Client Networking version 9.1D06, Actuate Reporting system.
The actual size of the database is 2.4 GB.
Number of users: 80

The problem is that the application is running very slowly.
We tested a SELECT COUNT(*) on a table, and the database only reaches about 30,000 reads per second with a single user on the machine.
When we increase the number of users, the number of record reads stays the same.
Writes never exceed 100 records per second.
Another problem is that changing the configuration (-B, -Ma, etc.) never seems to make any difference to performance. We even split the indexes from the data, but performance stays the same; the customer reports no improvement.
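As a sanity check on -B (my own back-of-envelope arithmetic; the -B values below are illustrative and not from this thread): each -B unit is one database buffer of one block, so you can estimate how much of the 2.4 GB database a given buffer pool can cache, assuming 8 KB database blocks:

```python
# Back-of-envelope check: how much of a 2.4 GB database a given -B
# buffer pool can cache, assuming 8 KB database blocks.
# (Hypothetical -B values for illustration, not from the thread.)
DB_SIZE_GB = 2.4
BLOCK_SIZE_KB = 8

def buffer_pool_coverage(b_blocks):
    """Return (pool size in MB, fraction of the DB it can cache)."""
    pool_mb = b_blocks * BLOCK_SIZE_KB / 1024
    fraction = pool_mb / (DB_SIZE_GB * 1024)
    return pool_mb, min(fraction, 1.0)

for b in (10_000, 100_000, 250_000):
    mb, frac = buffer_pool_coverage(b)
    print(f"-B {b}: {mb:.0f} MB pool, caches {frac:.0%} of the DB")
```

If -B covers only a few percent of the database, read performance is dominated by disk; but note that a large -B does nothing for the write bottleneck described above.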

The company that installed the network ran a check and says the network is fine.

The application is the same one we run at another customer with the same setup (Windows clients, Unix server: Solaris or Linux); the only difference is the configuration of the machine.

Database fragmentation is not the issue, because we recently did a dump and load.

Can somebody help me? This problem started a year ago; the people from Fujitsu Siemens analyzed the machine and changed some settings, but performance stayed the same.
Nobody knows what else to do to solve this problem.
 
To me this looks like a write-to-disk problem, probably related to the general write problem with all levels of RAID except RAID 1 (and 5 to an extent!).

All writes hit one spindle: the parity drive has to absorb every write.

Try replacing that drive with a 15K RPM one and see if it lifts the number of writes.

Better yet - replace with an industry standard RAID 1+0 solution that will scale...
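To illustrate the scaling point with standard RAID write-penalty arithmetic (textbook figures; disk counts and per-disk IOPS below are illustrative, not measured on the poster's array): a small random write costs four physical I/Os on RAID 5 (read data, read parity, write data, write parity) but only two on RAID 1/1+0.

```python
# Illustrative RAID small-write arithmetic (textbook write penalties;
# disk count and per-disk IOPS are assumed, not measured).
WRITE_PENALTY = {"RAID 0": 1, "RAID 1+0": 2, "RAID 5": 4, "RAID 6": 6}

def random_write_iops(disks, iops_per_disk, level):
    """Aggregate small random writes/s an array can sustain."""
    return disks * iops_per_disk // WRITE_PENALTY[level]

# 6 data disks at ~150 IOPS each (typical for a 10K RPM spindle):
for level in ("RAID 1+0", "RAID 5"):
    print(level, random_write_iops(6, 150, level), "writes/s")
```

The same spindles deliver roughly twice the random-write throughput in RAID 1+0 as in RAID 5, which is why it scales better for a write-heavy database.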
 

cmm

New Member
Sorry, I forgot to mention that the database is on 10K RPM disks in RAID 1+0, and the indexes are split from the data (one disk for the indexes and another for the data).
The BI file is on another disk, the same one as the indexes.

We have 3 APWs and 1 BIW; no AI is implemented.
 
So which is it, cmm?

Your first post says RAID 7 and your last says RAID 1+0...

I still believe this is an I/O problem, down to the number of writes per second the BI file can sustain.

You can try increasing -bibufs, -Mf, the BI block size, and the BI cluster size; that may help.
 

cmm

New Member
Sorry for my mistake; what I meant to say is that they have RAID across 7 disks, and I forgot to say what kind of RAID.
We have -bibufs 25, BI block size 8 KB, and BI cluster size 16 KB.

But as I said before, when we change the parameters (and we have already changed the BI block size, BI cluster size, and many others), the performance doesn't change.
 
OK, I'm still a little confused: RAID 1+0 with 7 drives... I suppose one could be a spare.

I still think that the figure of 100 is close enough to a single spindle's write speed that it makes me wonder!
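The single-spindle hunch can be checked with standard disk arithmetic (the seek figures below are typical catalog values, not measurements from this system): average service time is average seek plus half a rotation, and its reciprocal is the random IOPS ceiling.

```python
# Rough single-spindle random-IOPS estimate (standard disk arithmetic;
# the average seek times are typical catalog values, not measured here).
def spindle_iops(rpm, avg_seek_ms):
    avg_rotational_ms = 0.5 * 60_000 / rpm   # half a revolution, in ms
    service_ms = avg_seek_ms + avg_rotational_ms
    return 1000 / service_ms

print(f"10K RPM disk: ~{spindle_iops(10_000, 5.0):.0f} random IOPS")
print(f"15K RPM disk: ~{spindle_iops(15_000, 3.5):.0f} random IOPS")
```

A 10K RPM spindle tops out at roughly 125 random IOPS, so a measured ceiling of ~100 synchronous writes per second is indeed consistent with every write funneling through one disk.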

What I would be tempted to do is recreate the arrays as three mirrored pairs (RAID 1), then stripe two of them together into a single RAID 1+0 array and leave the third pair alone.

Put the data extents on the RAID1+0 array and the BI file on the RAID1 array.

Your BI cluster size looks small... How frequently does the database checkpoint under normal load? Check promon -> R&D -> 3 -> 4 and look at the checkpoint length. If it is under 300 seconds, consider raising the cluster size; if it is under 30, I would say it is a must.
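The connection between cluster size and checkpoint frequency can be sketched numerically (my own arithmetic; the ~150-byte average BI note size is an assumption for illustration, not a figure from this thread): a checkpoint is triggered roughly每 time a BI cluster fills, so a tiny cluster at 100 writes/s checkpoints almost continuously.

```python
# Rough checkpoint-interval estimate for a given BI cluster size.
# The ~150-byte average BI note size is an assumption for illustration,
# not a value from the thread.
def checkpoint_interval_s(cluster_kb, writes_per_s, avg_note_bytes=150):
    notes_per_cluster = cluster_kb * 1024 / avg_note_bytes
    return notes_per_cluster / writes_per_s

for kb in (16, 512, 8192):
    print(f"{kb} KB cluster: checkpoint roughly every "
          f"{checkpoint_interval_s(kb, 100):.0f} s")
```

By this estimate a 16 KB cluster checkpoints about once a second at the reported write rate, which alone could explain the flat write ceiling; a larger cluster spreads checkpoints out by orders of magnitude.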

You don't mention other databases.....
 