drewser001
New Member
Hi all,
I have been working very hard in an attempt to migrate a production Progress 9.1D database with 600+ users from Ultra320 SCSI disks to newer solid state SAS drives, but have been running into performance problems.
The test server has 12GB of DDR2 RAM and two dual-core (or single-core with Hyper-Threading) Xeon CPUs.
The DB & BI are running on the same RAID 1 pair of spindles (Ultra320 SCSI, 15K drives)
The DB parameters are:
db buffer blocks = 190000
num lock table entries = 707778
spin lock retries = 50000
maximum buffers = 25
servers = 207
message buffer size = 4096
BI parameters are:
buffers = 25
bi threshold = 3
delay writes = 3
cluster age = 60
BI block size = 256K
BI size = 25GB
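For reference, here is how I believe these map onto the usual 9.1D startup switches in a .pf file — the switch names are my own mapping from the docs, so please correct me if any are off (I'm least sure about "maximum buffers = 25"):

```
# Assumed mapping to Progress 9.1D startup parameters (switch names
# from the admin docs; values are the ones listed above).
-B 190000       # database buffer blocks
-L 707778       # lock table entries
-spin 50000     # spin lock retries
-Mn 207         # servers
-Mm 4096        # message buffer size
-bibufs 25      # BI buffers
-bithold 3      # BI threshold (units as configured)
-Mf 3           # delayed BI writes
-G 60           # BI cluster age (seconds)
# BI block size and BI size are set offline, something like:
#   proutil dbname -C truncate bi -biblocksize <KB> -bi <KB>
```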
The Windows partition is set with the default 4K allocation unit, and the RAID 1 set has a 64K stripe size.
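To frame the geometry question, here is the quick back-of-the-envelope I did (plain arithmetic in Python, using the sizes above):

```python
# How a single 256K BI write interacts with the partition/RAID geometry.
# Pure arithmetic on the sizes listed above -- no disk access.
bi_block = 256 * 1024   # BI block size
stripe   = 64 * 1024    # RAID 1 stripe size
cluster  = 4 * 1024     # NTFS allocation unit (default 4K)

print(bi_block // stripe)    # stripes touched per BI write
print(bi_block // cluster)   # file-system clusters per BI write
```

So every BI write spans 4 stripes and 64 file-system clusters, which is what makes me wonder whether the stripe size and allocation unit should be raised to match, or the BI block size lowered instead.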
So far, the SCSI disks are outperforming the SSDs, though not by much. I expected better performance from these drives than what I am getting. The SSDs are enterprise-class drives and have been benchmarked on this server; the benchmarks blow the SCSI disks out of the water, but running the app/db on them tells a different story.
What can I do to this setup to increase performance (other than separating the DB and BI onto different RAID sets, which has already been tried, to some degree)?
With a 256K block size for the BI file, would it be better to initialize a separate RAID 1 with a larger stripe size and set up the Windows partition with a larger allocation unit size as well? Does that help performance at all?
Or would I be better off with a smaller BI block size, since the SSDs are designed for small random read/write sizes (4K and 8K) and don't suffer from spindle/seek latency?