Question: Maximum Database Connections

TomBascom

Curmudgeon
Not having after-imaging enabled is suicidal. Can you afford to throw away a full day of data? (How long between backups?)
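
For reference, enabling AI is roughly a three-step affair (a sketch only; the ai.st file name and paths here are made up):

Code:
# 1. Add AI extents described in an add-on structure file
prostrct add mfgsys ai.st

# 2. AI must be enabled from a point of known consistency, so back up first
probkup mfgsys /backups/mfgsys.bck

# 3. Switch after-imaging on
rfutil mfgsys -C aimage begin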
 

TomBascom

Curmudgeon
10.2B06 and better also have some very useful improvements to the client/server network protocol that those 39 remote connections might benefit from.
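
For what it's worth, those protocol improvements surface as server startup parameters. A sketch only; the values here are illustrative, not recommendations:

Code:
# -Mm raises the client/server message size; the -prefetch* options
# (new in 10.2B06) control how the server packs records into each message
proserve mfgsys -S 20000 -Mm 8192 -prefetchNumRecs 100 -prefetchDelay -prefetchFactor 90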
 

TomBascom

Curmudgeon
What does mfgsys.st look like? ("prostrct list mfgsys" to get a fresh copy...)

Let me guess... all your data is in the schema area?
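
If it is, the structure file will look something like this (paths illustrative):

Code:
b /db/mfgsys.b1
d "Schema Area":6,32;1 /db/mfgsys.d1 f 1024000
d "Schema Area":6,32;1 /db/mfgsys.d2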
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
You'd be a lot better off with one full backup daily and after-imaging enabled, with an AI extent switch interval chosen by the business. It's up to them to determine how much data loss is acceptable.

A lock table of 4 million entries? That suggests some crappy code got into production at some point and someone asked if the DBA could "temporarily" increase -L until the code can be fixed.

I see the SQL broker is your primary login broker. You have 140 servers; are there really that many SQL clients? If so, you should definitely have a process for refreshing the SQL statistics on a regular basis.

And as for the structure... ouch. You really need to structure the database properly, with Type II storage areas. And add AI extents and enable after-imaging while you're at it. Restructuring will involve a dump and load, and it will be a lot faster if you are on the latest service pack of 10.2B. But even if you can't upgrade, you still need to restructure.
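
To make "structured properly" concrete: in a Type II area the cluster size is 8, 64, or 512 blocks, and data and indexes live in their own areas. A sketch of a target mfgsys.st (area names, numbers, rows-per-block, and paths are hypothetical and need sizing for your actual data):

Code:
b /db/mfgsys.b1
d "Schema Area":6,32;1 /db/mfgsys.d1
d "Data":7,64;512 /db/mfgsys_7.d1
d "Index":8,32;8 /db/mfgsys_8.d1
a /db/ai/mfgsys.a1
a /db/ai/mfgsys.a2

The dump and load itself can use the binary utilities, per table ("customer" below is a placeholder table name):

Code:
proutil mfgsys -C dump customer /dump
proutil newdb -C load /dump/customer.bd
proutil newdb -C idxbuild all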
 
You have a 64 bit server. You should get 64 bit OpenEdge to go with it.
Sorry, we are running 64-bit OpenEdge.

I thought you said that you have 80 cores?
Yes, we do now. We originally had a 48-core box running our database and our software vendor told us that the server wasn't powerful enough.

Has anyone ever run "update statistics"?
I'm not familiar with this.

You should /start/ by changing it to 16 MB. Then see how far apart your checkpoints are. If they are under 1 minute apart during busy periods, make it larger.
How do I do this?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
You change the BI cluster size and/or BI block size with proutil <dbname> -C truncate bi -biblocksize <BI block size in KB> -bi <BI cluster size in KB>.

Try 16 MB for BI cluster size (16384 KB) and 16 KB for BI block size.

This is done with the DB offline.
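
Concretely, with mfgsys stopped:

Code:
# 16 KB BI blocks, 16 MB (16384 KB) BI clusters
proutil mfgsys -C truncate bi -biblocksize 16 -bi 16384

Then watch how far apart your checkpoints land during a busy period (promon's Checkpoints display under the R&D menus keeps a short history) and adjust from there.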
 
You change the BI cluster size and/or BI block size with proutil <dbname> -C truncate bi -biblocksize <BI block size in KB> -bi <BI cluster size in KB>.

Try 16 MB for BI cluster size (16384 KB) and 16 KB for BI block size.

This is done with the DB offline.
Thanks. I will make this change.

I'm thinking of doubling the buffer pool to 1,000,000. Is that significant enough?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
To update SQL statistics, you use the 4GL to generate a SQL script that updates the metadata for each application table. Then you use your SQL client of choice, e.g. sqlexp, to execute the script. This KB article has info on how to do it.

I would suggest slightly altering the code in the KB article, so the COMMIT statement is within the FOR block. Like this:

Code:
output to "updatestatistics.sql".

/* _tbl-type = "T" limits this to application tables (no system tables) */
for each _file no-lock where _tbl-type = "T":
  put unformatted "update table statistics and index statistics and all column statistics for PUB." _file-name ";" skip.
  put unformatted "commit work;" skip.
end.

output close.
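
For the execution step itself, the sqlexp call looks something like this (host, port, and credentials are placeholders):

Code:
sqlexp -db mfgsys -H dbserver -S 20001 -user sysprogress -password secret -infile updatestatistics.sql -outfile updatestatistics.log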
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Thanks. I will make this change.

I'm thinking of doubling the buffer pool to 1,000,000. Is that significant enough?

That only gives you 4 GB of buffer pool, double what you have now. That's not a very significant change.

As Tom said, I/O gains follow an inverse-square law. If you want to reduce disk I/O by a factor of 2, increase buffer pool by a factor of 4; decrease I/O by 3x, increase -B by 9x, etc.
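
To put numbers on that rule (assuming a 4 KB database block size):

Code:
today:       -B   500,000  x 4 KB ≈  2 GB   (baseline I/O)
halve I/O:   -B 2,000,000  x 4 KB ≈  8 GB   (4x the buffers)
I/O down 3x: -B 4,500,000  x 4 KB ≈ 18 GB   (9x the buffers)

-B is just a broker startup parameter, e.g. proserve mfgsys -B 2000000, so the change itself is trivial; the constraint is physical memory.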
 
If you have successfully avoided disk IO you might then have a CPU bottleneck. Having lots and lots of cores is not helpful to databases. In fact it is harmful once you eliminate the other bottlenecks. This is known as the "too many cores problem".

How could I tell if this is an issue? Would we be smarter to go with 40 cores instead of hyper-threading to 80?
 
To update SQL statistics, you use the 4GL to generate a SQL script that updates the metadata for each application table. Then you use your SQL client of choice, e.g. sqlexp, to execute the script. This KB article has info on how to do it.

I would suggest slightly altering the code in the KB article, so the COMMIT statement is within the FOR block. Like this:

Code:
output to "updatestatistics.sql".

/* _tbl-type = "T" limits this to application tables (no system tables) */
for each _file no-lock where _tbl-type = "T":
  put unformatted "update table statistics and index statistics and all column statistics for PUB." _file-name ";" skip.
  put unformatted "commit work;" skip.
end.

output close.

Okay, I've created this .sql file. It looks like it needs to run while the database is online. Is that true? It doesn't change the indices, just optimizes them, correct?
 

TomBascom

Curmudgeon
You can update statistics online. I usually wait until a quiet time so as not to bother users too much. That generally means at night or on a weekend but you are better positioned to know what is acceptable to your users.
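
If you want that hands-off, a weekly cron entry along these lines does it (the wrapper script and paths are hypothetical):

Code:
# refresh SQL statistics every Sunday at 02:00
0 2 * * 0  /scripts/update_stats.sh >> /logs/update_stats.log 2>&1

where update_stats.sh just runs the sqlexp command with -infile updatestatistics.sql.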
 

TomBascom

Curmudgeon
How could I tell if this is an issue?

Sample the "performance indicators" screen.

You're interested in latch timeouts. If you only have a few dozen per second it's no big deal. A few hundred and it's time to start keeping an eye on it. A thousand or more and you probably have a problem.

Would we be smarter to go with 40 cores instead of hyper-threading to 80?

In my experience it doesn't matter much if you hyper-thread or not.

But if you can sell about half of those cores back to the vendor you could probably find better uses for the money.
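
If you'd rather script the latch check than eyeball promon, here is a rough 4GL sketch against the _Latch VST (field semantics vary a bit by release; treat this as illustrative):

Code:
/* sum latch waits, sleep 10 seconds, sum again, report the rate */
define variable w1 as int64 no-undo.
define variable w2 as int64 no-undo.

for each _Latch no-lock:
  w1 = w1 + _Latch-Wait.
end.

pause 10 no-message.

for each _Latch no-lock:
  w2 = w2 + _Latch-Wait.
end.

display (w2 - w1) / 10 label "latch waits/sec".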
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Okay, I've created this .sql file. It looks like it needs to run while the database is online. Is that true? It doesn't change the indices, just optimizes them, correct?

It doesn't change indexes. It populates SQL catalog tables with statistics (metadata) about the application tables, columns, and indexes. The metadata is used by the SQL-92 query optimizer.
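
You can see what it wrote by querying the SYSPROGRESS catalog tables, e.g. (illustrative queries; table names per the OpenEdge SQL docs):

Code:
-- row counts gathered by UPDATE TABLE STATISTICS
SELECT * FROM sysprogress.SYSTBLSTAT;

-- per-column and per-index statistics used by the SQL-92 optimizer
SELECT * FROM sysprogress.SYSCOLSTAT;
SELECT * FROM sysprogress.SYSIDXSTAT;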
 