Error: DB extent has reached the 80% threshold for block usage

TomBascom

Curmudgeon
I ran into a database last weekend that had everything in a single large .d1 file (yes, the schema area) of 46 GB.

Their configuration is now somewhat different ;)
 

Cringer

ProgressTalk.com Moderator
Staff member
Pretty certain addonline isn't in V9, but it's a vague recollection more than anything.
 

NewDBA

New Member
Thanks for your replies.

This is indeed a monster transaction, as approximately 15,000,000 records are being moved from the old DB to this new one.

I checked the DB creation log properly and saw message 9453: "Large database file access is already enabled for database."

So does this mean that, if needed, I can increase the BI file size limit beyond 2 GB using "proquiet"?
Are there any consequences of doing that?
 

Cringer

ProgressTalk.com Moderator
Staff member
The code really should not be doing that as one transaction. You'll likely be hitting lock table limits and who knows what else, and blowing up your BI unnecessarily. The code should be written to write one record at a time within a transaction.
Why would you want to increase the BI file size limit if you've got a variable extent? Just leave it to grow, and then when you truncate the BI afterwards it'll drop back down.
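Something along these lines, as a rough sketch (oldOrder and newOrder are placeholders for your source and target tables):

/* Each iteration of the FOR EACH is its own transaction, so only the */
/* undo notes for a single record are open in the BI at any one time. */
FOR EACH oldOrder NO-LOCK TRANSACTION:
    CREATE newOrder.
    BUFFER-COPY oldOrder TO newOrder.
END.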
 

NewDBA

New Member
The reason for increasing the size limit is that the DB startup parameters include -bistall.

The DB stalled at 750 MB. I increased the threshold to 1.5 GB and then to 1.9 GB.

The transaction will be started in an hour and the BI size may grow beyond 1.9 GB.

So I wanted to be sure whether I can increase the size limit to 2.5 or 3 GB so that the transaction can complete, and whether there are any consequences of doing that.
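If it matters, the command I have in mind is something like this (the database name is a placeholder and the value is in MB; please correct me if the bithreshold syntax differs on this release):

proquiet mydb bithreshold 3072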
 

TomBascom

Curmudgeon
"The transaction will be..." so it is not too late to fix it?

You really should fix it. Letting it try to do it as a monster transaction will almost certainly lead to agony. You will probably spend hours and hours grinding away and then have the whole thing blow up due to some oddball error or unanticipated issue -- like blowing up the lock table.

And when it blows up you need to go through crash recovery and back out the partially completed transaction. Which will take even longer. And use even more disk space.
 

TomBascom

Curmudgeon
The code should be written to write one record at a time within a transaction.

When CREATE-ing or UPDATE-ing large numbers of records in a batch environment, committing in modestly sized groups of 100 to 1,000 records is actually much faster.

DELETE operations are fastest if they are done one record at a time and single-threaded.

Small "atomic" commits of CREATEs and UPDATEs are the way to go when writing code that involves UI. But even then the bigger issue is making sure that there is no user-interface activity within the scope of the commit.
 

TheMadDBA

Active Member
Agree with Tom on both of his last posts.

Also, you need to start finding out why your BI is growing that fast; something is rotten in Denmark.
 