bkioWrite:Insufficient disk space during write

ADBA

New Member
Does anyone know what can cause this error:

22:50:24 Usr 9: bkioWrite:Insufficient disk space during write, fd 448, len 16384, offset 153552, file /allp/db/data00/allpedi.d11. (9450)
22:50:24 APW 6: Stopped. (2520)
22:50:24 BROKER 0: Begin ABNORMAL shutdown code 2 (2249)
22:50:27 WDOG 7: Stopped. (2520)
22:50:28 BROKER : Removed shared memory with segment_id: 65538
22:50:28 BROKER : Multi-user session end. (334)

allpedi.d11 is a variable extent, but it's only at 600 MB. This is a multi-volume database on Linux that supports files up to 2 GB.
 
vinod_home said:
Something should have hit the 2GB limit.
Most likely, but not necessarily.

Error 9450 can occur for various reasons, including the above and:

Running 3rd party processes against the db files (eg. backup utils/antivirus/defrag)
Intermittent hardware problems
Lack of OS resources

There are several entries for this error in the knowledgebase.
If you do not get an answer here, try dba@peg.com
 

ADBA

New Member
I did not see anything in the knowledge base that is relevant to this error other than hitting the 2 GB limit. I know that is not the case here, as the file is well below the limit.

thank you
 

methyl

Member
In the error message "offset" would be 2147467264 (i.e. fractionally less than 2 GB) if you had hit the 2 GB limit. The file descriptor "fd 448" seems very high - but I don't know what is normal on Linux.

A suggestion:
Progress opens every file in the database structure for every local user. Count the number of lines in the structure file (<dbname>.st) for every database which will be opened by a typical user session. Add an arbitrary 10 to the result to allow for shell etc. Multiply this number by the number of users.
Compare the result with the kernel limit for maximum open files.
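
A rough way to do that arithmetic on Linux (just a sketch - the .st path and the user count below are made-up examples, adjust for your setup):

wc -l < /path/to/allpedi.st      # extents Progress opens per local connection
# add ~10 per session for shell, log and library handles, e.g. 11 + 10 = 21
ulimit -n                        # per-process limit; compare with the per-session figure
# multiply the per-session figure by the number of users, e.g. 21 * 50 = 1050
cat /proc/sys/fs/file-max        # system-wide limit; compare with that total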
 

ADBA

New Member
"The file descriptor "fd 448" seems very high" - but what does it mean, hight for what? what is "448"

I'll try the suggestion, but I'm still pretty new to Progress and Linux, so we'll see.

One thing that I found strange is the db .st file, which has no data area extents but has multiple schema area extents:

d "Schema Area":6,32 /allp/db06/allpedi.d1 f 524288
d "Schema Area":6,32 /allp/db07/allpedi.d2 f 524288
d "Schema Area":6,32 /allp/db06/allpedi.d3 f 524288
d "Schema Area":6,32 /allp/db07/allpedi.d4 f 524288
d "Schema Area":6,32 /allp/db06/allpedi.d5 f 524288
d "Schema Area":6,32 /allp/db07/allpedi.d6 f 524288
d "Schema Area":6,32 /allp/db06/allpedi.d7 f 524288
d "Schema Area":6,32 /allp/db07/allpedi.d8 f 524288
d "Schema Area":6,32 /allp/db06/allpedi.d9 f 524288
d "Schema Area":6,32 /allp/db07/allpedi.d10 f 524288
d "Schema Area":6,32 /allp/db/data00/allpedi.d11

allpedi.d11 is the one referenced in the error.
Could this cause a problem, or is this not a big deal?


thank you
 

methyl

Member
Needs a DBA with detailed knowledge of Areas.

Please do check disc space in the disc partition containing allpedi.d11:
cd /allp/db/data00/
df -k . # i.e. df<space><hyphen>k<space><period>
 
I recall from my very limited knowledge of Unix that a maximum file size can be set on a per-user basis (ulimit), so presumably you need to check that for the user/process accessing the file that is causing the problem, not just check at the OS level.
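
A sketch of that check (run it as the OS user that starts the broker and the client sessions):

ulimit -a     # all per-user limits for the current shell
ulimit -f     # maximum file size this user may write; "unlimited" is what you want,
              # or at least a value comfortably above your largest extent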

As has been suggested, you may need to talk to a DBA with more experience in this particular area, or try Progress tech support, or use the senior Progress forum I directed you to earlier:

http://www.peg.com/lists/dba/web/
 
temp work area

bkioWrite:Insufficient disk space during write

I had something like this, and it was due to the client NOT setting the temp directory of the session with the -T parameter. There wasn't enough space in the local area for the DB sort files. I had the client use the standard setting for the -T flag and the problem went away. The standard setting is a particular directory used for DB sorting/working activity.
 

ADBA

New Member
Lee, thank you for your suggestion, but I don't think the file size limit is a problem here. I checked the user limits and they are set to over 2 GB.

Bruce, I think what you are saying about the temp space of the client session may be what is causing this. Can you please be more specific about setting the temp directory and the -T parameter? How do I check the current setting, and where do I change it? Can you please point me to the right place, or at least tell me where to find more info on this and what to search for?

Thank you
 
-T parameter

sure.
Generally, when a client session starts up it uses a *.pf file. The *.pf file contains the connection parameters of the client session. On an SX.e system you see something like
$DLC/bin/mpro -pf /rd/opsys/client.pf -p li.p
in a script that is run from the client's .profile file in their home directory.
One of the client.pf parameters is normally the -T flag, which tells that session where to put Progress sort files. Otherwise, if everything is started by hand, you have a statement like
$DLC/bin/mpro -db /someplace/database -s 100 -mmax 1200 -p programname

and it is missing the -T parameter. Either on the command line or in the *.pf file you have to add the -T parameter. The syntax is "-T /large/filesystem/dir". This tells Progress to use that file system for sorting purposes. If the user runs a really large query it could need lots of space; if it doesn't have it, it gets that error.
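
For example, a client.pf might end up looking like this (the paths and values are made-up, not your actual settings; # starts a comment in a .pf file):

# client.pf -- connection parameters for the client session
-db /someplace/database
-s 100
-mmax 1200
-T /large/filesystem/dir   # Progress sort/temp files go here

and the session is then started with something like

$DLC/bin/mpro -pf /rd/opsys/client.pf -p programname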

hope that helps.
 

TomBascom

Curmudgeon
A lot of times people don't realize how much space -T files are taking up because temp files are, by default, stored "unlinked". That means that they are invisible -- they don't show in an "ls" (although they are counted by "df"). You can make them visible by adding -t (lower case).

Best practice is to specify a temp directory that is distinct from the database directory with -T and to make the files visible with -t like so:

mpro dbname -T /protemp -t

You'll need to occasionally clean up stale temp files though. The main advantage of the default setting is that temp files from sessions that end abnormally are automatically cleaned up -- which is why a "df" after a shutdown shows lots more free space than one taken while the db is running (giving you the impression that you always had lots of free space, if you didn't happen to run df just before the problem).
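
If you want to see how much space the invisible (unlinked) temp files are using right now, without restarting with -t, something like this should work on Linux (a sketch, reusing the /protemp example path from above):

lsof +L1 | grep /protemp     # open-but-unlinked files, with their sizes
ls -l /protemp               # with -t in effect the files show up here instead

Stale visible temp files (typically named something like DBI*, srt*, lbi* -- check what your version actually creates) can be deleted once you're sure no live session still has them open.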
 

TomBascom

Curmudgeon
ADBA said:
"The file descriptor "fd 448" seems very high" - but what does it mean, hight for what? what is "448"
It's the 448th open file handle for that process. Which is a lot. How many databases is this session connected to? If the .st file below is from the only connected db you ought to be seeing a file descriptor a lot closer to 11 -- maybe as high as 20 but not 448.
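
A quick way to sanity check that on Linux (a sketch; <pid> stands for the process id of one client session, looked up first):

ps -ef | grep _progres        # find the pid of a self-service client session
ls /proc/<pid>/fd | wc -l     # how many descriptors it really has open
ulimit -n                     # per-process open file limit in that user's shell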

One thing that I found strange is the db .st file, which has no data area extents but has multiple schema area extents:

d "Schema Area":6,32 /allp/db06/allpedi.d1 f 524288
d "Schema Area":6,32 /allp/db07/allpedi.d2 f 524288
d "Schema Area":6,32 /allp/db06/allpedi.d3 f 524288
d "Schema Area":6,32 /allp/db07/allpedi.d4 f 524288
d "Schema Area":6,32 /allp/db06/allpedi.d5 f 524288
d "Schema Area":6,32 /allp/db07/allpedi.d6 f 524288
d "Schema Area":6,32 /allp/db06/allpedi.d7 f 524288
d "Schema Area":6,32 /allp/db07/allpedi.d8 f 524288
d "Schema Area":6,32 /allp/db06/allpedi.d9 f 524288
d "Schema Area":6,32 /allp/db07/allpedi.d10 f 524288
d "Schema Area":6,32 /allp/db/data00/allpedi.d11

allpedi.d11 is the one referenced in the error.
Could this cause a problem, or is this not a big deal?

allpedi.d11 is a variable extent, so it makes sense that you could run out of space while growing it. The 16K size of the write operation and the indicated offset are also consistent with growing an extent. 2 GB limits have nothing to do with this error. 3rd party tools might, but it is Linux, so the usual suspects (virus scanners and Windows backup tools) aren't very likely. The -T theory makes as much sense as anything right now. You might also try looking in the system logs to see if anything peculiar shows up...
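
For the log and space angle, something along these lines (a sketch; log file names vary by distro, and /protemp is just an example -T directory):

df -k /allp/db/data00             # free space in the filesystem holding allpedi.d11
df -k /protemp                    # ...and in whatever -T points at
dmesg | tail -50                  # recent kernel messages (I/O errors etc.)
grep -i error /var/log/messages   # or /var/log/syslog, depending on distro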

BTW -- the schema area should not be managed in this way. It should only hold the schema. Additional areas should be created for the data and indexes. But that is another topic and not actually related to your current problem.
 