Temp files in a RAM disk [Answered]

ron

Member
At a previous company where I used Solaris I put the client temp files in the /tmp directory (effectively RAM) and that gave a positive performance boost.

I'm now on an AIX system (OE 10.2B on AIX 7.1) and there is a "lot" of spare memory (over 20GB). I have tested creating a RAM disk and putting client temp files in there - and Progress is quite happy with that. We have a mixture of character and GUI clients. Character clients put their temp files in /work1/tmp, and the AppServers put their temp files in /work1/tmpas.

My basic plan is to create two RAM disks and mount one over each of the directories as mount points.
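For reference, this is roughly what that plan looks like on AIX 7.1, using IBM's standard RAM disk procedure. The sizes and device numbers here are assumptions for illustration (mkramdisk prints the actual device name it creates), and this needs root:

```shell
# Create a RAM disk for the character-client temp directory.
# 2G is an illustrative size, not a recommendation.
mkramdisk 2G                        # prints the device created, e.g. /dev/rramdisk0
mkfs -V jfs2 /dev/ramdisk0          # build a JFS2 filesystem on it
mount -V jfs2 -o log=NULL /dev/ramdisk0 /work1/tmp

# Repeat for the AppServer temp directory:
mkramdisk 2G                        # e.g. /dev/rramdisk1
mkfs -V jfs2 /dev/ramdisk1
mount -V jfs2 -o log=NULL /dev/ramdisk1 /work1/tmpas
```

Note that, per normal Unix semantics, mounting over a directory hides the entries underneath it but does not invalidate file descriptors already open on those underlying files - whether every Progress client tolerates having its temp files hidden that way is exactly the open question here.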

There are users right around the world, so there is never a time when no users are accessing the system - hence there are always open temp files in the two directories.

So ... to my question: if I mount a RAM disk over the existing directories, what will happen to the clients that already have temp files in existence? Will they crash? Does anyone know? I've done considerable searching, but I can't find any references to this particular situation.

Please note: Modifying the parameter files to change to a different directory is possible - but not easy (many instances).

Ron.
 

cj_brandt

Active Member
If a client's temp file is lost or destroyed, its session will end and the server will back out any open transactions it had.

The client temp file that usually gets the most activity is the DBI file, which holds the temp-tables. There is a -Bt parameter that allocates memory to hold the client's DBI data in memory.

Because we use -mmax and -Bt, we have little temp file activity on disk, but every application is different.
 

TomBascom

Curmudgeon
Wherever possible it is better to deploy the memory directly to the buffers that temp files overflow into. That makes the "path length" to the data shorter, which means less latency on data access (and no context switch for read() & write() operations).

So increase -Bt, -mmax, -l and the like first.
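As a sketch of what that tuning might look like in a character-client startup, with illustrative values only (these are assumptions to show the shape, not recommendations - the DBI cache works out to roughly -Bt × -tmpbsize KB of memory per client):

```shell
# Illustrative OpenEdge client startup - tune values against your own
# temp-file activity; client.pf and the paths are assumptions.
_progres -pf client.pf \
    -Bt 5000        `# temp-table (DBI) buffers held in client memory` \
    -tmpbsize 4     `# temp-table block size in KB` \
    -mmax 8192      `# maximum r-code execution buffer, in KB` \
    -l 800          `# local buffer size` \
    -T /work1/tmp   `# directory for any temp files that still spill to disk`
```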

A modern OS like AIX also has a very effective file caching subsystem. Many of the old-school techniques for improving performance (such as RAM disks) are ineffective or even counter-productive. If a RAM disk actually makes a positive difference my first reaction would be to wonder why? Some parameter is probably set very wrong.
 

ron

Member
I wasn't aware of the -Bt startup parameter; thank you for that. But I don't think that helps us. We recently upgraded our server - and we have "lots" of memory. If we "could" - we'd increase -B considerably - but we can't because we're stuck with a 32-bit version of OpenEdge (10.2B). We can't move to 64-bit without assistance from our vendor. We continue to apply pressure - but that situation is unlikely to change anytime soon. So - I was looking for some other ways to temporarily make use of all the memory we have. Yes - AIX is very good with disk caching. We are about to test to see whether a RAM disk does or does not impact performance for jobs that make heavy use of temp-tables.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Why would your vendor want to prevent you from swapping your 32-bit RDBMS and client licenses for 64-bit? There's no charge, and it shouldn't have any application-compatibility implications. You would just install the new licenses and recompile. Obviously you would do this first on a test box to ensure there are no unanticipated issues.

What is their opposition? Make them give you a technical reason.

Also, do you know that your clients won't benefit from a change in their startup parameters? Are they doing temp file I/O? If so then tuning could help to reduce it. On the other hand if they are not doing temp file I/O then why were you looking at RAM disks?
 

TomBascom

Curmudgeon
There are only three ways that I can think of that a vendor could stop them:

1) Distribute only r-code.
2) Require the use of a 32-bit external shared library.
3) Big-time FUD in the executive suite.
 

ron

Member
Yes - there is quite a lot of temp file activity.

The "issue" about changing to 64-bit (which we physically have) is that our management will not allow the change unless the vendor has given an assurance that they have fully tested their application under 64-bit. Why doesn't the vendor provide the requested assurances? Because they appear to have other higher priority issues to deal with. A second reason is that they have lost many of their good technical people! (A very difficult situation!)

That will no doubt seem like a pretty silly reason - but the main point is that the issue is entirely out of my control. It will eventually happen - but I don't expect it will happen for quite a while.

Technically, of course, I don't have any doubt at all that if we just went ahead and changed the version of Progress - everything would "just work".

(The vendor provides all source code - encrypted.)
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
So you have source and you have 64-bit binaries? From a technical perspective, you can probably run 64-bit. Now it sounds more like your own management is the roadblock, rather than your vendor.

You can at least build a 64-bit version of the application on a test box as a proof of concept.

That aside, with "quite a lot of temp file activity" you should look at trying to eliminate some of it if you can. You can do that today on 32-bit.
 

cj_brandt

Active Member
I don't understand why you don't think -Bt would help you if you have problems with temp file activity. It doesn't matter whether you are using 32-bit or 64-bit; you can allocate -Bt buffers either way.

The client parameter -t lets you see the temp files in the current working directory, or wherever you have -T pointing.
If you have large DBI files, then -Bt can help you.
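A quick way to measure this, as a sketch (the .pf file and path here are assumptions): start a session with -t so the temp files stay visible, then watch the DBI files while a temp-table-heavy job runs.

```shell
# Start a character client with visible temp files in /work1/tmp:
_progres -pf client.pf -t -T /work1/tmp &

# While the session runs, watch how large the DBI files grow -
# large DBI files suggest -Bt is worth raising:
ls -l /work1/tmp/DBI*
```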
 