Optimising client connections over a remote network

D.Cook

Member
Our Progress client application is pretty data-intensive, and therefore network-intensive. Generally this isn't a big issue for local networks, but when clients want to deploy the application over a remote network, they sometimes run into serious performance problems.
I'm talking about capable fibre links that are reported to be only a small percentage utilised. So I wonder if anybody else has had to deal with this, and whether they have any tips for diagnosing the bottleneck.

Is Progress network traffic different from other types (like HTTP, SMB, etc.)?
Where should a network administrator be looking to determine the bottleneck?
And are there any recommendations on client startup parameters for this kind of environment?

I'm aware that we should be trying to 'thin' down our 'fat' clients; that is something we are working on in the long term.
 

D.Cook

Member
It turns out this was a completely separate issue -- thanks to the users for telling me about the error messages they were receiving.

Nevertheless, any tips are greatly appreciated.

In case anyone's looking, here are a few client parameters I've found that may help, though I wouldn't expect big changes (a rough example startup file follows the list):
-l (Local buffer size)
-mmax (Maximum memory for r-code) - might be relevant if r-code is stored on a network file share
-Mm (Message buffer size)
-cache (Specify a cache file that holds the database schema)
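
For what it's worth, here is a rough sketch of how those might sit together in a client parameter (.pf) file -- the names and values below are placeholders, not recommendations, so test against your own workload:

    # client.pf -- hypothetical remote-client startup parameters
    # remote connection (db name, host and service are placeholders)
    -db appdb -H dbhost -S 20100
    # -l: local buffer size (KB)
    -l 2000
    # -mmax: maximum memory for r-code (KB)
    -mmax 8192
    # -Mm: message buffer size in bytes (see the later notes about matching it on the server)
    -Mm 8192
    # -cache: pre-built local schema cache file
    -cache schema.csh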
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
I'm not an expert on this, as only some of my clients connect their fat clients across a WAN, but I'll throw in my $0.02.

Look at the quick request (-q) client session parameter. This will mean that you only search the propath on the first use of a procedure, so it cuts down on chat between client and server (assuming your code is server-side). Note that it can have implications for your code delivery, depending on how and when this happens. Clients will have to restart to pick up new code. Even on a LAN, this is a good one to use.
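
For example, a minimal sketch (the executable and the .pf name are whatever your install and shortcut use):

    # Windows GUI client startup with quick request enabled
    prowin32 -pf client.pf -q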

The -cache client connection parameter causes the client to read the DB schema from a local cache file rather than talking to the DB. You have to build this file before using the parameter, and rebuild it when your schema changes. I haven't used it personally, but the docs say it can help with a WAN topology.
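
For what it's worth, the cache file can be built from an ABL session that is already connected to the database, roughly like this (the database and file names are just examples):

    /* run once from a connected session to build the schema cache file */
    SAVE CACHE COMPLETE appdb TO "schema.csh".
    /* remote clients then connect with: -cache schema.csh */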

There is a message compression client session parameter (-mc). Again, I haven't played with it, but it could help. Obviously, test it first in your environment to see if the cost of compression overhead is offset by the throughput you gain.

You can change the default message buffer size for remote clients with the -Mm parameter. Unlike most parameters, it has to be set on both the client and the server to take effect, and if you set it for the database, all of its remote clients have to use the same value. Again, try it out; it might reduce the number of round trips for queries.
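
A sketch of what that looks like (the host, port and the 8192 value are illustrative only):

    # server side: start the database broker with the larger message buffer
    proserve appdb -H dbhost -S 20100 -Mm 8192
    # client side: remote clients connect with the same value
    prowin32 -pf client.pf -Mm 8192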
 

D.Cook

Member
Thanks Rob, I haven't come across the -q and -mc parameters before; they look useful.
The majority of our code is stored on a network location that is the last of several propath entries, so -q should be useful in our situation.

I've experimented with -cache in the past, but there doesn't seem to be any real advantage.
 

TomBascom

Curmudgeon
-cache mostly only makes a difference in the startup phase. (And it may not have much impact on decently modern releases and hardware.)

-Mm is probably the most significant possibility for a "fat client". It will collect multiple NO-LOCK records into a larger unit for transfer to the client, which can have a very noticeable impact on that kind of network-intensive read activity. Try -Mm 8192 for starters.

-mc currently only applies to certain app server connections, although I think I heard that it is being expanded to cover more connection types in OE11.

Keep in mind that traditional fat clients pool connections at the server via the -Mn, -Mpb, -Ma and -Mi parameters. There can be significant benefits to thinking about how best to spread the load among your "remote servers". I recommend that you err on the side of a large number of servers with a fairly small number of connections per server (2 or 3 at the most). -Mi 1 will "round robin" connections among the servers. If you can establish sensible subsets of users consider creating multiple login brokers -- you might, for instance, split the accounting people away from the warehouse people if accounting tends to run lots of large inquiries while warehouse does lots of short bursts.
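
To make that concrete, here is a rough sketch of one database with two login brokers and small per-server connection counts -- the ports, names and numbers are made up, and as far as I recall -m3 is what starts a secondary login broker:

    # primary login broker, e.g. for the accounting users
    proserve appdb -H dbhost -S 20100 -Mn 20 -Mpb 8 -Ma 3 -Mi 1
    # secondary login broker on its own port, e.g. for the warehouse users
    proserve appdb -S 20101 -m3 -Mpb 8 -Ma 3 -Mi 1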

Shorten the PROPATH and, if possible, re-order it to put the most used stuff up front. Don't put the kitchen sink in there "just in case". Think it through and have a reason for each element and its order. An amazing amount of time goes into searching poorly organized paths (the same applies at the OS level).
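
As a sketch of the idea (directory and library names are invented), the heavily used entries go first and the network share goes last, if it stays at all:

    /* set at client startup: most-used prolibs first, wide network shares last */
    PROPATH = "local/app.pl,local/patches," + PROPATH.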

Organize your r-code into prolibs.
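
The PROLIB utility does the packaging; something along these lines (library and file names are just examples):

    # build a procedure library from compiled r-code
    prolib app.pl -create
    prolib app.pl -add *.r
    # then reference app.pl on the PROPATH instead of the loose .r files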

On the code side, make sure that you are using NO-LOCK when possible and not defaulting to SHARE-LOCK. Explicitly request EXCLUSIVE-LOCK when needed so that you avoid needless lock-upgrade requests going over the wire. Look into using field lists if that makes sense for your application.
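
Roughly what that looks like in code (table and field names are invented for illustration):

    /* read-only browse: NO-LOCK plus a field list keeps the network payload small */
    FOR EACH customer FIELDS (custnum name balance) NO-LOCK:
        DISPLAY custnum name balance.
    END.

    /* update path: ask for EXCLUSIVE-LOCK up front rather than upgrading a SHARE-LOCK */
    FIND customer WHERE customer.custnum = 42 EXCLUSIVE-LOCK NO-ERROR.
    IF AVAILABLE customer THEN
        customer.balance = customer.balance - 10.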

That can all be done without a major re-architecting of the system.
 