Remote AppServer process times out after ~40 minutes

HPM75

New Member
Client OS: Win 2K3
Progress version: 10.1B03
Appserver OS: Win 2K3

After launching a remote process asynchronously, the client session hangs because the network connection from the remote AppServer session back to the client session is broken.
This shows up twice in the AppServer server log file:

Connection failure for host 1.2.3.4 port 4398 transport TCP. (9407)

How can I keep this connection alive while I wait for the data to be sent back to the client process? The appserver operating mode is state-reset.
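For reference, the call pattern I'm using is roughly this (the procedure, host, and AppService names here are placeholders):

```
DEFINE VARIABLE hServer AS HANDLE NO-UNDO.
DEFINE VARIABLE hAsync  AS HANDLE NO-UNDO.

/* Connect to the remote AppServer (host/port/service are placeholders) */
CREATE SERVER hServer.
hServer:CONNECT("-H apphost -S 5162 -AppService datapull").

/* Fire the long-running pull asynchronously; control returns at once */
RUN pull-data.p ON SERVER hServer ASYNCHRONOUS
    SET hAsync
    EVENT-PROCEDURE "pull-done" IN THIS-PROCEDURE
    (INPUT "customer-db").

/* Block until the response arrives - after ~40 minutes the TCP
   connection is dropped and error 9407 appears in the server log */
WAIT-FOR PROCEDURE-COMPLETE OF hAsync.

hServer:DISCONNECT().
DELETE OBJECT hServer.

PROCEDURE pull-done:
    /* Runs when the AppServer reply comes back */
END PROCEDURE.
```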

Thanks,
Henrik
 

Casper

ProgressTalk.com Moderator
Staff member
Wow, I'm stunned you are using a state-reset AppServer. Furthermore, calls that long on an AppServer will give you potential scalability problems in the future...

Back to your question: most of the time this has to do with firewall settings. If there is a firewall running on the client or the server, increase its timeout for idle connections.

IMO there is never a good reason to use state-reset AppServers, nor is it good practice to use an AppServer for long-running programs.

Regards,

Casper.
 

GregTomkins

Active Member
I was at a PSC conference a few years ago where a guy did a big presentation on how they had converted their CUI app to AppServer and had made a conscious decision to go State-Aware. You could hear the gasps of dismay from virtually everyone in the room. The thing is, the guy was an excellent presenter and had obviously really thought about this.

The moral of this story is: state-aware (or state-reset) might be appropriate for some people. We are equally big on stateless, though, here at Progress-R-Us central.
 

HPM75

New Member
These AppServers are mainly used for running reports in a GUI application. Why they were set up as state-reset is unknown to me, though I could find out. Then again, I might not have much say in changing that setup in our production environment.

I am running data-pulls from a centralized server where local code initiates connections to multiple remote appservers/databases.
In some cases, depending on db and table size, the process completes in seconds or minutes. When the table is larger and more data needs to be pulled it obviously takes longer.

Firewall settings aside (there should be nothing standing in my way), is there anything I can do programmatically, or by other means, to keep the connection up? Or should I just fire off the program, not bother with returning the data, and have a separate process later pick it up from disk on the AppServer machine?

Thanks
 

GregTomkins

Active Member
We go to great lengths to minimize the length of AppServer calls. We have whole rooms full of people that do nothing but think about this all day.

For long-running reports, we have a whole separate set of processes. Basically, when someone wants to run a report that we think might take a while, they use the AppServer to write a record into a table that describes the request. Then, a separate daemon process that runs as a batch job outside of AppServers reads that record and runs the report.

Besides keeping AppServer calls fast, this also has the benefit of being able to limit the number of "big reports" running concurrently. OTOH, it requires some mechanism to notify the user when the report is ready. Which we also built and works well, but it took quite a while to get it right.
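A minimal sketch of that queue-and-daemon pattern, with made-up table and field names:

```
/* AppServer side: the client call just records the request and returns */
PROCEDURE queue-report:
    DEFINE INPUT PARAMETER pcReport AS CHARACTER NO-UNDO.
    DEFINE INPUT PARAMETER pcParams AS CHARACTER NO-UNDO.

    CREATE ReportRequest.                 /* hypothetical table */
    ASSIGN ReportRequest.ReportName = pcReport
           ReportRequest.RunParams  = pcParams
           ReportRequest.ReqStatus  = "queued".
END PROCEDURE.

/* Daemon side: a plain batch session outside any AppServer */
REPEAT:
    FOR EACH ReportRequest EXCLUSIVE-LOCK
        WHERE ReportRequest.ReqStatus = "queued":
        ReportRequest.ReqStatus = "running".
        RUN VALUE(ReportRequest.ReportName)
            (INPUT ReportRequest.RunParams).
        ReportRequest.ReqStatus = "done".
    END.
    PAUSE 30 NO-MESSAGE.                  /* sleep, then poll again */
END.
```

Capping the number of concurrent "big reports" then comes down to how many daemon sessions you start.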
 

HPM75

New Member
Greg,

Interesting. I realize that major updates and enhancements, possibly rewrites, of the "framework" programs we use are due as more and more data-pull processes are added. We are at most 1-2 people working on this, and of course there is never time to analyse the current structure as it's not billable ;-)

Having said that, are you running these batch processes on the database server itself, or how do you set this up?
This might be an option we'll need to pursue really soon as our scheduled data-pulls are growing quite rapidly.

Thanks.
 

GregTomkins

Active Member
We run them on the DB box. We have some places where we run AppServers on a separate box, but doing the DB access over the network, even when it's 12 inches of T1 cable away, is so much slower (as we have proven over and over) that it's not viable for anything heavy-duty.

We also have a whole mass of infrastructure devoted to converting the P4GL ASCII output into PDF, so we can print it on any old Windows PC with minimal hassle. That pipeline is now being moved to secondary servers - another multi-month development effort - to offload the DB box, since it doesn't require any DB access.
 
Greg is giving great advice here. AppServers aren't really designed for long procedure calls. If you want to use them that way, you have to deal with TCP/IP timeouts (as you have found), and you have to worry about how many agents you need to serve your entire user base.

Most places I've worked use a batch process approach, also as described by Greg. A single call puts a report request record into a table in a "reporting" database. A background batch process scans this table, connects to a target database, sets up a propath, runs the report, stores the output in a LOB field in the reporting database, and sends an email to the report originator via smtpmail.p (optionally with the report output attached). Databases are disconnected, propath restored, audit record written away and we're done.

For heavy-read reports it's best to have them running on the same box as the database, using a shared-memory connection (no -H or -S).
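For example, the difference is just the connection parameters (the database paths and host names here are placeholders):

```
/* Self-service (shared memory): must run on the DB box, no -H/-S */
CONNECT VALUE("/db/prod/reporting -ld reporting").

/* Client/server (TCP): works remotely, but far slower for heavy reads */
CONNECT VALUE("-db reporting -H dbhost -S 5000").
```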
 

HPM75

New Member
Thanks for your answers. Makes me think I might have a new project on my hands :)
This is not pulling data in the form of reports. The AppServers were set up for that purpose, but we are using them to pull data written to various ASCII flat-file formats for loading into third-party applications and/or data warehouses.
We are talking potentially 200+ databases, so setting up a batch-process framework initiated from the DB servers will take a bit of time, even though in many cases the databases are of course hosted on the same server.

The asynchronous remote AppServer call has served us well for this purpose, as we are not sharing this data with any parties other than our customers and preferably want to bring it back to the ETL server where the process originated. I see now that we might need a different approach that is not scheduled from this server, while still having a process in place that brings everything back to it.

Thanks.
 

tamhas

ProgressTalk.com Sponsor
This also sounds like an environment in which the best solution might be to step back from the AppServer to something like Sonic. Having a series of services to process those transfers would be a natural fit, and the environment would be inherently loosely coupled.
 

GregTomkins

Active Member
Bleh, just another thing to pay for and install and administer and have to check out when things break. It's bad enough navigating the maze of database and client and AppServer interdependencies.

We do use Sonic in a few places, but only when we really, really have to. Which is not to imply we don't like it; we also like 60" plasma TV's, we just don't use them to watch DVD's on airplanes.
 

tamhas

ProgressTalk.com Sponsor
To each their own, but this setup screams loosely coupled to me and a tool like Sonic is the natural solution for that class of problem. Yeah, it costs some money, but so does development time. Roll your own might seem cheaper, but given all the development and care and feeding when it doesn't behave, is it really? Not to mention, of course, that once you get on the bus with the first function, then justifying the second one becomes that much easier and there is so much one can do.
 