Where can I find recent documentation on OpenEdge and linux clustering?

jurriaan

New Member
Since our Windows (2003 Server) cluster isn't really a satisfying experience, I'm thinking about
moving to Linux. Are there any recent documents about what clustering software and settings
to use and what pitfalls to avoid? Searching this forum or the internet doesn't really help, and
Progress seems to keep the good information hidden very deep - I haven't found anything except
'don't use GFS/GFS2'.
 

jurriaan

New Member
This text:

Linux:

There are no committed plans for Fathom/OpenEdge Clusters to support any Linux Cluster Manager in OpenEdge 10.x. As there are many different Linux Cluster Managers available, we are in the process of information collection from customers as to the specific version that they are interested in. Please contact Technical Support with the Linux Cluster Manager the customer is considering using.


is quoted in a post on this forum dated December 2009, under the heading

ID: P122368
Title: "What cluster managers does OpenEdge support?"
Created: 02/20/2007 Last Modified: 05/20/2009

I find it really hard to believe that nothing has changed in how to set up OpenEdge in a Linux cluster since 2007 or 2009. I'd like to hear someone tell me with a straight face 'we have been in the process of information collection from our customers for at least three years now, but so far we have nothing to say'.

Seriously now - is that it? That'd be very, very disappointing.
 

TomBascom

Curmudgeon
I can confirm, with a straight face, that nothing has changed -- at least from the outside looking in.

I don't know, but I consider it likely, that very few customers (if any) have provided any input.

"procluster" is just a script that supports fail-over clustering. There is very little to it and it is hardly worth bothering to identify as a feature. IMHO it is more trouble to than it is worth.
 

jurriaan

New Member
[heavy sarcasm]
Aha. Perhaps I hadn't realized that the process of information collection really meant waiting for customers to bring the information to them.
Sounds more like the process of information awaiting...
[/heavy sarcasm]

So what do I say to management?

Well, Linux clustering is supported, here's what we have to get and do.

Or something like

It might work. Give me 6 months to test everything. No, there's no documentation on what to do, sorry.

I'm no salesperson, but I think a slight difference in customer reactions can be predicted.
 

RealHeavyDude

Well-Known Member
You need to roll your own scripts to shut down the database(s) on the primary node (when it is still available, of course) and start them on the other node. AFAIK the cluster managers provide hooks to plug in such scripts, as Progress is not the only software product that is not cluster-aware. That is what I would do anyway, so that I am able to control what happens when, without having to rely on some obscure automatism. But that's just me - call me a control freak if you want to ...
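
A minimal sketch of what such a script might look like (assuming a shared-disk setup; the database path, mount point, and parameter file are all hypothetical, and the start/stop/status convention depends on your cluster manager):

    #!/bin/sh
    # Hypothetical failover wrapper the cluster manager calls on each node.
    # Assumes DLC points at the OpenEdge install and /shared is the shared disk.
    DB=/shared/db/production
    PF=/shared/db/production.pf

    case "$1" in
      start)
        # Refuse to start if the shared disk is not mounted on this node yet.
        mountpoint -q /shared || exit 1
        $DLC/bin/proserve $DB -pf $PF
        ;;
      stop)
        # -by = unconditional batch shutdown, no prompting.
        $DLC/bin/proshut $DB -by
        ;;
      status)
        # The broker holds a .lk file while the database is open;
        # crude, but good enough for a monitor probe.
        [ -f $DB.lk ]
        ;;
      *)
        echo "usage: $0 {start|stop|status}" >&2
        exit 2
        ;;
    esac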

Heavy Regards, RealHeavyDude.
 

TomBascom

Curmudgeon
I don't exactly disagree -- but the KB that you quoted does say "Please contact Technical Support with..."

Do you have a specific Linux clustering product that you would like to see supported? Have you let TS know?

It also says:
In General:

The Progress RDBMS is not a cluster-aware application but can be configured to run in a shared disk configuration. Most clustering failover solutions rely on the shared disk configuration if failover on database-centric applications such as Progress is required. If that shared disk environment is managed and controlled by a cluster manager, there is no native integration between Progress itself and the cluster manager. The administration and configuration tasks needed to ensure failover of a Progress database within a clustered environment are the responsibility of the administrator for the cluster management software.

Which you might want to share with management too.

As for documentation: http://documentation.progress.com/o.../wwhelp.htm#href=dmadm/32dmadmch24.33.07.html
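
To make that concrete: wiring it up under a typical Linux cluster manager would look roughly like this (a hypothetical sketch using Pacemaker's pcs shell; the resource names, device, and mount point are made up, and lsb:openedge-db assumes a wrapper script along the lines RealHeavyDude describes, installed in /etc/init.d):

    # The shared disk the database lives on - mounted only on the active node.
    pcs resource create db_fs ocf:heartbeat:Filesystem \
        device=/dev/mapper/shared_db directory=/shared fstype=ext4
    # The database broker, started/stopped via the init-style wrapper script.
    pcs resource create db_broker lsb:openedge-db
    # Keep both on the same node, and mount the disk before starting the broker.
    pcs constraint colocation add db_broker with db_fs
    pcs constraint order db_fs then db_broker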
 