Upgrading 9.1E to 10.2B with the least downtime

Blakey43

New Member
Hi,

We have Progress 9.1E on a Windows Server 2003 R2 x64 machine on VMware 5.1. We plan to upgrade to OpenEdge 10.2B, which is currently installed on a Windows Server 2008 R2 machine, also on VMware 5.1. The main constraint on this project is that downtime must be as short as possible, because we operate 24/7.

A backup of the 9.1E DB and a restore to the other server will take hours, and the upgrade process will probably take hours too. Would it be quicker to procopy the DB to a shared drive on the new server rather than backup and restore?
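
For context, the two approaches I'm comparing would look roughly like this (paths are just examples):

probkup online proddb E:\backups\proddb.bck
prorest newdb E:\backups\proddb.bck

versus copying directly to storage the new server can see (procopy needs the source DB offline):

procopy proddb \\newserver\share\newdb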

We have OpenEdge Replication currently running on the 9.1E machine, replicating to another very similar machine. Is that of any use in this upgrade scenario? Could a 9.1E database be replicated to a 10.2B DB? (Only in my dreams, I guess!)

What is the best approach in this situation? It's essential to get the users back on as soon as possible.

Thanks,
Steve
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
If it is possible to share storage between the two servers (same virtualization host?) then you may have additional options. The key here is that you need an upgraded database due to the change of major version. You can accomplish this either with a proutil -C conv910 on the existing DB or by creating a new DB via prostrct create under 10.2B together with a dump and load. There is less that could go wrong in the latter approach (or, more accurately, there are things that could go wrong in the former that can't in the latter).
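
To make those two options concrete, the core commands would be roughly as follows (DB names are hypothetical; this is a sketch, not a complete procedure):

Option 1, in-place conversion of the existing DB under a 10.2B install:
proutil proddb -C conv910

Option 2, a fresh 10.2B DB followed by a dump and load:
prostrct create newdb newdb.st
procopy %DLC%\empty4 newdb

where newdb.st describes your new storage areas and empty4 supplies the 4K-block metaschema.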

One option for server migration is to use after imaging to create the DB on the new machine. Start with a recent backup and then roll forward AI files against that DB until you are ready to make the switch. So at the time you bring old prod down you just need to move over your last AI extent and roll it forward to have a current v9 DB. Then upgrade it to v10, perform any required application updates and business/technical validation steps, and then allow business users to start to connect to the new prod. Just make sure you don't make any destructive changes to the old prod as you need it for fallback. I think common sense should dictate where, when, and how you take backups during this process. It should also dictate at least one dry run to vet your approach and your procedures.
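
A rough sketch of the roll-forward mechanics (extent names hypothetical), all under 9.1E on the new machine:

prorest newdb E:\backups\proddb.bck
rfutil newdb -C roll forward -a ai_0001.a1
(repeat for each subsequent AI extent, in order)

and then at cutover, roll the final extent forward and convert:

rfutil newdb -C roll forward -a ai_final.a1
proutil newdb -C truncate bi
proutil newdb -C conv910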

This does mean however that you will be going live on 10.2B with a DB that has a v9 (Type I) structure, which you will need to address in some future maintenance window(s). Moving to Type II storage doesn't have to be done all at once. Hopefully you already have your data in storage areas at present, with no application objects in the schema area, separate areas for tables, indexes, and LOBs, and appropriate RPB settings for each area. I assume you are moving to 64-bit OpenEdge.
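
For illustration, area definitions in a structure (.st) file for a Type II layout might look like this (names, numbers, and paths are made up; the value after the comma is records per block, and the value after the semicolon is blocks per cluster, which is what makes an area Type II):

d "Cust_Data":10,64;8 E:\db\prod\prod_10.d1
d "Cust_Index":11,8;8 E:\db\prod\prod_11.d1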

Given the timing of this activity, I question why you aren't moving to OE 11.x. 10.2B is not scheduled to receive any more service packs, so whatever bugs you have now you will have until you upgrade again. And you will face this pain again in a few years to move up to v11.x or v12.x. If you upgrade to the current release now you can future-proof your environment and postpone that next OE upgrade. If you can't get to 11.x, at least make sure you deploy SP08 of 10.2B.

Re OE Replication, it requires exactly the same Progress version on source and target machines. As with the AI approach, you could use this mechanism to keep a target DB current provided that the same release of 9.1E is installed on the target machine with the appropriate product licenses. If you have Replication Plus you can have two target DBs, e.g. one on the DR box and one on the new prod box. With the Replication product you get only one target which should remain as DR (assuming you use it for that purpose).

How large is your DB and what kind of storage subsystem are you dealing with?
 

Blakey43

New Member
Hi Rob,

Thank you for your very helpful reply.

If it is possible to share storage between the two servers (same virtualization host?) then you may have additional options. The key here is that you need an upgraded database due to the change of major version. You can accomplish this either with a proutil -C conv910 on the existing DB or by creating a new DB via prostrct create under 10.2B together with a dump and load. There is less that could go wrong in the latter approach (or, more accurately, there are things that could go wrong in the former that can't in the latter).

Which of the two methods would take the least time?

One option for server migration is to use after imaging to create the DB on the new machine. Start with a recent backup and then roll forward AI files against that DB until you are ready to make the switch. So at the time you bring old prod down you just need to move over your last AI extent and roll it forward to have a current v9 DB. Then upgrade it to v10, perform any required application updates and business/technical validation steps, and then allow business users to start to connect to the new prod. Just make sure you don't make any destructive changes to the old prod as you need it for fallback. I think common sense should dictate where, when, and how you take backups during this process. It should also dictate at least one dry run to vet your approach and your procedures.

That sounds good, because we wouldn't have to wait while we do a backup and restore.

This does mean however that you will be going live on 10.2B with a DB that has a v9 (Type I) structure, which you will need to address in some future maintenance window(s). Moving to Type II storage doesn't have to be done all at once. Hopefully you already have your data in storage areas at present, with no application objects in the schema area, separate areas for tables, indexes, and LOBs, and appropriate RPB settings for each area. I assume you are moving to 64-bit OpenEdge.

The bad news is that the data is not in storage areas; it's mostly all in the schema area. Moving it would require major downtime (on the order of days), and we've never done it, despite advice that it would speed things up. We're also going to the 32-bit version.

Given the timing of this activity, I question why you aren't moving to OE 11.x. 10.2B is not scheduled to receive any more service packs, so whatever bugs you have now you will have until you upgrade again. And you will face this pain again in a few years to move up to v11.x or v12.x. If you upgrade to the current release now you can future-proof your environment and postpone that next OE upgrade. If you can't get to 11.x, at least make sure you deploy SP08 of 10.2B.

Our application provider is rolling out their latest version on 10.2B; we don't have a choice about that.

Re OE Replication, it requires exactly the same Progress version on source and target machines. As with the AI approach, you could use this mechanism to keep a target DB current provided that the same release of 9.1E is installed on the target machine with the appropriate product licenses. If you have Replication Plus you can have two target DBs, e.g. one on the DR box and one on the new prod box. With the Replication product you get only one target which should remain as DR (assuming you use it for that purpose).

How can I tell if I have Replication Plus? That would be useful, and easier than using AI extents manually.

How large is your DB and what kind of storage subsystem are you dealing with?

The system is divided among several DBs. The main one is approximately 70 GB. We're working on an archiving project at the moment simply to reduce the size of the database so we can upgrade it more quickly.
We're storing the data on an iSCSI SAN with separate LUNs for the old and new servers.

Regards,
Steve
 

cj_brandt

Active Member
Use showcfg to see what products you have licensed. Replication Plus is an additional charge.
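
For example, against the default config file (adjust the path to your install):

showcfg %DLC%\progress.cfg

Then check the Product Name entries in the output for a Replication Plus license.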

After you upgrade to 10.2B, try to get Service Pack 6 or newer; SP08 is the latest.

After the DB is upgraded to 10.2B, there are options to move tables from the schema area to a new Type II storage area. For tables under 200 MB or so, there is the online tablemove. For larger tables, you can do a raw transfer into a new table and then rename the original and new tables, or do a binary dump to get the table data, use the SQL engine to DROP TABLE, and then load the table back into a new storage area.
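
Rough examples of those commands (table and area names hypothetical; check the syntax against your release's docs):

proutil proddb -C tablemove customer "Cust_Data" "Cust_Index"

and for the dump-and-reload route on a bigger table:

proutil proddb -C dump customer E:\dumps
(drop and recreate the table in the new area, as described above)
proutil proddb -C load E:\dumps\customer.bd build indexes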

Anyway, there are lots of performance improvements from 9.x to 10.2B that can make a real difference to the users.
 

Blakey43

New Member
Showcfg revealed:
Product Name: Fathom Repl Plus
Looks like we could use that to build our new database, ready for upgrading by whichever method is quicker. We should try both methods first. We've tried dump and load before, though, and even just one of the bigger tables in the main database took over six hours, so the entire system would take days.
Thanks,
Steve
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Showcfg revealed:
Product Name: Fathom Repl Plus
Looks like we could use that to build our new database, ready for upgrading by whichever method is quicker. We should try both methods first. We've tried dump and load before, though, and even just one of the bigger tables in the main database took over six hours, so the entire system would take days.
Thanks,
Steve

The key here is: how did you dump and load? And was the load done under 10.2B SP08? In some cases there can be an order of magnitude difference between a slow method and a fast one.
 

Cringer

ProgressTalk.com Moderator
Staff member
Is it the dump or the load that is taking the most time? There are a lot of features in 10.2B06 and later to improve the load speeds.

I've no idea if this is a valid strategy, but could you build an upgraded db at your own pace in a new location, and then just apply an incremental backup to it in the downtime?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Which of the two methods would take the least time?
You need to do testing.


That sounds good, because we wouldn't have to wait while we do a backup and restore.
Please be prudent and make sure you always have a fallback position.


The bad news is that the data is not in storage areas; it's mostly all in the schema area. Moving it would require major downtime (on the order of days), and we've never done it, despite advice that it would speed things up. We're also going to the 32-bit version.
It's unfortunate that you're staying with the 32-bit database as that seriously limits the potential performance of your application, particularly with federated databases.


Our application provider is rolling out their latest version on 10.2B; we don't have a choice about that.
Make sure you get (and do your testing with) service pack 8. It has been announced to be the last-ever service pack for 10.2B. I hope your vendor is hard at work on v11.x certification as we speak.


The system is divided among several DBs. The main one is approximately 70 GB. We're working on an archiving project at the moment simply to reduce the size of the database so we can upgrade it more quickly.
We're storing the data on an iSCSI SAN with separate LUNs for the old and new servers.
If you have shared storage then the use of AI roll forward or Replication Plus is less compelling. Typically I use it to quickly migrate a DB from one box to another with separate storage where it would otherwise require backup/transfer/restore or dump/transfer/load/idxbuild. In your case you just need to get the DB(s) from one LUN to another.
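
As an aside, if the extent paths change when the DB moves to the other LUN, something like this updates the control area (a sketch; the .st file is assumed to list the new locations):

prostrct repair newdb new-paths.st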
 

TomBascom

Curmudgeon
I am thrilled to see that you're moving off an ancient, obsolete, and unsupported release!

Congratulations!

But I've got to wonder why the target is a release that is at the end of its life. Sure, 10.2B is a squillion times better than 9.x, but at this point in time you really ought to be looking at 11.3. If you can compile code you do have a choice, regardless of what your application provider is telling you. (And to move from 9 to 10 you must be able to compile code...) About the only thing that could be an issue is if you are deploying a .NET GUI product that needs specific versions of 3rd-party components -- but even that shouldn't be a show-stopper if the application is properly architected with app servers and such. OTOH, you're starting from 9.1E, so it seems rather unlikely that such a problem exists.

You keep asking "what is fastest". There is no cut-and-dried answer to that. If you want the fastest conversion process you have to do some testing with your data and your systems. The details matter quite a lot, and everyone has different systems and different priorities. We can offer general advice but, ultimately, you have to test things.

You don't seem to be doing much testing of your process. That worries me.

The actual conversion is very fast. The core steps are:

proutil dbName -C truncate bi
proutil dbName -C conv910

That only takes a couple of minutes. Usually seconds, actually.

With a little planning you can have the r-code all recompiled and ready to go well ahead of time (for testing, convert a copy of the database and recompile; keep the r-code that you test with and use it post-conversion).

Or you can cowboy it and recompile post-conversion. But that's just crazy. Especially if you need to keep the window as small as possible.

If you have a SAN that can do snapshots you would make a snapshot between the truncate bi and the conv910 for the sake of safety. Otherwise you either do a dedicated backup or double check that your last backup and the intervening after-image files are up to date and safe (ideally the after-image files have been rolled forward so you have a "verified backup" ready to go). If you have Fathom Replication just make sure that the replication target is up to date and safe. (You will need to "re-seed" replication post-conversion and that will require a new backup & restore of the db -- do not over-write the existing replication target or you won't be able to use it to fall back to your v9 db should that become necessary...)
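
Pulling that together, the sequence around the conversion looks roughly like this (names hypothetical):

proutil proddb -C truncate bi
probkup proddb E:\backups\preconv.bck    (or take the SAN snapshot at this point)
proutil proddb -C conv910
probkup proddb E:\backups\postconv.bck   (a fresh v10 backup, e.g. for re-seeding replication)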

Once you are converted (32-bit is stupid, change that now) you can start fixing the storage areas, one at a time or all in one fell swoop. The times you have mentioned for dumping and loading indicate that you're doing a serial, single-threaded D&L, likely using the dictionary tool. That is about the only way to make it take so long. For reference, Paul Koufalis and I did a little competition at the Ontario PUG last month where we each dumped and loaded a 20 GB database using a couple of different methods in around 40 minutes. 70 GB should be perfectly feasible in 4 to 6 hours even on crappy hardware.
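
For contrast with a serial dictionary D&L, a binary dump and load would look roughly like this (names hypothetical; the -thread option for binary dump is a 10.x-era feature, so check your service pack's docs):

proutil proddb -C dump orderline E:\dumps -thread 1
proutil newdb -C load E:\dumps\orderline.bd build indexes

Run several of those dumps in parallel, one per large table, and the elapsed time drops dramatically.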

From the sounds of things you really ought to engage professional help.
 