Migrate from Physical (NFS) to Virtual (Hyper-Converged) storage

PresidioKid

New Member
I am running a Progress OpenEdge ERP DB on a standalone server connected to a NetApp array using NFS mounts, and I want to refresh my storage and compute to a next-generation all-flash SimpliVity system. Does anyone have links to P2V documentation, or guidance on how to migrate away from NFS mount points? Thanks in advance -J
 

TomBascom

Curmudgeon
I take it that you stopped reading?

Any external device is *very* different from a local server with internal SSD. You cannot fix "external" by stuffing a cabinet full of flash. (It also doesn't fix "shared".)
 

TomBascom

Curmudgeon
In any event... migration is simple enough. You just back up and restore.

Or copy the files and run "prostrct repair", although that is a bit less desirable.
 

TomBascom

Curmudgeon
The *best* way is not to.

The *worst* way is to go with a cookie-cutter approach where all VMs use the default parameters. You can make it even worse if you over-provision the host. It is an especially bad idea to allocate too many cores to the VM.

If you want to do it correctly you need to "right size" the VM which likely requires testing and non-standard configuration choices.
 

TomBascom

Curmudgeon
If performance is your goal then your *best* solution is to go with no VM and all internal SSD storage.

If your goal is something else then VMs and external storage might make sense.

But do not fool yourself -- "shared" and "external" have non-zero costs.
 

TomBascom

Curmudgeon
FWIW -- many customers whose primary goal is not "the very best performance possible" are reasonably successful with virtualization and external storage. But there are good reasons why "shared", "external", "filers", "virtual", "hyper-converged" (and RAID5 or RAID6 and their many variations) are red flags.

If your system is modestly sized and your users are not very demanding then you can probably get away with quite a lot.

A few things that frequently go wrong:

1) believing that databases can be treated like documents or spreadsheets
- especially with regards to snapshots
- or backups
- or virus scanning

2) believing that a running database can be safely "v-motioned" (it will probably work when you test it, but it will fail when you need it, and doing it under load can very possibly corrupt your data)

3) believing that default options are best - the defaults for these technologies tend to be oriented to independent sequential access of many small files, while databases do concurrent random access to large files. (The specifics vary from platform to platform and release to release.)

4) "more is better", especially related to number of cores. more is NOT better. fewer and *faster* cores are what you want for database applications.

5) magic creation of capacity out of thin air -- if your VM does not have any spare resources it is overcommitted and running slow; it is not magically doing 150% of its non-virtual workload. Server consolidation works fine when the individual servers are underutilizing their physical resources and when the consolidated server has enough capacity. But a DB server needs to have capacity for its *peak* workload available -- you cannot just average the workloads over 24 hours (or whatever). One quick way to spot overcommitment from inside a guest is shown below.
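For example, on a Linux guest, vmstat's "st" (steal) column reports CPU time the hypervisor took away from this VM. A minimal sketch, assuming a Linux guest (the 5 is just a sampling interval in seconds):

vmstat 5

If the "st" column is consistently non-zero on a database server, the host is very likely overcommitted.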
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
I can only concur with everything Tom has said. If you search the Progress Community archives for "Exchange sessions" or search through the past presentations from the PUG Challenge Americas conferences, you should find a few on OpenEdge and virtualization. Some basics: don't "thin-provision" RAM or vCPUs; use the vmxnet3 NIC adapter rather than the e1000; and don't ever snapshot or vMotion a VM with a running database; you could corrupt it by doing so.

If you really need to do either of those, the safe way to do it is to first (a) quiesce the database with the proquiet enable command and (b) confirm that the quiet point has been enabled by checking either the database log (dbname.lg) for an entry saying the quiet point has been enabled, or look for such a message on screen 5 in promon. The reason this check is important is that it might take a few seconds for the quiet point to take effect, but the proquiet command always returns immediately. So you can't assume that it has started just because the command completed. Once the snapshot or vMotion is done, you can resume forward processing with proquiet disable. Note that all transaction processing stops between the enable and the disable, though reads are still possible.
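A sketch of that sequence, assuming a Unix-ish shell and a database named "dbname" (the exact log message text varies by OpenEdge release, so treat the grep pattern as illustrative):

proenv> proquiet dbname enable
grep -i "quiet point" dbname.lg    (confirm the quiet point actually took effect)
(take the snapshot or do the vMotion here)
proenv> proquiet dbname disable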

I also suggest you find a way to benchmark your buffered and unbuffered I/O rates on the existing system and on the new one, outside of OpenEdge. I've heard lots of stories (and encountered a few first-hand) where a customer moves to a new system and finds performance to be sub-optimal, only to have the vendor say something like "well Progress is slow; it can't keep up" which is poppycock. Then people waste lots of time trying to prove to the vendor that the database runs just fine, when the burden of proof shouldn't be there in the first place. Benchmarks never tell the whole story but they can be a useful early indicator when something is drastically wrong with the performance of some component or connection.
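If the systems are Linux, a crude first pass needs nothing more than dd; fio is far more thorough if you can install it. The paths and sizes here are illustrative, and the 8k block size just mirrors a common database block size:

dd if=/dev/zero of=/db/ddtest bs=8k count=131072
dd if=/dev/zero of=/db/ddtest bs=8k count=131072 oflag=dsync

The first command measures buffered writes, the second (with GNU dd's oflag=dsync) synchronous writes. Run both on the old and new systems and compare; a big regression in the dsync numbers is exactly the kind of early warning I mean.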

One other thing: if you're using the after-image file management daemon, don't point its AI archive directory at a network path like an NFS or samba share. I've seen those go stale and cause the daemon to block, and the only way to recover was to restart the database. Use a local directory for AI archive and then copy the archived AI files over to the DR system with some kind of scheduled job (e.g. an rsync script run from cron).
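As a sketch -- the local directory, DR host name, and schedule are all illustrative -- a crontab entry like this ships new AI files over ssh every five minutes:

*/5 * * * * rsync -a /db/aiarchive/ drhost:/db/aiarchive/

The right interval depends on how much data loss you can tolerate; the point is simply that the daemon itself only ever writes to a local disk.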
 

PresidioKid

New Member
Is there a procedure I can read that will teach me how to move/copy files from an NFS mount to a directory on the local VM and redirect the application/database to the new landing zone?
 

PresidioKid

New Member
Also, I am attempting to locate a current compatibility matrix for this software -- for example, is it supported on VMware ESXi 6.7? I haven't been able to find one. Any assistance would be greatly appreciated.
 

TomBascom

Curmudgeon
Is there a procedure I can read that will teach me how to move/copy files from an NFS mount to a directory on the local VM

That's a fairly open-ended question. If it were me I would use "scp", possibly in conjunction with "tar". But I would also have Linux running on both sides, and I have a feeling that you are running Windows.
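If both sides were Linux, either of these would do the job (hostnames and paths are illustrative; the -rp/-p flags preserve timestamps, which matters, as noted below):

scp -rp /db/dbdir user@newserver:/db/dbdir

or

tar cf - /db/dbdir | ssh user@newserver "cd / && tar xf -"

On Windows you would substitute whatever copy tool you trust to preserve timestamps.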

Of course this is also all starting with the db on an NFS share, so it might be even simpler to just (temporarily) mount that share on the target server.

... and redirect the application/database to the new landing zone?

DO NOT JUST MAKE OS COPIES OF A LIVE DATABASE. They will be corrupted.

If you are going to copy files using OS commands, first shut down the db. You can then safely copy the files, but you must get all of the pieces (and they might be spread around) and you must preserve the timestamps.

The preferred method is to use probkup and prorest. These tools know all about what it takes to make a proper copy of a Progress database. 3rd party tools usually have no idea what a Progress database is and even less of an idea how to properly back it up.

To make a probkup:
proenv> probkup online dbname dbname.pbk -com -Bp 10
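(-com compresses the backup as it is written; -Bp 10 gives the online backup a small set of private buffers so it does not churn the database's main buffer pool.)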

(Omit "online" if the db has been shutdown.)

This will create a single dbname.pbk file. It might be quite large if your database is large. There are ways to break it into multiple files but I will assume that one file is easier to work with and more in line with your needs right now.

If you can preserve the path names then you should just be able to restore a backup of the database to the same path that you backed it up from. That would be the safest and easiest solution.

proenv> prorest dbname dbname.pbk

(This pre-supposes that Progress has been installed in the local VM.)
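If you want extra confidence before cutover, prorest can also verify a backup without restoring it -- -vp does a partial verify, -vf a full one:

proenv> prorest dbname dbname.pbk -vp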

If you feel that you must change the path then you can still do that by restoring a backup.

If the "structure" of the db is complex -- IOW different parts are on different paths, then you will need a dbname.st (structure file) that describes those paths. You can get a copy of the current structure file by running this command on the current server:

proenv> prostrct list dbname

That will create dbname.st. Copy that to the new server and then edit it on the new server.
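For illustration only -- a real structure file lists every extent, and the actual area names, numbers, and settings come from your own prostrct list output -- a simple dbname.st might look like:

b /olddisk/db/dbname.b1
d "Schema Area":6,64 /olddisk/db/dbname.d1
d "Data":7,64 /olddisk/db/dbname_7.d1

Editing it for the new server is just a matter of changing the paths. One common sequence is then to create a void database from the edited file and restore into it:

proenv> prostrct create dbname dbname.st
proenv> prorest dbname dbname.pbk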

The procedure above moves the db files. You will need to adapt whatever management procedures you currently have (scripts, config files, whatever...) to start, stop, back up, and generally manage the back end of the db. There is no "one size fits all" answer to that; it is usually very highly specific to the application.

Likewise, the client side of the application will need to have its connection configuration redirected if you change anything like the server name or IP address. If you keep the server name the same, just use DNS to point it to the new server, and do not change the connection port numbers, then maybe you get lucky and the application never notices anything.
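For reference, a client-server connection ultimately boils down to parameters like these, whether they live on a command line or in a .pf parameter file (the host and port here are illustrative):

mpro dbname -H dbserver -S 20001

So "redirecting" the application usually means finding wherever the -H (host) and -S (service/port) values, or their equivalents, are configured.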

Obviously you would do all of this with a test system before you tried to do it with production.
 

TomBascom

Curmudgeon
Also -- you should tell us a few basic things like: what version of Progress are you running? What OS are you running it on? Are you running a commercial application? If so which one? (There are thousands but maybe somebody has worked with whatever you have...)
 