purge and resize db

parttime Admin

New Member
I would prefer to
  • "D&L, rebuild, do whatever" on a recent copy of the db on server B (no downtime concern), while doing uninterrupted business with the original db on server A.
  • Then "synchronize" the newly refurbished instance on B "somehow" with all data that has accumulated in the meantime on A
  • Stop both, copy B to A, restart (rough sketch at the end of this post)
My plans for step 2 using Progress means have been dismissed, so I guess I have to talk to my manufacturer and have him create some application-based synchronization workflow for maintenance scenarios like these. Think of a denormalized export/import or similar. They have hundreds of sites, all in the same line of business (24/7), so this would be of universal use.
The PUG method looks pretty cool, but it is nowhere near my skill level.
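Here is the rough sketch for step 3 (server names, paths and the .pf file are made up, and it assumes all extents of the db live in one directory on A):

:: stop the broker on A (unconditional batch shutdown)
call MyDLCDir\proshut E:\db\myDb -by
:: copy the refurbished db from server B (exposed as a share) over A's copy
robocopy \\serverB\db E:\db myDb* /Z
:: restart A with its usual startup parameters
call MyDLCDir\proserve E:\db\myDb -pf E:\db\myDb.pf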
 

TomBascom

Curmudgeon
no,
no ABL compiler,
That's too bad. Is the vendor at least still providing support and bug fixes and willing to move you to a modern OE release?

To use the horizontal table partitioning approach someone will need to write some code and either compile it onsite or provide r-code. If you have an onsite compiler license it would be easier because you could use fairly generic procedures with static table names. But that's all just idle speculation...

no pw for writing, read only PW for ODBC

What is "pw"? Password? I'm not sure that I follow what you are saying here.
 

TomBascom

Curmudgeon
I would prefer to
  • "D&L, rebuild, do whatever" on a recent copy of the db on server B (no downtime concern), while doing uninterrupted business with the original db on server A.
  • Then "synchronize" the newly refurbished instance on B "somehow" with all data that has accumulated in the meantime on A
  • Stop both, copy B to A, restart
My plans for step 2 using Progress means have been dismissed, so I guess I have to talk to my manufacturer and have him create some application-based synchronization workflow for maintenance scenarios like these. Think of a denormalized export/import or similar. They have hundreds of sites, all in the same line of business (24/7), so this would be of universal use.
The PUG method looks pretty cool, but it is nowhere near my skill level.

It sounds like you have a fairly complex setup. You might want to engage an experienced consultant rather than try to work it out in an online forum ;)

Incidentally, and apropos of nothing - I'm going to be in Europe for the next couple of weeks.
 

parttime Admin

New Member
Sorry, I mean we do have read-only direct access to the db (circumventing the application).
We use it for ODBC connections, but we have no means of writing to the db except through the actual application and its dedicated processes.
All of them come with a license fee.
So on the one hand it's a safety measure, as no user can accidentally f* up the content, and on the other hand it protects the commercial interest of the manufacturer.
 

parttime Admin

New Member
This discussion has been very insightful so far; I'm armed with arguments for the upcoming discussion with the manufacturer.
As we pay him, we will insist that he provide a solution. I'll probably also invest in a few hours of Progress bootcamp follow-up.
I had 15 hours last year and made substantial "progress".

Thank you again and happy new year btw
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Here is a little more material for your upcoming discussion. OpenEdge 11.7 will be retired in a little over a year from now, in April 2025.
https://docs.progress.com/bundle/openedge-life-cycle/page/OpenEdge-Life-Cycle.html

Progress has moved OpenEdge to a long-term support (LTS) model, where some releases are supported for a minimum of eight years (five in Active phase plus three in Sunset phase) before retirement, whereas others are active for only about six months.
https://docs.progress.com/bundle/OpenEdge-Product-Lifecycle/resource/OpenEdge-Product-Lifecycle.pdf

Releases 12.0, 12.1, and 12.3 to 12.7 are non-LTS releases. 12.7 is currently the active release, though 12.8 has reached the release-candidate stage and is due imminently. 12.2 and 12.8 are LTS releases. It is expected that 12.8 will be the last release before the 13.x releases.

I say all this because anyone who is still using OpenEdge 11.7 or earlier should be actively planning a project to upgrade to release 12.8, if they intend to avoid running a retired release that will no longer be maintained and will have limited support.
 

Cringer

ProgressTalk.com Moderator
Staff member
Hello,
  • Old Version was 10.2B (sorry)
  • Daily full backup takes 90 min (online); WIN19, 130 GB RAM, 12*2 cores at 3 GHz, VMware 7.1 (I'm no system expert though)
  • I keep an "archive instance" of the db and application on a dedicated server, with all data from day one to the end of 2023.
  • However, data retention could become a problem, being in the EU and handling personal information.
    That's not my main concern at the moment, but it is an argument nevertheless (no clear legal policy right now in my country)
  • Concerning my nerves, let me put it this way
    "...This should speed up backup, ...TRAINING RUNS and therefore skill-building and thus save my nerves in the long run".
  • 12 hrs downtime is not possible. Like really not.

    Would deleting and "emptying data blocks" at least free space for future growth? Better than nothing?
    The system is ever-growing anyhow: because our business is growing, more new data comes in than the amount of historic data that could safely be deleted.
Just a thought - you're on Windows and the backup takes 90 minutes. That seems a little long tbh. When you do the backup are you using the same filename each time, and are you backing up over the old backup file? If so, consider moving the old backup to the side, or using a unique file naming convention. There is a Windows "feature" where writing the backup over the top of an existing one is much slower than writing to a new file.
This won't solve the database size issue, but it may help with the backups.
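For example, something like this (a rough sketch reusing your paths, untested) moves the previous backup aside before the new one is written:

:: keep yesterday's backup under a different name instead of overwriting it
if exist F:\myBackupDir\myDb.bck move /Y F:\myBackupDir\myDb.bck F:\myBackupDir\myDb_previous.bck
call MyDLCDir\probkup online myDb F:\myBackupDir\myDb.bck -verbose -Bp 64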
 

parttime Admin

New Member
Just a thought - you're on Windows and the backup takes 90 minutes. That seems a little long tbh. When you do the backup are you using the same filename each time, and are you backing up over the old backup file? If so, consider moving the old backup to the side, or using a unique file naming convention. There is a Windows "feature" where writing the backup over the top of an existing one is much slower than writing to a new file.
This won't solve the database size issue, but it may help with the backups.
Yes, that's exactly what I do, and I will consider it...
Great hint, thank you
 

TomBascom

Curmudgeon
Aside from the Windows performance considerations... a more fundamental reason not to overwrite the old backup is that as soon as you start the new backup, the old one that you are overwriting is unusable. And if something goes wrong before the new backup completes, you are without a valid backup. That could get unpleasant.
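A rough sketch of that idea, reusing the paths from above and assuming probkup returns a non-zero exit code when it fails (worth verifying on your release):

:: write the new backup to a separate file; the existing backup stays untouched
call MyDLCDir\probkup online myDb F:\myBackupDir\myDb_new.bck -verbose -Bp 64
:: keep the old backup if the new one did not complete successfully
if errorlevel 1 exit /b 1
:: only now replace the old backup with the new one
move /Y F:\myBackupDir\myDb_new.bck F:\myBackupDir\myDb.bck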
 

parttime Admin

New Member
Hmm,
the 90 min figure was wrong (wishful thinking); the usual duration is 160 min :(
So I started a backup after removing the last one, but so far it doesn't look like it makes a big difference:
around 20 k blocks/10 sec @ 176 M blocks total -> 135 min

Disk transfer rate is oscillating, but maxes out at about 125 MB/s,
so I'm wondering whether the backup is so slow that the disk doesn't have much to do, OR whether the disk is the bottleneck and that's why the backup takes so long.
The same virtual disk is used by the AI archive manager btw (757 AI files per day at 8-10 GB total).

Actual cmd (called via a script):
MyDLCDir\probkup online myDb F:\myBackupDir\myDb.bck -verbose -Bp 64
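If I go with the unique file naming idea from above, the call would become something like this (just a sketch; the PowerShell call for the date stamp is my own assumption, and %%a is the syntax for use inside a .bat):

:: build a yyyymmdd stamp via PowerShell, then write to a date-stamped backup file
for /f %%a in ('powershell -NoProfile -Command "Get-Date -Format yyyyMMdd"') do set STAMP=%%a
call MyDLCDir\probkup online myDb F:\myBackupDir\myDb_%STAMP%.bck -verbose -Bp 64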
 

Stefan

Well-Known Member
Hmm,
the 90 min figure was wrong (wishful thinking); the usual duration is 160 min :(
So I started a backup after removing the last one, but so far it doesn't look like it makes a big difference:
around 20 k blocks/10 sec @ 176 M blocks total -> 135 min

Disk transfer rate is oscillating, but maxes out at about 125 MB/s,
so I'm wondering whether the backup is so slow that the disk doesn't have much to do, OR whether the disk is the bottleneck and that's why the backup takes so long.
The same virtual disk is used by the AI archive manager btw (757 AI files per day at 8-10 GB total).

Actual cmd (called via a script):
MyDLCDir\probkup online myDb F:\myBackupDir\myDb.bck -verbose -Bp 64
If you are starting the backup via the Windows Task Scheduler, beware that this defaults the task priority to low, killing backup performance even when the system is doing nothing else - see Progress Knowledge Base article P169922.
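A quick way to check what the task is actually configured with (the task name here is made up; look at the Priority element in the exported XML, 7 being the Task Scheduler default, which runs the process below normal priority):

:: dump the task definition and show its Priority element
schtasks /query /tn "DailyBackup" /xml | findstr /i "Priority"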
 

parttime Admin

New Member
Thank you Stefan,

That could at least explain the rather long runtime that Cringer mentioned. So probably nothing is wrong with the disks after all.
A bit weird, though, that the priority is already 3 (I checked the exported XML).
Actually, the daily backup IS a kind of low-priority process and should not slow down routine operation.
But it's good to know that if a backup is needed fast, it should be run from the command line. Going to test this some quiet afternoon...
 