Excessive length of offline Probkup - HELP!

ljb1

New Member
Hi All,
Hopefully someone can help me.
My shop currently takes an offline backup (probkup) of a 26 GB DB nightly. It is taking over two hours, and I am in desperate need of reducing this backup time.

We are running v9.1d on a Windows 2000 box. (Not sure what other spec info would be helpful.)

We are not using the -com parameter, which I will be testing out tomorrow to see if that improves the backup time.
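For anyone following along, a minimal sketch of what that test might look like. The database name and paths are placeholders, not from this post, and probkup requires a Progress installation:

```shell
# Hypothetical names/paths for illustration only.
# Plain offline full backup to disk:
probkup mydb /backups/mydb_full.bk

# The same backup with -com, which compresses data before writing it.
# This trades CPU for I/O, so it can help when the disk is the
# bottleneck and hurt when the CPU is -- time both runs to compare:
probkup mydb /backups/mydb_full.bk -com
```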

Any help/advice/suggestions would be MUCH appreciated.

-ljb1
 

Simon Gaarthuis

New Member
What kind of tape drive are you using, and what is its speed?

Did you experiment with the -bf parameter of probkup?

Explanation of the -bf parameter according to Progress:
Indicates the blocking factor for blocking data output to the backup device. The blocking factor specifies how many blocks of data are buffered before being transferred to the backup device. NT uses a variable block size up to 4K. For all other operating systems, each block is the size of one disk block (1K on UNIX). The primary use for this parameter is to improve the transfer speed to tape-backup devices by specifying that the data is transferred in amounts optimal for the particular backup device. The default for the blocking factor parameter is 34.
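Applying the parameter is just one extra option on the command line. A hedged example with placeholder names; the optimal value depends on the particular device:

```shell
# Hypothetical example: back up to a tape device with a larger
# blocking factor than the default of 34:
probkup mydb /dev/tape0 -bf 128
```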
 

ljb1

New Member
Simon,
Thanks for your response.
However, we are not backing up to tape. (Well we are, but that is not the process that is taking so long.) We are backing up to disk. If -bf is primarily to improve disk to tape backup, it will not help. I will look into the possibilities, though....
Thanks again for your reply.
-ljb1
 

tonydt1g3r

New Member
Did you happen to find a solution to this problem? Currently we have a 22 GB DB and it also takes about two hours to back up. I want to know any possible way to make this backup go a lot faster. Any suggestions, or combinations, are welcome. We can spend money to buy whatever is needed; it just needs to be faster.
 

ljb1

New Member
tony,
No, we didn't find a solution to this problem. Instead, we resorted to a live (online) backup, which now runs at the time the cold backup used to run (less traffic).
If you happen to find a solution, please share.
Since we moved to a live backup, I haven't aggressively been searching for another solution, but would be more than grateful to know of another way.
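For reference, on 9.1D a live backup is the same utility with the online qualifier; a sketch with placeholder names:

```shell
# Hypothetical names; the database stays available to users while
# probkup runs:
probkup online mydb /backups/mydb_online.bk
```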
-ljb
 

BONO

Member
Hello,
Have you checked for antivirus interaction?
Does your hardware configuration store the .bk file on the same HDD as the database?
Is the system configured as RAID 1 or RAID 5 (see P102208)?
Are any other jobs running during the backup?
Windows tuning (see P44323, P38076, P14426)?
 

TomBascom

Curmudgeon
The database is 26GB? How big is the backup file?

You're backing up to disk right?

What happens if you just copy the files? How long does that take?

Are you storing the backup on the same disk that the database is on? What sort of disk is it? RAID5 or a single big disk is going to be pretty slow -- 2 hours doesn't sound all that wrong if that's the case. On the other hand if you've got a nice striped disk subsystem with lots of spindles it ought to go much faster than that...
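A sketch of the raw-copy timing test Tom suggests, demonstrated on a small scratch file. All paths here are made up; on the real system you would copy the actual database extents to the backup disk:

```shell
#!/bin/sh
# Create a small stand-in for a database extent (16 MB of zeros):
dd if=/dev/zero of=/tmp/db_sample.d1 bs=1M count=16 2>/dev/null

# Time a plain copy to the "backup" location. If this is also slow,
# the bottleneck is the disk subsystem, not probkup itself:
time cp /tmp/db_sample.d1 /tmp/backup_sample.d1

# Sanity check that the copy is byte-for-byte identical:
cmp -s /tmp/db_sample.d1 /tmp/backup_sample.d1 && echo "copy OK"
```

Comparing this wall-clock time against the probkup time tells you how much of the two hours is raw disk throughput versus backup overhead.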
 

joey.jeremiah

ProgressTalk Moderator
Staff member
How about a warm spare approach? I mean collecting the changes and rolling them onto a spare or backup database.

You can use either online incremental backups or after-imaging to collect the changes, and then roll them forward with the appropriate utility.
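The cycle described above might look roughly like this with the standard utilities. Database and file names are placeholders, and the scripting around shipping the file between machines is up to you:

```shell
# On production: an online incremental backup captures only blocks
# changed since the previous backup:
probkup online proddb incremental /backups/proddb_incr.bk

# Copy the .bk file to the spare machine, then restore it onto the
# warm-spare copy of the database:
prorest sparedb /backups/proddb_incr.bk
```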


We use online incremental backups, which run for a short time at night and sync a number of on-site and off-site databases. Those spares double as development and practice environments, but they can also be used for reporting and such, because the approach is so cheap and quick; and it's not just a copy, it's up and running.

We do this instead of AI (after-imaging) and the overhead that comes with it during the day's operations, which is OK for our needs in terms of availability, in case something terrible were to happen.

The backup scheme also has a simple safety mechanism, mainly through redundancy, in case one of the spare computers is turned off for a few days, gets disconnected, the power goes down in the middle, etc.

It also uses hard copies: basically, a compressed full backup is copied to new media and incrementals are added until the media is replaced, for obvious reasons. This also gives the ability to roll the backup forward up to a specified date.

Overall it's simple and stable, and it hasn't failed yet. It needs no child supervision besides looking it over whenever I get a chance and changing media every week or so (though one time I remembered a month later). It's something that always works, that you don't have to worry about, that's there for insurance, just in case.

You can also use Progress's Fathom Replication to do something similar.

Here are the numbers: we're using Win2K3 Server and OE 10B. The backup averages 10 GB, an incremental backup takes 20-30 minutes, and the rollup is done asynchronously.

The whole app is actually pretty simple, something like 10 procedures (.p's): 5 on the server and 5 on the clients.
 

joey.jeremiah

ProgressTalk Moderator
Staff member
Oops, I meant the database averages 10 GB, the database.

Plus, the full backup is 6 GB and an incremental backup is 20 MB max; most of the time it's much less.
 