Slow local DB Copy

Cringer

ProgressTalk.com Moderator
Staff member
I run a copy of our db from a portable hard drive for development and testing purposes. This all works fine, except that it runs pretty slowly at times, particularly when running in debug mode from OE Architect. Now I know this is partly down to the performance of my machine itself, but I'm sure some of it is latency in reading the drive. I was just wondering if there are any startup params for the dbs that I could look at to see if they make things quicker at all. Running OpenEdge Release: 10.2A03. Let me know if you need the current settings on the db...
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Is there a reason why you can't copy the db from the portable drive to your local drive?

Also, what kind of interface do you use to connect to the portable drive?
 

Cringer

ProgressTalk.com Moderator
Staff member
I can't just copy it to my local machine as I work from home 2 days a week, and the purpose of the drive is that I can work from the office or from home using the same DB. It's a USB 2.0 drive (300GB). I realise that this isn't going to be the most efficient way of accessing the system due to data transfer speed and latency, but at the moment it's the only option (we have around 200GB of db and I haven't got round to writing scripts to trim down the data after updating my copy from live :( ).

Tom, when you say the 'usual stuff' what do you mean?

Thanks for the responses.
 

RealHeavyDude

Well-Known Member
IMHO: When looking at tuning database performance from the hardware point of view you have to deal with four hardware resources: CPU, memory, persistent storage and network - in order from fastest to slowest. In your scenario you can discard the slowest, the network, as I assume you connect to the database via shared memory. That leaves persistent storage - in your case the USB 2.0 disk - as the slowest hardware resource in play. You will gain the most by tuning the slowest resource, which in your case means avoiding as much disk I/O as possible. Your best bet will be a -B as big as possible, to load as much of the database as possible into memory. Other than that you can fiddle with other settings, but I don't think you will gain anything noticeable without investing a lot of time.
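For illustration only - the path, database name and -B value below are made up, and assume a 64-bit machine with RAM to spare - starting the broker with a bigger buffer pool looks something like this:

# -B is specified in database blocks, not bytes
proserve C:\devdb\mydb -B 100000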

Heavy Regards, RealHeavyDude.
 

Cringer

ProgressTalk.com Moderator
Staff member
Cheers RHD - what's the best way to determine the biggest setting for -B?
 

RealHeavyDude

Well-Known Member
It depends on whether you are running on a 32-bit or 64-bit OS, and probably on the type of database license you are using. For sure you can't use more than 2GB of memory address space on a 32-bit OS - which leaves a little less than that for -B (as there are other things Progress loads into memory apart from the buffer pool, although the buffer pool is usually by far the biggest). -B is always specified in database blocks, so the block size matters. If your database has a block size of 8K you can divide 2GB by 8KB and you will get roughly the max -B (a little less, as mentioned before). You can only exceed that limit on a 64-bit OS - but, I think, anything other than an Enterprise database license will also limit you to the 2GB address space (the same is true for the 2GB file size limit). The other limit is the amount of free RAM on your machine that you can devote to it.
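To make that concrete, a rough worked example (this assumes an 8 KB database block size, which you should be able to confirm with proutil <db> -C describe):

# 32-bit address space ceiling:  2 GB = 2,147,483,648 bytes
# database block size:                         8,192 bytes
# 2,147,483,648 / 8,192 = 262,144 blocks -> theoretical ceiling for -B
# in practice set it a fair bit lower, to leave room for the rest of shared memory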

Heavy Regards, RealHeavyDude.
 

RealHeavyDude

Well-Known Member
Forgot: the type of database license you use, and whether large files are enabled on your 200GB database, might be a show-stopper. Once enabled, you can't revert to "small files". Effectively that means you can't run a database that has large files enabled with anything but an Enterprise database license. The only way to work around that is to dump & load into a structure that doesn't use large files.
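If you want to check whether large files are already enabled on a database, proutil's describe option should tell you (the exact wording of the output may differ by version):

proutil C:\devdb\mydb -C describe
# look for the line reporting large file / large database file support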

Heavy Regards, RealHeavyDude.
 

Cringer

ProgressTalk.com Moderator
Staff member
Thanks RHD - our system is made up of 5 DBs, although the master one is the most critical. I'll have a play with -B.

Out of interest the licenses I have are:

Product Name: Client Networking

Product Name: OE Enterprise RDBMS

Product Name: OpenEdge Architect

Product Name: OE Application Svr Ent
 

RealHeavyDude

Well-Known Member
Since you have the OE Enterprise RDBMS license you are not bound by any limit one could even think of when it comes to the database license ...

Heavy Regards, RealHeavyDude.
 

TomBascom

Curmudgeon
Small correction: WG will work with a "large files" db. You just cannot create one with WG.

The restrictions of WG are:
- no -spin
- no page writers
- only 65 users
 

TomBascom

Curmudgeon
As RHD indicates disk IO is the enemy and -B is your friend.

I asked about what sort of external drive because it helps quantify just how bad things might be. With a single drive connected over USB 2 you're going to be pretty limited. How long does it take to get the data onto the drive?

Is USB 3 available on the relevant machines? You might want to seriously consider an external drive that supports USB 3 if you can. And if money is not an issue get an external SSD ;)

You have an Enterprise license, so (see the rough command sketch after the list):

1) Set -B as high as you can get away with on your most active db.

2) Run APWs and a BIW

3) Put your -T files somewhere (anywhere!) other than on that USB drive.

4) You almost certainly have a multi-core processor in your local machine, so set -spin 5000.

5) Consider at least moving your bi files (use "prostrct repair") as well -- it would be a fairly painless thing to accomplish (don't forget to move them back when you are done). The downside to this is that you have to remember to do it whenever you change locations... and if you can remember that why not remember to copy the whole mess?
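Roughly, for 1) through 4), something like this - adjust database names, paths and numbers to suit, the figures here are only placeholders:

# server side: broker with a big buffer pool and spin locks
proserve C:\devdb\mydb -B 250000 -spin 5000

# Enterprise extras: a couple of APWs plus a BIW
proapw C:\devdb\mydb
proapw C:\devdb\mydb
probiw C:\devdb\mydb

# client side: -T is a client parameter, so add something like
# -T C:\temp to the session/connection startup parameters in OE Architect

For 5), from memory the drill is: shut the db down, physically move the .b1/.b2 extents to the local disk, point the bi extent paths in a copy of the .st file at the new location, then run "prostrct repair C:\devdb\mydb newlayout.st" so the control area picks up the new paths.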
 

Cringer

ProgressTalk.com Moderator
Staff member
Thanks Gentlemen. I will have a play as suggested. Sadly I'm not able to get my hands on USB 3.0 at the moment, and I am limited to this drive for now... :(
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
You can tweak your DBs as above, but it is still putting lipstick on a pig, IMHO. It may perform well enough for development, but to me testing involves developing a reasonable understanding of your code's performance characteristics. I have too often seen developers test on a server that is vastly different from the client's, or with a DB that is trivially small. Then, post-deployment, they wonder why the code is back to them for review because it ran like a dog in production.

At least you have large dev DBs, that's good. But if you want to do serious performance testing a consumer drive enclosure isn't going to cut it.

On a side note (per above discussion), I'm not aware of any -B limitation on Workgroup.
 

Cringer

ProgressTalk.com Moderator
Staff member
Thanks Rob - we have a UAT environment that we test major work on that sits on the old production server. It is a copy of live that is usually no more than a month old...

As for tweaking -B, that seems to have had a major impact. It wasn't actually set at all so was defaulting to 3000. I've upped it to 30000 for now and things seem to be a lot smoother. It's very tempting to invest in a USB 3.0 drive and a couple of PCI cards, but I shall wait until I go a bit more spare waiting for things before I do that ;)
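For anyone else doing the same, one place this kind of thing can live is a parameter file for the session - bearing in mind that -B only takes effect where the database is actually opened (single-user or at server startup), not on a client connecting to a broker that's already running. The file name and paths below are just placeholders:

# dev.pf - example parameter file for a single-user dev session
-db C:\devdb\mydb
-1            # single-user: the session opens the db itself, so -B applies
-B 30000      # buffer pool, in database blocks
-T C:\temp    # temp files on the local disk, not the USB drive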
 

Cringer

ProgressTalk.com Moderator
Staff member
lol it's highly unlikely!! Although at some point I'm going to need a bigger drive than 300GB, so when that time comes I will show the powers that be that USB 3.0 drives are not significantly more expensive than USB 2.0 ones.
 

RealHeavyDude

Well-Known Member
Not that it would help you or me, and please don't get me wrong: but somehow it makes it easier to deal with "advice-resistant powers that be" knowing that I am not the only one who gets the impression he is fighting windmills.

Heavy Regards, RealHeavyDude.
 