Question: Type II cluster size and D&L vs. multi-threaded For Each

As posted previously, we are in the process of converting one of our customers from 10.2B to 11.3.x (Red Hat). The DB will reside on SSDs, with BI and AI files on separate disks, and we have 24 cores and 32 GB of memory. We will have a weekend to do everything that needs to be done, and the DB must be up and running on Monday AM.

After doing some analysis of the current DB and using Tom's PowerPoint on Type II setups, I am considering moving some of the tables from the Type I area into their own Type II areas for better performance, as well as changing the records-per-block settings of some of the existing Type II areas. No problems there.

However, I'm not sure about the cluster sizes for the Type II areas. I've read the kbase articles as well as Tom's PPT, but it's not clear what makes sense. Some of these tables grow quickly (e.g. sales orders); some are rather stagnant as far as growth goes.

Question 1: what is recommended for cluster sizes on actively growing tables vs. tables that have lots of reads/updates?

Question 2: are there any benefits to making a small table with lots of reads (few updates) Type II vs. Type I, with its own storage/index areas?

The other thing I'm trying to determine is how to load the new DB.

I can easily restore the 10.2B db on the new server, then convert to 11.3.x. This has been tested so I have an idea of the timing.

To move to the new DB structure, I will have an empty DB built from a .st file with the layout I will use.
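For illustration, a pair of Type II areas in a .st file might look like the following sketch. All area names, area numbers, paths, and sizes here are hypothetical; the values after the colon are area-number, records-per-block;blocks-per-cluster:

```
# Hypothetical Type II areas -- adjust names, numbers, RPB, cluster
# size, extent paths, and extent sizes to your own analysis.
d "OrderData":20,128;512 /db/data/prod_20.d1 f 1024000
d "OrderData":20,128;512 /db/data/prod_20.d2
d "OrderIdx":21,64;8     /db/data/prod_21.d1 f 512000
d "OrderIdx":21,64;8     /db/data/prod_21.d2
```

A blocks-per-cluster value of 1 (or its omission) would make the area Type I, so every area intended to be Type II needs an explicit cluster size of 8, 64, or 512.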

Question 3: what is the preferred/fastest method to load the new DB? There are two methods I am considering:

1) The usual binary dump and binary load, followed by an index rebuild.


2) Connect to both DBs (using a sizable -B, etc.), then a multi-threaded load of the tables using FOR EACH / CREATE / BUFFER-COPY.
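Method 2 might be sketched per table like this (a minimal illustration, assuming the source and target DBs are connected with logical names `src` and `tgt`, and `Customer` is a placeholder table name; "multi-threaded" here means running one such session per table, or per group of tables, in parallel):

```
/* load-one-table.p -- minimal sketch of method 2 for one table.         */
/* Assumes source DB connected as "src" and target as "tgt";             */
/* "Customer" is a placeholder.  Run one session per table in parallel.  */

DEFINE VARIABLE copied AS INTEGER NO-UNDO.

FOR EACH src.Customer NO-LOCK:
    CREATE tgt.Customer.
    BUFFER-COPY src.Customer TO tgt.Customer.
    copied = copied + 1.
END.

MESSAGE copied "records copied.".
```

Note that the FOR EACH above commits one transaction per record; grouping many creates into one transaction (e.g. with an inner DO TRANSACTION block) is a common tweak worth testing for load speed.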

Any comments or suggestions would be most appreciated.

Cringer Moderator
Staff member
As far as I'm aware all of your tables should be in Type II storage areas. Other than that I can't give much advice. Sorry.


Well-Known Member
  1. If you are not sure which cluster size is appropriate, use 8. Use 512 only for very large tables that reside in their own storage area. So far I've never used 64.
  2. There is absolutely NO point in keeping any Type I storage areas. The performance gain you will get with Type II storage areas beats any possibly wasted space by far.
  3. I would definitely go with the binary dump and load approach; it will be much faster.

Heavy Regards, RealHeavyDude.


Well-Known Member
Regarding Type I storage areas, one thing I forgot: some features added to the database engine since OE10 require Type II storage areas, such as auditing and multi-tenancy.

Heavy Regards, RealHeavyDude.
FWIW -- when we originally configured the Type II areas for this DB, waaay back when, I admit I did not have a full understanding of the settings and benefits of Type II, so relatively static tables were left in the Type I data area. Since we are moving to this new server, I am revisiting the structure and moving all active tables to their own storage areas. (We have a number of tables in the DB to support all of our apps, but each customer picks and chooses which apps to use; this customer uses some but not all of them, and I was just going to leave those inactive tables in the Type I area.) And we aren't using Progress auditing or multi-tenancy for this customer, nor will we, as we already have our own auditing and multi-tenant capability in place.

Curious about index areas, though... how does one compute the RPB for them? Most of our tables have multiple indexes.

And thanks for the feedback!

Cringer Moderator
Staff member
Indexes should be in separate areas from tables, and you can calculate the RPB for them similarly to tables, using the dbanalys output. The attached code doesn't work in v11 because the dbanalys output format has changed, but you can see the principles of what it is trying to achieve.
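As a rough illustration of the arithmetic behind an RPB choice (this is a simplified heuristic, not an official formula; the mean record size should come from your own dbanalys output, and the 20-byte per-record overhead is an assumed approximation):

```
/* suggest-rpb.p -- heuristic sketch: pick the largest power of two      */
/* (capped at 256) no greater than the number of records that fit in     */
/* one block.  Inputs below are example values, not recommendations.     */

DEFINE VARIABLE blockSize   AS INTEGER NO-UNDO INITIAL 8192. /* 8 KB DB blocks   */
DEFINE VARIABLE meanRecSize AS INTEGER NO-UNDO INITIAL 90.   /* from dbanalys    */
DEFINE VARIABLE overhead    AS INTEGER NO-UNDO INITIAL 20.   /* assumed overhead */
DEFINE VARIABLE slots       AS INTEGER NO-UNDO.
DEFINE VARIABLE rpb         AS INTEGER NO-UNDO INITIAL 1.

slots = TRUNCATE(blockSize / (meanRecSize + overhead), 0).   /* 74 here */
DO WHILE rpb * 2 <= slots AND rpb * 2 <= 256:
    rpb = rpb * 2.
END.

MESSAGE "Suggested RPB:" rpb.   /* 64 for these example inputs */
```

Setting RPB too low wastes block space; setting it too high wastes rowids and can cause fragmentation on record growth, which is why rounding down to a power of two is the usual habit.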


Rob Fitzpatrick Sponsor
In future, if you have LRU latch contention, you may want to put small, heavily-read tables (and their indexes) into the alternate buffer pool. You can't do that at the object level if the objects are in Type I areas. The same goes for TDE, horizontal table partitioning, etc.

Type II areas also make object utilities run faster. There is *no* good reason to leave objects in Type I.

Rob Fitzpatrick Sponsor
For D&L, my answer is: test, test, test (and document the results). Whether you are on 10.2B or 11.3, make sure you are on the latest service pack; it matters a lot for idxbuild performance. And make sure you understand how to use the new (10.2B06+) idxbuild parameters.
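For reference, an idxbuild invocation using those newer tuning parameters might look something like the sketch below. Every flag value is illustrative only and needs to be sized to your hardware and tested; the DB name and temp path are hypothetical:

```
# Rebuild all indexes in proddb.  Values are illustrative, not advice.
#   -TB / -TM / -TMB : sort block size / merge buffers / merge block size
#   -SG              : number of sort groups
#   -thread 1 -threadnum 8 : enable multi-threading with 8 threads
#   -T               : directory for sort/temp files (use a fast disk)
proutil proddb -C idxbuild all \
  -TB 64 -TM 32 -TMB 1024 -SG 64 \
  -thread 1 -threadnum 8 -T /fastdisk/tmp -B 1024
```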
You read my presentation. Everything should be in Type II storage. No exceptions. That includes no exceptions for laziness.

If you think you have an exception please post the details (in a separate thread). Be prepared to be disappointed.

I use 8 blocks per cluster for tiny tables, and 512 for anything that is more than a couple of MB.

I occasionally use 64 for medium size indexes.
One thread, two topics?

Connect to both DBs (using a sizable -B, etc.), then a multi-threaded load of the tables using FOR EACH / CREATE / BUFFER-COPY.
That can be attractive in some cases. Someone might cover that at the Ontario PUG meeting next week ;)

Your assumption about using a sizable -B is probably mistaken.

But unless you've got a lot of time available to test various scenarios and alternatives, you are probably better off going with the binary approach. It is relatively simple and effective, and is usually faster, especially with the new idxbuild options.
I must apologize both for my ignorance of how and when Type IIs are used, and for the multiple-topic post. I was always under the mistaken impression that Type II areas could only contain one table per area. Looks like I'm way off base there and 100% wrong. (But yes, there are some tables that will be kept in their own areas for data and indexes.)

Tom, Rob, RHD, Cringer -- that was the ONLY reason I was going to have any Type I areas. I will be setting everything to Type II.

And Tom, thanks for the guidelines on cluster sizes. Since I don't live far enough north to attend the PUG meeting, I'd be really interested in your "attractive in some cases" seminar. I'm always willing to learn something new. :D

Cringer Moderator
Staff member
It's funny, isn't it, the misunderstandings we can have about stuff. There's so much to know about Progress. Presumptions and assumptions creep in all over the shop. At least you were wise enough to ask for advice. Many don't, and then you end up with shoddy code, or architecture, or both.
Cringer, it is indeed. I'm old enough to know there is plenty I don't know, even though I've been in this business for 40+ years, 15 of them with Progress.

The PEG used to be the go-to place, but I don't have the bandwidth or time for all that traffic. This thread (and others) keeps me coming back here, asking advice on things I don't know and helping when and where I can. I can't tell you all how much I appreciate the help and breadth of knowledge the experts here provide.


Well-Known Member
You're welcome, don't apologize; one can learn something new every day. At least you are not advice-resistant. When storage areas were first introduced with V9, I myself thought it would be a good idea to group the tables by their function, i.e. by application module. But that was before my first DBA training, during which I recognized how wrong I was, thanks to a really good trainer ...

Heavy Regards, RealHeavyDude.

Rob Fitzpatrick Sponsor
Larry, you may also get some benefit from the video of Tom's presentation of Storage Optimization Strategies from Exchange Online 2010. I've lost count of the number of times I've watched it. There's a lot of material to absorb and it took me a while to understand it all when I was first learning about Progress DB storage. There is also some interesting Q&A at the end.

Regarding index rebuild, there have been some conference sessions in the last couple of years that have gone into the details of the newly-added optimizations.
Rich Banville from PUG Challenge Americas 2012:
Still More Database Performance Improvements

Wei Qiu from PUG Challenge Americas 2013:
Index Rebuild Performance

Rob Fitzpatrick Sponsor
When designing your new structure it would be very helpful if you could capture table and index CRUD stats from the existing 10.2B database. Do it a few times so you can capture the deltas and get an idea of which objects are heavily read or fast-growing. Capturing this info requires the -tablerangesize and -indexrangesize parameters to be set properly, i.e. above the highest table number and highest index number, respectively. It also requires some ABL code to capture the stats.
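A minimal sketch of that ABL, reading the _TableStat virtual system table (run it twice and diff the output to get deltas; the same pattern applies to _IndexStat for index reads):

```
/* crud-stats.p -- sketch: snapshot table CRUD counters from the         */
/* _TableStat VST.  Requires -tablerangesize to be at least as high as   */
/* the highest _File-Number, or high-numbered tables will be missing.    */

FOR EACH _TableStat NO-LOCK:
    FIND _File NO-LOCK
         WHERE _File._File-Number = _TableStat._TableStat-id NO-ERROR.
    IF NOT AVAILABLE _File THEN NEXT.
    DISPLAY
        _File._File-Name              FORMAT "x(24)"
        _TableStat._TableStat-read
        _TableStat._TableStat-create
        _TableStat._TableStat-update
        _TableStat._TableStat-delete
        WITH WIDTH 120.
END.
```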
Rob, I wish I had known of Tom's video presentation earlier, because it explained the Type II layout and how it works beautifully. (Thanks, Tom!) I had reviewed his PowerPoint presentation, but I must admit the video made it extremely clear. (And there was even a rare sighting of Gus!)

I'm also reviewing the others as that will help immensely with my testing.

I've totally reworked our .st, and will be getting the stats for the next week so I can refine it.

Amazing how much info is out there, yet how hard it is to find.

You are, as always, a font of knowledge, pointers, and links, Rob. Your input is much appreciated, as is that of Tom, RHD, Cringer, and the whole helpful crew here.