Question: New to -B2

ron

Member
Hi ....

I've known about -B2 for a while - and now I'm about to get my hands dirty. I'd be grateful for a bit of help from those who have experience with this feature.

I understand the proutil enableB2 and disableB2 commands. Once an area is "enabled" - does that attribute stick? I mean does it persist through shutting-down and starting-up the DB broker? How about through a dump/reload?

I have read that the Schema Area is a fair candidate for -B2. Is that correct?

Ron.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
It would be helpful to know your current version and any imminent upgrade plans. Alternate Buffer Pool (ABP) is a pretty solid feature but there are one or two version-specific gotchas.

Also, it matters what your structure looks like. If you're considering ABP then you're on 10.2B or later, so you should also have all application tables, indexes, and LOB columns in Type II storage areas. If you do, you have more flexibility with ABP assignments: you can assign entire areas to ABP, but you can also assign at the object level for objects in Type II areas. For Type I areas, object-level assignment is not possible; you can only assign the entire area to ABP.

Yes, the schema area is a good candidate for ABP: it's small, pretty static, and often heavily read.

Once an area is "enabled" - does that attribute stick? I mean does it persist through shutting-down and starting-up the DB broker?
Area-level assignments are stored in _Area, which is located in the control area (dbname.db). Unlike some other area attributes, however, ABP assignments are not reflected in the structure file. Object-level assignments are stored in _StorageObject and are reflected in a schema dump (.df). Such .df files have a "bufpool=yes" entry in the trailer and a 'BUFFER-POOL "Alternate"' line in UPDATE TABLE/INDEX statements.

I think it's important to know these plumbing details so you can reason about the caveats and why they exist.
  • ABP assignments, both objects and areas, stick through a DB restart.
  • In a dump & load you create a new target from scratch with a .st, .df, and data files. For the reasons stated above, in this scenario object-level assignments stick (assuming of course they're in the .df) and area-level assignments do not. Make sure you update your D&L checklist and add "compare ABP assignments on source & target", e.g. by diffing proutil viewB2 reports from each. I've seen people allocate memory to -B2 and fail to realize they had no assignments in their newly-created DB.
  • In a prorest (of a backup with ABP assignments) overtop of an existing DB with area-level assignments, the control area is retained so area-level assignments stick. And as _StorageObject is one of the tables that was backed up, object-level assignments also stick.
  • In a prorest (of a backup with ABP assignments) that is not overtop of an existing database, object-level assignments stick and area-level do not.
  • If you are upgrading your database schema, e.g. from one version of an application schema to another, be careful about how you create the incremental .df. For example, if you do it by dumping the prod .df and diffing it against a version master schema-holder (one that may not have ABP assignments in it), the resulting incremental .df will contain UPDATE TABLE/INDEX statements that remove the object-level ABP assignments. Fortunately these are fairly easy to find, as they come at the end of the .df. Just as you should watch for potential data-loss statements like DROP TABLE and DROP FIELD in incremental .dfs, you should also watch for BUFFER-POOL "Primary" in UPDATE TABLE/INDEX statements.
  • ABP currently isn't available, and -B2 has no effect, in an OE Replication target.
Area-level assignments are done, as you know, with proutil enableb2; this must be done offline. Object-level assignments can be done in two ways: with the (tedious, confusing, fairly awful) Data Dictionary interface, or by creating a .df with the desired UPDATE TABLE/INDEX statements and applying it to the DB. Either approach can be done online or offline.

There are two performance benefits to using ABP: reducing LRU latch contention in the primary buffer pool and (hopefully) ensuring that the objects or areas you assign remain memory-resident in the ABP, even if you run queries against very large tables that would otherwise flush that valuable data out of memory.

Since 10.2B06/11.1, there is an easier and more comprehensive way to reduce LRU latch contention: use the -lruskips parameter. You can start your DB with -lruskips 100 and reduce your LRU latch locks to a fraction of what they used to be, given the same logical I/O. Nice and easy. So I would say that reducing latch contention shouldn't be your primary motivation for using ABP. But if configured appropriately it will still provide some marginal benefit there, so that's nice.

The primary reason for using ABP is to keep "hot" data memory-resident. I said "hopefully" above because it is up to you, the DBA, to ensure this is the case and remains so. This means you need to do some planning, specifically about what data to assign to ABP and therefore, worst-case, how large -B2 needs to be. It's not as easy as it should be or could be, and you should revisit your performance metrics in promon from time to time to ensure -B2 is still large enough.

What to assign to ABP:
  • the schema area
  • small, static, frequently-read tables; many applications have small reference tables (countries, tax codes, companies, etc.) that are very heavily-read
  • indexes of those small hot tables
  • TDE-encrypted tables and indexes, if they are heavily read; keeping the data in memory minimizes the overhead of encryption on disk write and decryption on disk read
  • (perhaps) the relatively small and slow-growing indexes of large and heavily-read, fast-growing tables that are themselves too large to assign to ABP
How to find heavily-read objects:
  • Set the statistics startup parameters (-tablerangesize/-indexrangesize) high enough, i.e. at or above the highest application table number and the highest application index number respectively.
  • Read the data in _TableStat and _IndexStat. Analyze it in descending order of reads to find the most-read objects. Cross-reference this with record counts from a dbanalys report so you have an understanding of table sizes. Also look at the create/update/delete activity to get a sense of which tables may be growing over time, and at what rate. For this purpose it's valuable to have several recent reports, perhaps monthly, so you can look at growth trends. (A sample query follows this list.)
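For example, a minimal ABL sketch of such a report (a starting point only; it assumes the range parameters above cover all your application tables):

Code:
/* list application tables in descending order of reads, with
   create/update/delete activity for growth trending;
   _TableStat._TableStat-Id maps to _File._File-Number */
for each dictdb._TableStat no-lock,
    first dictdb._File
      where _File._File-Number = _TableStat._TableStat-Id
        and _File._File-Number > 0
      no-lock
    by _TableStat._TableStat-Read descending:
  display
    _File._File-Name             format "x(24)"       label "Table"
    _TableStat._TableStat-Read   format ">>>>>>>>>>>9" label "Reads"
    _TableStat._TableStat-Create format ">>>>>>>>>>>9" label "Creates"
    _TableStat._TableStat-Update format ">>>>>>>>>>>9" label "Updates"
    _TableStat._TableStat-Delete format ">>>>>>>>>>>9" label "Deletes"
    with width 100.
end.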
Conduct a sizing exercise:
  • Schema area: easy; get its size on disk and divide by the DB block size (e.g. a 64 MB schema area with an 8 KB DB block size is 8,192 blocks). Add that to -B2.
  • Any other area: get its area HWM (in blocks) from _AreaStatus._AreaStatus-hiwater (a sample query follows this list). Add that to -B2.
  • Indexes: easy (if a bit tedious): get their size on disk, in DB blocks, directly from a dbanalys/idxanalys report. Add all those numbers to -B2.
  • Tables: very tedious! There's no block count in dbanalys, unless the table is alone in its own area, which is unlikely for ABP-candidate tables. My approach in v11 is:
    • Create a copy of the production DB via probkup.
    • Once the tables to be assigned have been chosen, write a program like this (a fuller sketch follows this list):
      for each table1 table-scan: end.
      for each table2 table-scan: end.
      etc.
    • Start the test DB with a fairly large -B.
    • Connect to it in a procedure editor, load the program.
    • Note the value of "used buffers" in promon R&D 1 7.
    • Run the program.
    • Note used buffers again, subtracting the first value from the second.
    • Add this to -B2.
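Two quick sketches to go with those steps; treat them as starting points and verify the numbers against promon on your version. First, the area HWM query, using the _AreaStatus VST mentioned above:

Code:
/* area high-water marks in DB blocks; for any area you assign
   to ABP, add its HWM to -B2 */
for each dictdb._AreaStatus no-lock:
  display
    _AreaStatus._AreaStatus-Areaname format "x(28)"      label "Area"
    _AreaStatus._AreaStatus-Hiwater  format ">>>>>>>>>9" label "HWM (blocks)".
end.

And the measurement program fleshed out. Here table1 and table2 are placeholders for your chosen tables, and I'm assuming the _BuffStatus VST's _BfStatus-UsedBuffs field matches promon's "used buffers" (check against R&D 1 7):

Code:
/* run against a COPY of the DB, freshly started with a large -B;
   TABLE-SCAN requires the tables to be in Type II areas */
def var v-before as int64 no-undo.
def var v-after  as int64 no-undo.
def var v-blocks as int64 no-undo.

find first dictdb._BuffStatus no-lock.
v-before = _BuffStatus._BfStatus-UsedBuffs.

for each table1 table-scan no-lock: end.  /* placeholder table names */
for each table2 table-scan no-lock: end.

find first dictdb._BuffStatus no-lock.  /* VSTs are re-evaluated on read */
v-after = _BuffStatus._BfStatus-UsedBuffs.

v-blocks = v-after - v-before.
display v-blocks label "Blocks to add to -B2".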
The sum of these numbers is your minimum -B2. Increase that number by some factor to allow for growth of those objects over time, say 20%. You can pick a number you're comfortable with based on your CRUD stats and how often you'll be checking in on this DB. Remember that if you let the total number of blocks read in these objects exceed -B2 then you will have (a) blocks evicted from ABP and (b) the LRU replacement mechanism enabled in ABP (and contention for the LRU2 latch) until you increase -B2 and manually disable the mechanism in promon.

There's more to say but that's the thrust of it. I did a presentation on ABP last year at my local PUG meeting. It would be worth it for you to review it to get up to speed and to learn a few tips and gotchas. This is a link to the slide decks and audio recordings from that day in one zip file. It contains my presentation and the others (on different topics) by Tom Bascom, Mike Furgal, and Rich Banville.

Let me know if you have any other questions.
 

ron

Member
Rob - thank you for such a detailed and informative reply! I will go through all of that carefully to make sure I understand whether anything creates problems for me.

The existing DB is 10.1C on Solaris - and we plan to migrate to 11.6 over the next few months. (The time will depend on resources for UAT.)
 
@Rob Fitzpatrick - I had the same question, as we are migrating from v9.1E to v11.6. We have a lot of small tables grouped into a single area, and not all tables would require -B2. How do I map -B2 to specific tables? (An example would be really helpful.)

I am not able to download the presentation you mentioned; not sure if it's been moved. It would be a great help if you could share a location where I can get that ppt, or upload the file here if possible. I know it's a big ask, sorry Rob.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
How do I map -B2 with specific tables?
As noted above, you have two assignment options for ABP:
  • Assign entire storage areas (Type I or Type II)
  • Assign individual storage objects, i.e. tables, indexes, LOB columns (they must be in Type II areas)
For objects, you have two ways of assigning:
  • Data Dictionary | Admin | Alternate Buffer Pool | Alternate Buffer Pool Maintenance
  • create a .df with the necessary UPDATE statements to change assignments for your chosen objects (I prefer this approach)
Here's a program I wrote to create a .df, given a list of tables. It assigns all of those tables and all of their indexes to ABP. You can edit the resulting file as necessary.
Caveat: this was written by a DBA so it is of dubious quality and has minimal error-handling. ;) The table list is hard-coded because I was lazy.

Code:
/*  abpassign.p
*
*  Given a list of table names, generate a .df file that assigns those
*  tables and their indexes to the Alternate Buffer Pool.
*
*  It is easier to assign objects this way than to fiddle with the
*  difficult Data Dictionary UI for assigning objects.
*
*  This .df can be applied to the database online or offline.  Note
*  though that if it is applied online, blocks for assigned objects that
*  are already memory-resident in the Primary Buffer Pool will remain
*  there until the database shuts down, or until they are evicted
*  and reloaded.
*
*  NOTE: This can only be done when the objects are located in Type II
*        storage areas.  It is assumed that this is the case.
*  NOTE: The Schema Area is Type I and so must be assigned via proutil
*        dbname -C enableb2.
*  NOTE: After the .df is applied, assignments should be verified with
*        proutil dbname -C viewb2.
*  NOTE: It is assumed (hard-coded) that cpstream is ISO8859-1.
*
*  Rob Fitzpatrick
*  06/23/2015
*/

def var v-tables as char no-undo.
def var v-table  as char no-undo.
def var i        as int  no-undo.

/* change this line as necessary;
   be sure to double-check the spelling of table names! */
v-tables = "customer,order,order-line".

output to value( ldbname( "dictdb" ) + "_abp.df" ).

do i = 1 to num-entries( v-tables ):
  v-table = entry(i, v-tables).

  find dictdb._file no-lock where _file-name = v-table no-error.
  if not available _file then next.

  put unformatted "UPDATE TABLE " + quoter( v-table ) skip.
  put unformatted '  BUFFER-POOL "Alternate"' skip(1).

  for each dictdb._index of _file no-lock:
    put unformatted "UPDATE INDEX " + quoter( _index-name )
      + " OF " + quoter( v-table ) skip.
    put unformatted '  BUFFER-POOL "Alternate"' skip(1).
  end.

end.

put "." skip.
put "PSC" skip.
put "bufpool=yes" skip.
put "cpstream=ISO8859-1" skip.
put "." skip.
put "0000099999" skip.

output close.

The output looks like this:
Code:
UPDATE TABLE "customer"
  BUFFER-POOL "Alternate"

UPDATE INDEX "Comments" OF "customer"
  BUFFER-POOL "Alternate"

UPDATE INDEX "Country-Post" OF "customer"
  BUFFER-POOL "Alternate"

<etc.>

.
PSC
bufpool=yes
cpstream=ISO8859-1
.
0000099999

Once you have the file created and checked for content, you apply it to the DB as you would any other .df. Does this answer your question?
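If you want to script the load step as well, the Data Dictionary's batch loader can apply the .df for you. A sketch, assuming you are connected to the target DB and the file name matches what abpassign.p generated:

Code:
/* apply the generated .df without going through the Data Dictionary UI;
   prodict/load_df.p is the Dictionary's batch load procedure */
run prodict/load_df.p ( ldbname( "dictdb" ) + "_abp.df" ).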

It would be of great help if you could share the location where I can get that ppt or pls do upload the file here
The audio recording of the session is 60 MB so it's too large to upload to this site. But I have uploaded the slide deck as a resource:
Alternate Buffer Pool - Theory and Practice
Let me know if you have any questions about it.
 

ron

Member
Back to -B2 after a few years!

Now on:
Linux 3.10.0-1160.2.2.el7.x86_64 x86_64
OE 11.7.4 Enterprise

I have re-read Rob's very informative post (above) -- and a number of Progress Articles -- but a few issues are still not clear to me.

We do online backups twice each day specifying -Bp 50. It appears that this will leave the LRU2 alternate buffer replacement policy ENABLED. If that is the case I would need to DISABLE it immediately after each backup. As far as I'm aware that can only be done with promon (R&D -- 4, 4, 3.). Is that correct? Can it not be done via a VST??

(I understand that the problem is fixed in 11.7.5.)

Ron.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
It appears that this will leave the LRU2 alternate buffer replacement policy ENABLED.
This is bug OE00208701/PSC00234422. I am rather surprised that, according to the KB article, it was re-introduced in 11.x and exists all the way up to 11.7.4. If that is true then your statement is correct. You can check the LRU2 replacement policy status in several ways:
  • in promon R&D 2 3
  • in promon R&D 4 4 3 (the first word toggles between "Disable" and "Enable" depending on the current status)
  • there is a message written to the DB log when the policy status changes:
    "LRU on alternate buffer pool now established"
  • programmatically:
Code:
find dictdb._actbuffer where _buffer-id = 3 no-lock.
display _buffer-lruenabled.

As far as I'm aware that can only be done with promon (R&D -- 4, 4, 3.). Is that correct? Can it not be done via a VST??
It can be done via promon; that can be scripted if you wish. Though in my testing in 11.6 you can also update _buffer-lruenabled programmatically. Just make sure you read the correct record (_buffer-id = 3)!
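Something like this, as a sketch; I'm assuming _Buffer-LRUEnabled is a simple 0/1 integer flag (verify with the DISPLAY above, and confirm the result in promon R&D 2 3):

Code:
/* disable the LRU2 replacement policy programmatically;
   _Buffer-Id = 3 is the Alternate Buffer Pool record.
   Assumption: 0 = disabled, 1 = enabled; check this on your version. */
find dictdb._ActBuffer exclusive-lock where _Buffer-Id = 3.
assign _ActBuffer._Buffer-LRUEnabled = 0.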

Trivia/advice:
For the menu option "Enable/Disable LRU2 alternate buffer pool replacement policy" in R&D 4 4, the numbering can change. It is typically option 3 in that menu. But when you use the "secret" command debghb at the R&D main menu, some of the screen and menu layouts change and so this option becomes number 7, not 3. Also, later in v12 it becomes option 7 by default.
The above is "trivia" if you don't use this option, or if you proceed slowly through the menus and read item descriptions. It is "advice" if you tend to fly through the menus from muscle memory by specifying option numbers, or if you create promon input scripts; the latter use menu option numbers as input.

Advice part 2:
In 11.3 I had a heck of a time dealing with ABP and LRU2 replacement policy at a client site. I could change the policy from enabled to disabled, check that it was done correctly, then check 10 minutes later and find it was enabled again. Don't assume that if you disable it, it will remain that way all day.
 
Hi
I had the same issue with ABP and LRU2 on our cloud.

It was due to SQL requests that read the empty blocks in the last cluster (8, 64 or 512 blocks) of a table assigned to the BP, not the ABP. The SQL engine put those blocks in the ABP.
When I changed the policy (disabled it), there were some blocks/records of tables assigned to the ABP which were not yet in the ABP; reading them evicted one of the empty blocks from the ABP and enabled LRU2 again.

Two workarounds:
- oversize -B2 to have enough room for the empty blocks of tables and indexes which are held in the -B.
- set -Bp for the SQL connection (UPDATE pub."_MyConnection" SET "_MyConn-NumSeqBuffers" = 6; COMMIT;)

Regards
Patrice
 
Hi Tom,


I did a test on several hundred DBs on our cloud that use this SQL treatment; there is no real difference in DB reads between "-Bp 6" and "-Bp 1000" (less than 1%). This is because:
- We are doing sequential reads to populate a data warehouse.
- There are no joins, and the index used has a very good "logical scatter factor" (we use this index to populate the table when we dump and load).

The total -Bp of all client sessions is limited to a maximum of 25% of -B; there can be 5 SQL processes running at the same time on a DB.
Some of our DBs have a very small -B (small DB size: 1-2 MB; our smallest -B is 150).

The "datawarehouse" team want a value for this parameter for all the db that use this SQL treatment.
(We have more then 12 000 Db on our cloud and they do not want to set different values for each DB. )


6 is a good value for us on our cloud for this SQL treatment (it works for all our DBs, with no performance issues).
Of course this is not true for everybody, and an analysis of -Bp must be done to set it correctly.

Patrice
 