Interesting questions on this topic... Techniques and links are at the bottom of this post.
a) Binary is the fastest possible way to DnL your db. If you see slowdowns in this process (Re: the first post from dopena), then look at system resource usage. Are you directing the dump to a different disk? Are you running anything else that is competing for CPU? Is (shudder) the disk you are writing to an appliance that is perhaps being used by other people?
b) You still need to do an index rebuild after a binary load.
c) You can do multiple binary DnL operations at the same time, but you need to carefully consider whether you want to do it, or need to do it. The trick is to know when you are going to saturate your critical resource, usually disk speed, and then step back just a bit. If you spread each dump operation to a different disk, and have a manager for the whole operation kicking them off as needed, you can perform some really fast database rebuilds. You can also reload to a new db (if doing them all) on a different disk, dumping from one db while loading to another (different tables, of course!) at the same time.
d) If you are doing the DnL for performance, you can usually get most of the benefits by just re-indexing!
e) RAM disk is cool, but you can run into some really monster tables sometimes, and that is a lot of RAM.
f) As Peter de Jong pointed out, there are scripts out there that demonstrate a multi-threaded DnL approach. They may also incorporate the v9 index rebuild operations. Be aware that the v9 index rebuild is single threaded and smacks the BI each time, so if you are using multiple threads you will be doing your index rebuilds at the end! For a single stream you could still do the index rebuild after each load.
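The multi-stream idea in (c) and (f) can be sketched as a small driver script. Everything in it is a placeholder of my own (the db name "sports", the table list, /dump1, and the batch size), and the proutil line is echoed rather than executed, so treat this as a plan generator to review, not a finished tool:

```shell
#!/usr/bin/env bash
# Sketch of a throttled multi-stream binary dump driver.
# ASSUMPTIONS: "sports", the table list, and /dump1 are placeholders
# for your site; the proutil command is echoed, not run, so you can
# eyeball the plan before committing your disks to it.

DB=sports
DUMPDIR=/dump1
MAXJOBS=2            # stay just below the point where your disk saturates

dump_all() {
    i=0
    for TABLE in customer order item; do
        # swap 'echo' for the real call once you have checked
        # CPU and (especially) disk headroom
        echo "proutil $DB -C dump $TABLE $DUMPDIR" &
        i=$((i + 1))
        if [ "$i" -ge "$MAXJOBS" ]; then
            wait         # simple batch throttle: MAXJOBS streams at a time
            i=0
        fi
    done
    wait                 # all streams finished before any index rebuild
}

dump_all
```

A fancier manager would start a new stream the moment one finishes instead of waiting out each batch, but the batch-and-wait shape above is the simplest safe version.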
Techniques:
In general I'm just tossing out loose information here; there are a lot of things to consider when doing binary DnLs, for both safety and optimization. Although you can write scripts and perform this as a largely automated process (some companies schedule these on a regular basis), the first couple of times will be rough, as there is a lot to think about. Investigate lots of sources before trying this (links at the bottom).
In vN to v7:
Use the proutil <dbname> -C dbrpr option, option 1 (Scan menu), suboption 6 (Dump Records) to dump the db. The size of this dump will be between 60% and 70% of the base db size. Suboption 7 performs the load. After you are done you need to rebuild ALL your active indexes.
In v8:
Use the proutil <dbname> -C dump <tablename> <dumplocation>.
In v9:
Like v8, but v9 also adds the ability to rebuild indexes on the fly during the proutil <dbname> -C load <tablename> operation.
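To make the v8/v9 syntax concrete, here is a sketch that emits a dump/load command pair per table. The db name "mydb", the table names, and the .bd dump-file naming are my own illustrations, and "build indexes" is the v9.1B load qualifier that kbase 20206 covers, so check the exact syntax against your release before running anything:

```shell
#!/usr/bin/env bash
# Sketch: emit a v9 dump/load command pair for each table.
# ASSUMPTIONS: "mydb", the table list, and the .bd file naming are
# illustrative; "build indexes" is the v9.1B load option from
# kbase 20206 -- verify against your release.

DB=mydb
DUMPDIR=/dump1

gen_cmds() {
    for TABLE in customer order-line; do
        echo "proutil $DB -C dump $TABLE $DUMPDIR"
        # load it back, rebuilding that table's indexes on the fly
        echo "proutil $DB -C load $DUMPDIR/$TABLE.bd build indexes"
    done
}

gen_cmds
```

Redirect the output to a file, read it over, and run it when you are happy with the plan.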
Links: In the Progress knowledgebase there are some really good articles -
12994 Binary dump and load (dbrpr RM Dump and Load) instructions (Must read!)
20206 Building Indexes with Binary Load in Progress Version 9.1B
17528 Binary Dump With Tables Larger Than 2 GB
43744 Binary load did not load past 2 gigabytes
18008 4GL to Create Binary Dump/Load Scripts From Metaschema
Others: Yikes! I couldn't find any other nice links with more details!