[Progress Communities] [Progress OpenEdge ABL] Forum Post: A hundred million rows?

Status
Not open for further replies.

dbeavon

Guest
I'm having trouble with a table that has roughly a hundred million rows. Is that the limit of what the DBMS is capable of? Adding new indexes, for example, is extremely painful. On top of everything else, I think I've run into a new bug where the OE database refuses to create an active, unique index via a DDL statement. The statement runs for about 20 minutes, then the disk I/O rate slowly drops to zero while the CPU stays pinned at 100% indefinitely. I knew that adding an active index via DDL would take much longer than using proutil/idxbuild, but I didn't imagine it would run forever!

In SQL Server I can handle a hundred million rows (per year) fairly easily, especially if I give the table a clustered index of my own choosing (usually some kind of custom surrogate key).

We are using the Enterprise edition of the OE RDBMS, but we haven't purchased the "table partitioning" feature yet. I suppose that is the next step forward, right? Can someone tell me how my table compares to their biggest table (in a single database *without* partitioning)? I'd also like to know how you would suggest adding an index to it, assuming a large outage window, e.g. 10 hours or so.
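For reference, the kind of DDL statement I mean is along these lines (the table, column, and index names here are placeholders, not my real schema):

```sql
-- Hypothetical names; the real table has ~100 million rows.
-- This is the shape of statement that runs ~20 minutes and
-- then sits at 100% CPU with no disk activity:
CREATE UNIQUE INDEX idx_order_line_key
    ON PUB.order_line (order_num, line_num);
```

The alternative I was comparing it against is adding the index as inactive and then rebuilding it offline with `proutil <db-name> -C idxbuild`, which at least completes.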
