[Progress Communities] [Progress OpenEdge ABL] Forum Post: RE: A hundred million rows?

Status
Not open for further replies.
dbeavon

Guest
I reported this case to Progress. My table is under 100 million rows, but it is still quite difficult to manage. If Progress doesn't intend for large tables to be managed via SQL statements, then hopefully that can be documented in the KB; I've never heard anything to that effect.

>> If I had a 10 hour outage window to work with I would add the index inactive and do a simple offline proutil idxbuild

Would it change the answer if you had to break replication and re-enable it afterwards (and your database was in the ballpark of 1 TB)? This becomes a difficult compromise: the idxbuild itself runs fast, but having to re-enable replication is a pain (although that part can be done outside the outage window).

>> PUB.table 24,987,814,259

I would still love to hear more about this table! I believe you, but it is hard to envision how we could ever manage a table like that ourselves, and I want the whole story in case I ever need to retell it to someone. The main thing I'm wondering is how many years of data this includes, and whether you keep a rolling window of dates or let the data grow indefinitely. I'm also wondering whether you would consider simply archiving or deleting much of this data in the event of a schema change (such as a new index, or a new column derived from pre-existing columns).
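For readers unfamiliar with the "add the index inactive, then offline idxbuild" approach mentioned above, a rough sketch of the command sequence is below. This is an illustrative outline, not a tested procedure: the database name ("sports") and the tuning values are placeholders, and any replication disable/re-enable steps for an OpenEdge Replication setup are deliberately omitted since they depend on the site's configuration. Consult the proutil idxbuild documentation before running anything like this.

```shell
# Hypothetical offline index rebuild, assuming the new index was already
# added to the schema as INACTIVE and the database is named "sports".

# 1. Shut the database down (batch mode, no prompts).
proshut sports -by

# 2. Rebuild indexes offline. "all" rebuilds every index; the utility can
#    also be driven interactively to pick specific tables/indexes.
#    -TB/-TM/-B are sort/merge/buffer tuning knobs; values here are
#    placeholders, not recommendations.
proutil sports -C idxbuild all -TB 64 -TM 32 -B 1000

# 3. Bring the database back online.
proserve sports
```

The appeal of this route, as the post notes, is that the idxbuild step itself is very fast compared to building the index online; the cost is the outage window plus, in a replicated environment, the extra work of breaking and later re-baselining replication.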
