Table repair question


I am trying to retrieve some data from a couple of tables in an old Progress 10.0B database which was then fixed to work in Progress 10.2B (of which I currently have the Evaluation copy). I have managed to get about 90% of the data, but there are 14 tables that elude me. I am using the Progress 10.2B ODBC driver on Windows Server 2003 Enterprise Edition.

I have tried to download the data with both R and the DTS Wizard (SQL Server). The first crashes when I try to access the data; the second just throws numerous error messages and refuses to return any data.

Here is what I have done so far:

From 6 tables I cannot read any data at all. I can see the structure using Data Administration but cannot dump the data to file, as this is not allowed in the Evaluation copy (hence my use of ODBC to pull the data out).

From 8 tables I cannot read all the data at once, but I can read some of the initial records; the number varies per table.

On all 14 tables I can get the number of records in the table.

Both of these symptoms suggest to me that the tables are somehow corrupted.

Running proutil dump on them, I can produce binary dumps of most of the tables, which is a bit strange. However, on 3 of them I get messages such as:

"Could not find record. Recid 11297795, error number 8, return -1. (12011)"

These 3 tables were repairable by fixing the index (proutil [db] -C idxfix). But the rest are still inaccessible using ODBC.

Can someone tell me if there are any table fixing routines which could fix these issues?


"fixed to work with"???

What was broken and how was it "fixed"? That might shed some light on the source of the problem and thus the solution.
Thanks for your reply. Not sure what you mean, though; I couldn't find the string "fixed to work with" in my message.

So far I have found that I can access certain fields and certain early rows, so I am writing a function to go row by row until it hits the one that causes the exception. I still have not found any way to repair the actual data table rather than the indexes.
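A plain row-by-row scan can be slow on a large table; since `SELECT TOP n` succeeds up to the first corrupt record and fails from then on, a halving search finds that record in a logarithmic number of queries. A minimal sketch: the probe is simulated here with a known bad row, but in practice `read_ok(n)` would run `SELECT TOP n * FROM PUB."tablename"` over the ODBC connection (e.g. via pyodbc, not shown) and report whether the query survived.

```python
# Bisect for the first row that the driver cannot read.
# read_ok(n) must be True while the first n rows read cleanly and
# False once the range includes the corrupt record.

def first_bad_row(read_ok, total_rows):
    """Return the 1-based index of the first unreadable row, or None."""
    if read_ok(total_rows):
        return None                      # whole table reads fine
    lo, hi = 0, total_rows               # invariant: lo reads OK, hi fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if read_ok(mid):
            lo = mid
        else:
            hi = mid
    return hi

# Simulated table: 5000 rows, row 1234 is corrupt,
# so TOP n fails as soon as n >= 1234.
probe = lambda n: n < 1234
print(first_bad_row(probe, 5000))   # -> 1234
```

With ~13 probes instead of up to 5000 single-row fetches, this also limits how often the client has to be restarted after a segmentation fault.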

tamhas Sponsor
"Fixed to work" is in the first line. It could mean anything from having just installed it and started using it, to having done some repair.

What error messages are you getting?
Aha - you're quite right - so much for the search facility in Opera. Anyway, I did the following to get the database to work with 10.2B:

1) cd %DLC%\bin\101dbutils
2) set DLC=<path to the DLC installation>\bin\101dbutils
3) set PROMSGS=%DLC%\promsgs
4) set PROCFG=%DLC%\progress.cfg
5) set PROCONV=%DLC%\convmap.cp
6) set PATH=%DLC%;%PATH%
7) cd <directory with database>
8) 101a_dbutil <database name> -C truncate bi
9) 101a_dbutil probkup <database name> <backup file or device>

After this I was able to open the database using Data Administration and was able to get a connection to it through ODBC.

I am not getting any error messages, just a segmentation fault when I try to retrieve the offending row or any row after it.


tamhas Sponsor
OK, so "fix" did not involve any repair. So, what about the error messages? E.g., are there perhaps hyphens or other illegal characters in the problem file names? If so, have you tried enclosing file and field names in double quotes?
The tables all have legal filenames, if that's what you mean. I get table data via the following SQL:

SELECT * FROM PUB."tablename"

If I do, for example:

SELECT TOP 1 * FROM PUB."tablename"

Then I get 1 row.

However, if I do, say,

SELECT TOP 100 * FROM PUB."tablename"

then I get a segmentation fault and have to close the application down.
Attempting to access the table through ODBC from R gives this message from the Visual Studio JIT debugger:

An unhandled win32 exception occurred in Rgui.exe [12720].

Attempting to access the same table with the DTS Wizard gives:

Error 0xc02090f5: Data Flow Task: The component "Source - Query" (1) was unable to process the data.
(SQL Server Import and Export Wizard)

Error 0xc0047038: Data Flow Task: The PrimeOutput method on component "Source - Query" (1) returned error code 0xC02090F5. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
(SQL Server Import and Export Wizard)

Error 0xc0047021: Data Flow Task: Thread "SourceThread0" has exited with error code 0xC0047038.
(SQL Server Import and Export Wizard)

Error 0xc0047039: Data Flow Task: Thread "WorkThread0" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
(SQL Server Import and Export Wizard)

Error 0xc0047021: Data Flow Task: Thread "WorkThread0" has exited with error code 0xC0047039.
(SQL Server Import and Export Wizard)

tamhas Sponsor
Have you tried using the Progress sqlexp utility?

I was hoping for a Progress error number, but those all seem to be something else.
Unfortunately it does not give me a Progress error number. I have just done another test and found I can retrieve all the data apart from 1 field per table. So I could grab the data one column at a time, I suppose.
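A sketch of that column-at-a-time idea: one SELECT is issued per column, so the single field that blows up can be skipped or fetched separately. It assumes the OpenEdge SQL engine accepts ROWID in the select list (so the per-column result sets can be matched back into rows afterwards); the table and column names below are illustrative, not taken from the real schema.

```python
# Build one quoted SELECT per column, excluding the known-bad field(s).
# ROWID is included in each statement so rows from separate per-column
# pulls can be joined back together on the client side.

def per_column_selects(table, columns, skip=()):
    """Return a list of single-column SELECT statements for `table`."""
    return [
        'SELECT ROWID, "{0}" FROM PUB."{1}"'.format(col, table)
        for col in columns
        if col not in skip
    ]

for stmt in per_column_selects("customer", ["custnum", "name", "notes"],
                               skip={"notes"}):
    print(stmt)
# -> SELECT ROWID, "custnum" FROM PUB."customer"
#    SELECT ROWID, "name" FROM PUB."customer"
```

Each generated statement can then be run over the same ODBC connection, and the bad column fetched last (or row by row) once the rest of the table is safely out.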

Thanks for the tip about the sqlexp utility, which I had not found yet.

Running SELECT * FROM PUB.[table] gives:

=== SQL Exception 1 ===
[DataDirect][OpenEdge JDBC Driver][OpenEdge] Column notes in table has value exceeding its max length or precision.

Any way of fixing this?

Casper Moderator
Staff member
The error about max length or precision can be fixed by running dbtool (option 2). It is also wise to run update statistics:

update table statistics;
update index statistics;
update column statistics;
You must run update statistics with DBA rights (you can use the default sysprogress DBA account for that, or any other user with DBA rights).

Thanks for your reply.

I am trying to do this but get the following when I run the update index or update column statements:

=== SQL Exception 1 ===
[DataDirect][OpenEdge JDBC Driver][OpenEdge] Lock table is full. (7870)

I tried changing my ODBC isolation level to READ UNCOMMITTED and restarting sqlexp.bat, but it still happens. Is there a way to check on the command line that READ UNCOMMITTED is set and has not been ignored?
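If memory serves, the isolation level can also be set explicitly inside the sqlexp session, rather than relying on the DSN setting being picked up; a sketch:

```sql
-- Commit any open work first: the level cannot change mid-transaction.
COMMIT WORK;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
```

Note, though, that update statistics itself writes to the schema, so a lower read isolation level may not reduce the locks it takes, and raising -L may still be necessary.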

tamhas Sponsor
You are getting this from update statistics?

What is the -L on the database?

What does it say in the database log?

You have sqlexp working now? What error message do you get accessing the problem table/field from sqlexp?
Just set the line "-L 15000" in %DLC%\ and restarted the server in the Progress Explorer Tool. Restarted sqlexp and I am still getting the "Lock table is full" error. Checked the log file; it said to increase the -L number, so I put it up to 50000.

Restarted server and sqlexp.bat and still get:
[2010/08/08@22:55:35.812+0100] P-8356 T-9552 I SRV 3: (915) Lock table overflow, increase -L on server

Is there any way to query this from sqlexp.bat so I can see that the new lock number has taken effect?

@tamhas: The message I got when I did a SELECT statement in sqlexp.bat was:

=== SQL Exception 1 ===
[DataDirect][OpenEdge JDBC Driver][OpenEdge] Column notes in table has value exceeding its max length or precision.

tamhas Sponsor
Re the lock table: you can look in the database log where the server starts; it will tell you the values of all the parameters it set.

The message you are getting on the SELECT indicates that the contents of the Notes field exceed the SQL Width for the column. Use dbtool as Casper indicated to fix the width to match the data. This is pretty common for things like Notes fields: the default display format in the dictionary will be set to something like 40 or 80, so that one can do simple FOR EACH statements and get something to print, but the application actually connects that field to an editor widget or similar, which accepts a considerably longer string.
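If dbtool's fix option keeps missing a column, the SQL width can also be raised by hand with the PRO_SQL_WIDTH extension in OpenEdge SQL (run with DBA rights); a sketch, with the widths chosen generously rather than measured:

```sql
-- Widen the SQL width of the offending columns well past the longest value.
ALTER TABLE PUB."tablename" ALTER COLUMN "notes" SET PRO_SQL_WIDTH 8000;
ALTER TABLE PUB."lc" ALTER COLUMN "disposaltype" SET PRO_SQL_WIDTH 2000;
COMMIT WORK;
```

The update statistics log lines (like the one for PUB."lc" below) identify exactly which table/column pairs still need widening.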
Managed to update the lock settings in the Progress Explorer Tool under the default configuration, and managed to run all the update commands.

Unfortunately I still get

=== SQL Exception 1 ===
[DataDirect][OpenEdge JDBC Driver][OpenEdge] Column notes in table has value exceeding its max length or precision.

when I try to access the records of this table.

In the log it does come up with lines such as:

[2010/08/08@23:10:40.656+0100] P-4136 T-5804 I SQLSRV2 1: (9378) Progress/SQL Update Statistics:Table PUB."lc" at Rowid 0000000000004664236 has Column disposaltype whose value exceeding its max length or precision

I am not sure it does anything other than detect the problem, though.

Feels like I'm getting close but still no cigar...