blob fields

jennid

Member
I posted this in the dev forum too. We recently upgraded our test environment from OpenEdge 10.0C to 10.2B003. We have a BLOB field in our database, and our sister company has an application that connects to the OpenEdge database via ODBC and reads that field. My understanding is that they take the hexadecimal value of the blob and convert it into its string text value. This worked fine in 10.0C.

After upgrading to 10.2B and installing the new ODBC drivers, the other application can still connect to the 10.2B database and reads the first 1024 bytes of the blob accurately. After that, they tell me, some (but not all) of the remaining bytes are missing. I noticed a new option in the 10.2B ODBC driver called "Use Wide Character Types". We tried turning that on, but apparently it didn't change anything.
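If it helps, the read on their side amounts to something like this (a Python/pyodbc sketch just for illustration; their app isn't Python, and the DSN, table, and column names here are made up):

```python
import pyodbc

# Made-up DSN and credentials for the 10.2B test database.
conn = pyodbc.connect("DSN=oe102b;UID=user;PWD=secret")
cur = conn.cursor()

# Fetch the blob for one record and count the bytes that actually arrive.
cur.execute("SELECT blobfld FROM PUB.mytable WHERE id = ?", 1)
row = cur.fetchone()
data = row[0] if row else b""  # pyodbc returns the blob as a bytes object

print(len(data), "bytes received")  # in 10.0C this matched the full record
text = data.decode("utf-8", errors="replace")  # their hex-to-text step, roughly
```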

Does anyone have any ideas on what the problem might be? I'm told that nothing has changed in their environment and that the new ODBC driver and the upgrade to Progress 10.2B are the only changes.

Thanks.
 

medu

Member
I'm told that nothing has changed in their environment and that the new ODBC driver and the upgrade to Progress 10.2B are the only changes.

The new 10.2B ODBC driver was installed on the "client" side, right? And there's no error on the client side? How did they determine that some bytes are missing? From what I saw of the JDBC driver, PSC uses a streaming mechanism for BLOBs, where data is streamed to the client on request... I suspect some optimization is in place specifically for CLOB/BLOB fields, and that might simply be a bug. Have you tried reporting it to tech support?
 

jennid

Member
Yes, the new driver is installed on the client side. They determined that bytes are missing by creating an identical transaction in the old 10.0 environment. When they read that record there, they see X bytes; the exact same data in the 10.2B environment shows only about a quarter of those bytes.

I haven't reported it to support yet; that's the next step. I just wanted to do a little research first to make sure we aren't missing something stupid/obvious.

Thanks.
 

medu

Member
They determined that bytes are missing by creating an identical transaction in the old 10.0 environment.

Does this mean they actually tried to update that blob field, or did they simply read the same record from both environments and get different results?

Anyway, this shouldn't be affected by the wide-character setting, since we are talking about binary data... You should test one thing at a time. Start with the read operation: first save the blob field from both the old and new environments to files and check whether they are the same (it might be a dump/load issue). Then, on the client side, have them simply dump the blob field to flat files when reading from the old and from the new environment (see the sketch below), and compare the files they get with what you got when reading the same records with the ABL client.
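On the ABL side, COPY-LOB ... TO FILE should do the dump. On the client side, something like this would do (a pyodbc sketch; connection strings, table, and column names are placeholders, and whatever language their app actually uses, the idea is the same):

```python
import pyodbc

def dump_blob(conn_str, out_path, rec_id):
    """Read one blob over ODBC and write the raw bytes to a flat file."""
    conn = pyodbc.connect(conn_str)
    cur = conn.cursor()
    cur.execute("SELECT blobfld FROM PUB.mytable WHERE id = ?", rec_id)
    row = cur.fetchone()
    data = row[0] if row and row[0] is not None else b""
    with open(out_path, "wb") as f:
        f.write(data)
    conn.close()
    return len(data)

# Dump the same record from both environments, then diff the files
# against each other and against the ABL COPY-LOB dumps.
print(dump_blob("DSN=oe100c;UID=user;PWD=secret", "blob_100c.bin", 1))
print(dump_blob("DSN=oe102b;UID=user;PWD=secret", "blob_102b.bin", 1))
```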

If reading is OK and there is some write activity from the other side, then try the reverse: have them write the data they want to save to flat files as well (sketch below), and check what actually ends up in the database by using the ABL client to dump the blob field to flat files... compare the results from both environments.
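Same idea for the write path, roughly (again, all names are placeholders):

```python
import pyodbc

# Keep a flat-file copy of exactly what the client sends...
with open("payload.bin", "rb") as f:
    payload = f.read()
with open("sent_102b.bin", "wb") as f:
    f.write(payload)  # what the client intended to store

# ...then write the same bytes through ODBC; afterwards dump the stored
# blob with the ABL client (COPY-LOB) and diff it against sent_102b.bin.
conn = pyodbc.connect("DSN=oe102b;UID=user;PWD=secret")
cur = conn.cursor()
cur.execute("UPDATE PUB.mytable SET blobfld = ? WHERE id = ?",
            pyodbc.Binary(payload), 1)
conn.commit()
conn.close()
```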

You might have to do all that anyway in order to have a solid case for tech support... let us know if it's a bug ;)
 