Resolved: Strange behaviour leading to error 11353

rzr

Member
OE10.1C / Windows 7.

One of my team mates ran into error 11353 while updating a field of a maintenance table we have.

The total length of the fields in an index exceeds max key size. Index <index-name> of table <table-name> (129) (11353)

Here is the code:

Code:
FOR EACH Table EXCLUSIVE-LOCK
    WHERE Table.Field1 = "xxx"
      AND Table.Field2 = "yyy":
    ASSIGN Table.Field3 = Table.Field3 + ",value1,value2".
END.

Additional Information:
1. There are only two records in Table that match the WHERE clause.
2. The total length of the fields in the index is less than 1970 characters (including value1 & value2).
3. Index is on Field1 + Field2 + Field3.

I found this issue strange because I would expect the FOR EACH to execute only twice, as only two records matched the WHERE clause. Instead, the FOR EACH kept iterating on the "first" record and kept updating the Field3 value until it ran into the 11353 error.

I am not able to figure out why the FOR EACH would hold onto the first record and keep iterating on it :(:(
 

Cringer

ProgressTalk.com Moderator
Staff member
Does it do the same if you define a named buffer for the table, do the FOR EACH on the unnamed buffer with NO-LOCK, and then FIND the record on the named buffer EXCLUSIVE-LOCK for the update? I wonder if it's to do with the fact that you are updating an indexed value within the same transaction the query is on?
 

oli

Member
I guess the reason lies in the fact that, as mentioned, the index is on Field1, Field2 and Field3.
Since you update Field3 inside the loop, you get this strange behaviour.

Try any of the following:
- Specify the USE-INDEX option with an index that does not involve Field3;
- Replace the FOR EACH by:
Code:
  REPEAT PRESELECT EACH ...:
    FIND NEXT ...
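To flesh out the second suggestion: PRESELECT materializes the result list before any record is updated, so later changes to the index keys cannot feed new entries back into the query. A minimal Python sketch of that idea (a toy model, not ABL; the rowids and values are made up):

```python
# Simulate PRESELECT semantics: the set of matching rowids is fixed
# up front, so updating the indexed value cannot grow the result list.
records = {1: "USA", 2: "USA"}   # rowid -> Field3 value

preselected = list(records)      # snapshot of the matching rowids
for rid in preselected:
    # Each record is visited exactly once, however its key changes.
    records[rid] += ",value1,value2"
```

In ABL terms, the REPEAT PRESELECT ... FIND NEXT pattern plays the role of the snapshot.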
 

TomBascom

Curmudgeon
Are you under the impression that:

2. The total length of fields in index is less than 1970 character (including value1 & value2)

Somehow makes the error message ("The total length of the fields in an index exceeds max key size.") misleading or invalid?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Another possibility: is this a DB created in a pre-10.1B release, which doesn't have large index key support enabled? It may be that the limit actually in effect is the old limit of 192 bytes. You can check it with a proutil describe.
 

rzr

Member
1. Cringer & Oli - I will try your recommendations
2. Tom - When I add up the field values they don't exceed the limit. But when the FOR EACH executes, it keeps iterating only on the "first" record and blows past the limit, leading to error 11353.
3. Rob - This Db does not have large index key support enabled.
 

rzr

Member
The code below worked. Thanks all for the suggestions. But I'm still not convinced as to why the FOR EACH would keep looping on the first record.

Code:
DEFINE BUFFER RZR FOR Table.

FOR EACH Table NO-LOCK
    WHERE Table.Field1 = 'XXX'
      AND Table.Field2 = 'YYY':

    /* Strong-scope the update to the named buffer, separate from the
       NO-LOCK query buffer. */
    DO FOR RZR:

        FIND RZR WHERE ROWID(RZR) = ROWID(Table) EXCLUSIVE-LOCK NO-WAIT NO-ERROR.
        IF AVAILABLE RZR THEN
            ASSIGN RZR.Field3 = RZR.Field3 + ",Value1,Value2".
    END.

END.
 

rzr

Member
From the error message I got, it said the limit was 1970 characters. We are on 10.1B (not 10.1C as reported earlier).
I do not know if the database was created in 10.1B or it was a pre 10.1B database that was converted using conv910 utilities.

When I open up the Data Dictionary and go to the Index section, at the bottom it says that large key support is not enabled.
 

TomBascom

Curmudgeon
You are misinformed. The key length limit (without large keys) is less than 200. It varies a bit based on the number of fields and other overhead, so 180 or so is a good working value for the usual limit.

I think you should pay more attention to the error message.
 

rzr

Member
Thanks Tom ! I stand corrected.

This is the error I got. I think I picked up the number 1970 when checking for 11353. When I re-read that article now, I see that 1970 is the limit with large keys enabled.
11353_2.jpg
11353.jpg
 

Kalan

Member
EVER-INCREASING INDEX CURSOR TRAP. Beware of incrementing fields that are indexed at an unbracketed level in the index you are using. This can result in continuous rereading and reincrementing of the same records, which will continue until the database crashes because the AI and BI filesystems fill up. If you are modifying indexed fields in a query, consider carefully whether a reread could occur.
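The trap can be sketched in a few lines of Python (a toy model, not ABL; the sorted list stands in for the index bracket, and the 60-byte limit and values are made up):

```python
import bisect

# Toy model of the ever-increasing index cursor trap: the index is a
# sorted list of (key, rowid) entries and the cursor walks it in key
# order. Updating the indexed field deletes the old entry and inserts
# a larger one that still falls inside the same bracket, so the cursor
# keeps finding "new" entries to visit.

MAX_KEY = 60                      # stand-in for the max index key size
records = {1: "USA", 2: "USA"}    # rowid -> indexed field value
index = sorted((v, rid) for rid, v in records.items())

iterations = 0
error = None
pos = 0
while pos < len(index):
    key, rid = index[pos]
    iterations += 1
    new_val = records[rid] + ",a,b"
    if len(new_val) > MAX_KEY:
        error = "(11353) key exceeds max key size"   # simulated error
        break
    records[rid] = new_val
    # Index maintenance: the old key entry is deleted and a new one is
    # inserted in key order -- further along, still inside the bracket.
    index.pop(pos)
    bisect.insort(index, (new_val, rid))
```

Only two records match, yet the loop runs far more than twice and ends only when the simulated 11353 fires.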
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
I gave this a try (10.2b07)...

Start with a sports DB. Add this incremental:
Code:
ADD INDEX "city-state" ON "Customer"
  AREA "Customer Index Area"
  INDEX-FIELD "City" ASCENDING
  INDEX-FIELD "State" ASCENDING
  INDEX-FIELD "Country" ASCENDING

.
PSC
cpstream=ISO8859-1
.
0000000166

Then show the records to be updated, in their initial state:
Code:
for each customer where city = "Boston" and state = "MA":
  display city state country.
end.

City         State                Country  
──────────── ──────────────────── ────────────────────
Boston       MA                   USA  
Boston       MA                   USA  
Boston       MA                   USA

Now update them:
Code:
for each customer exclusive where city = "Boston" and state = "MA"
  use-index city-state:
  assign customer.country = customer.country + ",a,b".
end.

This throws an error:
Code:
The total length of the fields in an index exceeds max key size. Index city-state of table sp.Customer (129) (11353)

Now display the country values:
Code:
Country   
────────────────────────────────────────────────────────────────────────────────
USA,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,
USA,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,
USA,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,a,b,

It didn't just iterate repeatedly on the first record; it iterated over the whole result set. In each record the length of the country value was 1959 bytes (sports has large index keys enabled).

So I didn't get the same outcome as rzr, where only the first record was updated. But nonetheless it is a behaviour to be aware of.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Another takeaway from this is that maximum key size is a bit of a red herring in this case. You should have large keys enabled in your database; I don't see any reason not to. But with the original code you'll see the error either way because the code will just keep looping until the keys reach their maximum length, whatever it may be.
 

rzr

Member
Thanks Rob !

So is this a good "expected" behavior or should I log this as a bug with support?
I know we have a workaround as posted in #7.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
So is this a good "expected" behavior or should I log this as a bug with support?

That's a question for PSC, or an experienced developer. I'm not qualified to say. But I'll give you my guess.

An index key entry doesn't get updated when the key value changes. It is deleted and a new entry is created. In this case you haven't specified an index for the compiler to use, and the one it has selected is being "updated" by your code (i.e. getting new entries) with each iteration of your FOR loop. Because each new entry matches your WHERE clause criteria the code never gets to the end of the index bracket. So my guess is this behaviour is by design, not a bug.
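Rob's delete-and-recreate point is the crux. In the same sorted-index model (a Python toy, not ABL; entries are hypothetical), a key "update" looks like this:

```python
import bisect

# An index "update" is really delete-old-entry + insert-new-entry.
index = [("USA", 1), ("USA", 2)]   # (key, rowid), in key order

index.remove(("USA", 1))               # old key entry deleted
bisect.insort(index, ("USA,a,b", 1))   # new entry inserted in key order

# The new entry sorts after the cursor's current position and still
# matches the WHERE bracket, so a forward-walking cursor meets rowid 1
# again instead of reaching the end of the bracket.
```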
 

Cringer

ProgressTalk.com Moderator
Staff member
It's always been the case that updating records within a FOR EACH can have some strange effects if you're updating indexed fields. As far as I know, using Rob's example above, if you change the FOR EACH back to NO-LOCK and then use a named buffer that you FIND EXCLUSIVE-LOCK for the update (thus narrowing the transaction scope), it all works as expected. I've not had the chance to test Rob's particular example, but I've always been taught that sort of approach to avoid unexpected behaviour. The other one to watch out for is BREAK BY: deleting records within the query has some wonderful effects :D
Ultimately I don't think it's a bug. I think it's bad coding ;)
 

rzr

Member
Thanks all for the suggestions and advice. As I mentioned, we have fixed this issue by strong-scoping the buffer.

Rob - thanks for the information on the index entry update/delete. It is much clearer now.
 