Record Locking, Is This High?

FunkyColdRibena

New Member
[Screenshot attachment: upload_2016-1-22_14-14-13.png]
So this is on a database with virtually nothing active. The application developers tell me this is acceptable, but surely 37k locks (even if they are share locks) is just wrong!
 

FunkyColdRibena

New Member
Any Share Lock is just wrong... ;)
Hey, thanks for replying. I suspect it's because it was coded for an older version of Progress where the default was NO-LOCK if no locking parameters were specified. Do you think this is a reasonable assumption? I have added a screenshot from promon R&D; please note the db has been up for only 73 hours but has over 9.5 billion share locks.
[Screenshot attachment: upload_2016-1-22_15-27-42.png]
 

TheMadDBA

Active Member
I can't think of a single version of Progress that ever had NO-LOCK as the default... going back to the 1980s.

There is a startup parameter to set the default to NO-LOCK... but use great care in turning it on for compiles because it can easily break code.

Those numbers are a crime against humanity.
 

FunkyColdRibena

New Member
TheMadDBA said:

I can't think of a single version of Progress that ever had NO-LOCK as the default... going back to the 1980s.

There is a startup parameter to set the default to NO-LOCK... but use great care in turning it on for compiles because it can easily break code.

Those numbers are a crime against humanity.

I did feel they were a tad high!
With regard to NO-LOCK vs SHARE-LOCK, I thought that if you didn't put the lock status on each query it defaults to SHARE-LOCK, and that if you compile (which I don't have the ability to do, as this is another company's property) with the -n parameter it changes only the queries where no type of locking has been specified. Our production machine had been up for over a year and that number was ridiculous. In my world 9.5 billion looks tame!
 

TheMadDBA

Active Member
The default has always been SHARE-LOCK... there is a -NL option you can use that changes the compile-time default to NO-LOCK. If a method is specified it always uses that method regardless of the startup option.

-n is the number of users.
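For illustration, a minimal ABL sketch of what TheMadDBA describes (the "customer" table and custnum field are hypothetical):

Code:
/* With no lock phrase, this FIND compiles with the default SHARE-LOCK
   (unless the r-code was compiled in a session started with -NL): */
FIND FIRST customer WHERE customer.custnum = 1 NO-ERROR.

/* An explicit lock phrase always wins, with or without -NL,
   so spelling it out is the safe habit: */
FIND FIRST customer WHERE customer.custnum = 1 NO-LOCK NO-ERROR.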
 

Cringer

ProgressTalk.com Moderator
Staff member
Whilst -NL exists it is no substitute for fixing the queries properly. Whilst I realise that this is an onerous task, and not one to be undertaken lightly, you will find it to be of benefit in the future I am sure.
Certainly no new query should be written in the code base that doesn't specify the lock it requires specifically.
 

TomBascom

Curmudgeon
I think this thread could be summed up with my favorite tuning advice "fix your crappy code!"

Don't go blaming it on older versions of Progress. There has never been a good excuse for writing crappy code like this. Progress has always had all of the pieces needed to write good clean locking logic. The problem has been that all too many applications, including some very famous ones, were written by people who never gave a thought to multi-user usage and who never tested such things. And then their horrible designs were copied by people with even more myopic views.
 

tamhas

ProgressTalk.com Sponsor
One of the specific practices I have seen commonly, and which can drive the lock count sky high, is to write a query which doesn't bracket very well and thus has to read a whole lot of records before it finds one to deliver. It at least used to be the case that if that query was written with a lock, all of those read-and-skipped records would be locked until it found one it liked, and then it released those and locked the one it wanted. The solution is to read NO-LOCK and re-read in a different buffer with the lock.
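A hedged sketch of the pattern tamhas describes, using a hypothetical "order" table whose OrderStatus field is assumed not to be usable for index bracketing:

Code:
/* Poorly bracketed query: the AVM reads (and, with the implicit
   SHARE-LOCK default, lock-requests) many records it then discards: */
FOR EACH order WHERE order.OrderStatus = "Backordered":
    DISPLAY order.Ordernum.
END.

/* Reading NO-LOCK means the skipped records are never locked at all;
   lock only the record you actually intend to change (see the
   re-read pattern in the next post): */
FOR EACH order NO-LOCK WHERE order.OrderStatus = "Backordered":
    DISPLAY order.Ordernum.
END.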
 

Cringer

ProgressTalk.com Moderator
Staff member
Indeed, Thomas. In fact, reading NO-LOCK and then finding the record with a named buffer with strong transaction scoping (DO FOR buffername TRANSACTION: ... END.) is by far the safest way of ensuring your transaction scoping is what you expect it to be, and it avoids making a rod for your own back.
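For example, a minimal sketch of that NO-LOCK-then-re-find pattern with a strongly scoped named buffer (again using a hypothetical "order" table):

Code:
DEFINE BUFFER bOrder FOR order.

/* Find the record of interest without taking any lock: */
FIND FIRST order WHERE order.OrderStatus = "Backordered" NO-LOCK NO-ERROR.

IF AVAILABLE order THEN
DO FOR bOrder TRANSACTION:
    /* Re-read the same record in the named buffer; the EXCLUSIVE-LOCK
       and the transaction are both scoped to this block. NO-WAIT makes
       the FIND fail immediately if someone else holds the lock: */
    FIND bOrder WHERE ROWID(bOrder) = ROWID(order)
        EXCLUSIVE-LOCK NO-WAIT NO-ERROR.
    IF AVAILABLE bOrder THEN
        bOrder.OrderStatus = "Shipped".
END.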
 

FunkyColdRibena

New Member
TomBascom said:

I think this thread could be summed up with my favorite tuning advice "fix your crappy code!"

Don't go blaming it on older versions of Progress. There has never been a good excuse for writing crappy code like this. Progress has always had all of the pieces needed to write good clean locking logic. The problem has been that all too many applications, including some very famous ones, were written by people who never gave a thought to multi-user usage and who never tested such things. And then their horrible designs were copied by people with even more myopic views.
Hi Tom, I wasn't blaming older versions of Progress, just trying to rationalise where all these locks could be coming from. I agree it's totally code related, but our application developer is unmoved. I don't develop databases; I'm just a general IT guy trying to fix customer locking issues.
 

FunkyColdRibena

New Member
Hi everyone, thanks for your responses. Would there be any way of tracking down what query is causing this? I have tried turning off all the app servers and batch processes / remote users, but it's still the same at 38K/sec.
 

Cringer

ProgressTalk.com Moderator
Staff member
Can't see from a quick scan: what version of Progress are you currently running?
 

FunkyColdRibena

New Member
Cringer said:

Can't see from a quick scan: what version of Progress are you currently running?

10.2B.

I have examined our test database and shut everything off one bit at a time until the culprit was revealed, the details of which I have now forwarded to the software developers, so hopefully the query will now be rewritten!
Thanks for your suggestions.
 

Cringer

ProgressTalk.com Moderator
Staff member
With later versions of Progress (11+ IIRC) you can use client statement caching to see exactly what a particular user is doing once you've identified one that is misbehaving on the server.
You do have proGetStack available (see the Progress KB article "What is proGetStack?").
If you have -tablerangesize and -indexrangesize set appropriately then you can use a combination of _TableStat and _UserTableStat to work out which users are hitting (and locking) certain resources a lot. Combine that with the above and you should be able to trap the runaway processes more easily, without having to kill processes off to isolate them.
All that is made simple for you in proTop though as suggested by the good Doctor.
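As a rough illustration of the VST approach (a sketch only: it assumes a single connected database, that -tablerangesize covers the tables of interest, and that the 10.2B _UserTableStat / _File field names are as written below):

Code:
/* List the busiest user/table combinations since startup. Read counts
   are a reasonable proxy for lock requests when reads default to
   SHARE-LOCK, as in this thread. The 100000 threshold is arbitrary. */
FOR EACH _UserTableStat NO-LOCK
    WHERE _UserTableStat._UserTableStat-Read > 100000:

    FIND _File WHERE _File._File-Number = _UserTableStat._UserTableStat-Num
        NO-LOCK NO-ERROR.

    DISPLAY
        _UserTableStat._UserTableStat-Conn LABEL "Usr#"
        (IF AVAILABLE _File THEN _File._File-Name ELSE "?")
            FORMAT "x(24)" LABEL "Table"
        _UserTableStat._UserTableStat-Read LABEL "Reads"
        _UserTableStat._UserTableStat-Update LABEL "Updates"
        WITH WIDTH 90.
END.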
 