source code control system

j4n

Member
g'day

is there anybody who uses svn or git as an sccs?
if you do - how do you do impact analysis (changing an include has to trigger the compilation of all the files that use that include)?
i'd be delighted if anyone would share some useful hooks, or at least give me some ideas of how things could be done.
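For reference, the kind of hook being asked about might start out like this Python sketch of an SVN pre-commit hook. This is an assumption-laden skeleton, not a working solution: it only relies on `svnlook changed` (a real Subversion command), and the actual impact-analysis/compile step is left as a stub.

```python
#!/usr/bin/env python3
"""Sketch of an SVN pre-commit hook (hooks/pre-commit in the repository).
Hypothetical shape only: the impact/compile step below is a stub."""
import subprocess
import sys

def changed_paths(svnlook_output):
    # `svnlook changed` prints lines like "U   trunk/src/common.i"
    return [line.split(None, 1)[1] for line in svnlook_output.splitlines()
            if line.strip()]

def includes_touched(paths):
    # commits that touch an include file need an impact check
    return [p for p in paths if p.endswith(".i")]

if __name__ == "__main__" and len(sys.argv) >= 3:
    repo, txn = sys.argv[1], sys.argv[2]
    out = subprocess.run(["svnlook", "changed", "-t", txn, repo],
                         capture_output=True, text=True, check=True).stdout
    touched = includes_touched(changed_paths(out))
    if touched:
        # here: look up dependents in an XREF database, compile them, and
        # sys.exit(1) with a message on stderr to reject the commit
        sys.stderr.write("includes changed: %s\n" % ", ".join(touched))
```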

cheers
 

sdjensen

Member
We use CVS as source code control.

The propath is mapped to "n" for your local files and "m" for global files.

ex: propath = "n:\base,n:\includes,m:\base,m:\include"
this means that if you are working on a file it exists in both n:\base and m:\base, but because n:\base is first in line Progress will use your local copy.

As a result of this, we do not run compiled files, but run them 'on-the-fly'.
 

j4n

Member
well - the tools on oehive.org do it the other way around. as far as i can see they integrate svn functions into the progress editor(s). that's not my problem.

what i need is an application that generates a list of files that have to be compiled when i update an include they all use. i'm not even sure if that's the right way to do it. how do you guys pack up the patches that you send to the customer? or do you always send a lib with every file of your app?
 

joey.jeremiah

ProgressTalk Moderator
Staff member
you mean code impact analysis.


for example:

if you change an include file or table definition etc.

which files will be impacted and may have to be changed and compiled.


you need a code parser that can get that information from your sources.

and an application that stores and uses it.


OpenEdge Architect does that

or you can probably build something that uses XREF.
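As a rough sketch of the XREF route: the Python below assumes the usual whitespace-separated COMPILE ... XREF line layout (procedure, file, line number, reference type, object); the file names in it are purely illustrative.

```python
from collections import defaultdict

def parse_xref(xref_lines):
    """Build a reverse map: include file -> set of sources that use it.

    Assumes each XREF line looks like:
      <procedure> <file> <line> <ref-type> <object>
    where INCLUDE references carry the include name as the object."""
    users = defaultdict(set)
    for line in xref_lines:
        parts = line.split(None, 4)
        if len(parts) == 5 and parts[3] == "INCLUDE":
            include = parts[4].strip().strip('"').strip()
            users[include].add(parts[0])
    return users

def impact_list(users, changed_include):
    """All sources that must be recompiled when the include changes."""
    return sorted(users.get(changed_include, set()))
```

Feeding it the XREF output of a full compile gives you the impact list for any include in one lookup.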
 

j4n

Member
i've used rtb in the past. imho the philosophy behind it is a bit too old-fashioned. i don't want to lock files before i change them.

and to your first reply: i was looking for someone who has already done that (impact analysis, automatically compiling changed programs and so on...). i still can't believe that i am the first developer on the planet with this problem...
 

sdjensen

Member
Compile all files and check if the size has changed. If it has changed, send the file to the client. Or you can compare hash values.

I don't think you will find a tool that supports include files within include files within include files, etc...
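A hash-based comparison could be sketched like this (illustrative only; be aware that Progress r-code can embed compile metadata, so hashing raw bytes may flag more files than have truly changed):

```python
import hashlib
import pathlib

def hash_tree(root):
    """Map each .r file (path relative to root) to a SHA-256 of its bytes."""
    root = pathlib.Path(root)
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*.r")}

def changed_files(old_manifest, new_manifest):
    """Files that are new, or whose r-code differs, since the last build --
    the candidates for the next patch shipped to the client."""
    return sorted(f for f, h in new_manifest.items() if old_manifest.get(f) != h)
```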
 

TomBascom

Curmudgeon
... i was looking for someone who has already done that (impact analysis, automatically compiling changed programs and so on...). i still can't believe that i am the first developer on the planet with this problem...

I did that once upon a time.

It seemed like a good idea but the effort simply wasn't worth it. It's much simpler to just recompile everything. (Yes, even when there are tens of thousands of programs.)

But if you still want advanced analysis tools for Progress code you should check out http://www.joanju.com/.
 

j4n

Member
that's not going to work for the current project. we have several thousand source files and a full compile takes some hours. the developer needs an 'immediate' response when he checks in a file.
the business process would be:


  • developer checks in new sources
  • system checks dependencies and creates an impact list (all files that need to be recompiled)
  • system compiles all files from this list
  • if there is a compile error system sends message to developer and rejects the commit; if everything is ok system copies/moves/whatever compiled files to certain patch directory
that's what i want, in a nutshell.
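The steps above could be sketched roughly as follows. The compile step is injected as a callable because in practice it would run a COMPILE in a Progress batch session; every name here is a placeholder, not an existing API.

```python
import pathlib

def process_commit(changed, impacted_by, compile_fn, patch_dir):
    """Sketch of the check-in pipeline described above.

    changed     -- files in the commit
    impacted_by -- include -> [sources] map (e.g. from an XREF database)
    compile_fn  -- callable(source) -> error message or None; in real life
                   this would run a COMPILE in a Progress batch session
    patch_dir   -- where compiled output is promoted on success
    Returns (ok, messages)."""
    # impact list: compilable files in the commit plus everything that
    # depends on any changed include
    worklist = {f for f in changed if f.endswith((".p", ".w"))}
    for f in changed:
        worklist.update(impacted_by.get(f, []))
    errors = [err for src in sorted(worklist)
              if (err := compile_fn(src)) is not None]
    if errors:
        return False, errors  # reject the commit, notify the developer
    pathlib.Path(patch_dir).mkdir(parents=True, exist_ok=True)
    return True, sorted(worklist)
```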
 

TomBascom

Curmudgeon
If a full compile of just a few thousand files takes several hours, you're doing something wrong ;)

Nonetheless, doing what you want to do is the problem at hand -- and to do that I would start by looking at the Joanju tools.
 

tamhas

ProgressTalk.com Sponsor
Or http://www.oehive.org/taxonomy/term/131 ... but there isn't really an impact analysis tool there yet. I and 10,000 other people have built XREF databases, but none of them has ever been published and widely adopted, in part because there are often glitchy little fiddles specific to the code base, e.g., plucking off the first line in a Varnet application to get the program description. What I'd like to do is something along these lines: http://www.oehive.org/node/1112
 

Casper

ProgressTalk.com Moderator
Staff member
  • developer checks in new sources
  • system checks dependencies and creates an impact list (all files that need to be recompiled)
  • system compiles all file from this list
  • if there is a compile error system sends message to developer and rejects the commit; if everything is ok system copies/moves/whatever compiled files to certain patch directory

Before you do a check-in you work locally with the sources, so you test there.
Set up the development system in such a way that you XREF all files and store the information in some tables. Query those tables to find the sources that include the changed include file.
Use something like Ant to automatically compile the specific sources locally. If that succeeds, then commit the source.
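As a sketch of the "store the XREF information in tables and query it" idea: SQLite stands in here for whatever database actually holds the xref tables, and the table and column names are made up.

```python
import sqlite3

def build_xref_db(pairs):
    """Load (source, include) pairs -- e.g. harvested from XREF output --
    into an in-memory table standing in for the real xref tables."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE xref (source TEXT, include TEXT)")
    db.executemany("INSERT INTO xref VALUES (?, ?)", pairs)
    return db

def sources_using(db, include):
    """The sources that include the changed include file."""
    rows = db.execute(
        "SELECT DISTINCT source FROM xref WHERE include = ? ORDER BY source",
        (include,))
    return [r[0] for r in rows]
```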

The last point you make makes it more difficult, because if an error occurs and you don't commit, you have to replace the freshly compiled r-code files with a previous version. And I don't think you have the r-code files in the repository (well, at least we haven't).

We do a nightly build of the development system. Nobody is allowed to commit to the repository if the compilation fails. Any breach of this rule is found out the next day. If you watch this carefully you will notice that the mechanism works, since it is easy to find the one who committed the wrong source. It is clear that a programmer who persists in committing bad source code to the repository has a problem.

Casper.
 

j4n

Member
Before you do a check-in you work locally with the sources, so you test there.

You're right. I just thought it'd be easier for me (as a programmer) to let a system do all that for me. I'm starting to see that this philosophy is what brought the company to where we are at the moment (SCCS-wise).


The last point you make makes it more difficult, because if an error occurs and you don't commit, you have to replace the freshly compiled r-code files with a previous version. And I don't think you have the r-code files in the repository (well, at least we haven't).

Simply put, our file system looks like this:
Application
|_base
|_patch 1
|_patch 2
|_patch n

base contains a lib with everything
each patch directory contains the r-files with changes

I want that every night (or at some other interval) the latest sources from Subversion get compiled and saved into the corresponding patch or base directory so other departments (support, test) can use them.
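That nightly promotion step might look something like this sketch (the directory names follow the layout above; everything else is a placeholder):

```python
import pathlib
import shutil

def promote(build_dir, app_root, patch_name):
    """Copy freshly built r-code into Application/<patch_name>/ so that
    support and test pick up only the changed files."""
    dest = pathlib.Path(app_root) / patch_name
    copied = []
    for r in sorted(pathlib.Path(build_dir).rglob("*.r")):
        target = dest / r.relative_to(build_dir)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(r, target)
        copied.append(str(r.relative_to(build_dir)))
    return copied
```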

Maybe my head is far too deep in our current structure. Maybe all this doesn't make any sense at all. I'd appreciate any thoughts on this. Maybe you (Casper) could give me a deeper look inside your development environment.

cheers
 