Presented at the Agile in Business conference in Bangalore. This presentation focuses on code metrics that can be used as leading indicators to effectively control and predict application quality.
4. Genrich Altshuller scanned 400,000 patent descriptions and found that only 2% were really new. This means 98% of all new problems can be solved using previous experience/learnings.
6. “…Your hands can’t hit what your eyes can’t see…” – Muhammad Ali
Structural application profile:
• Code structure
• Test case adequacy
• Requirements structure
7. Typical Lifecycle in Collabera
Requirements → Architecture → Design & Coding → Unit Testing → System/Functional Testing
Underpinned by a Requirements & Architecture Baseline, with Continuous Integration & Iteration Releases.
10. Can a ‘defect-prone’ tendency be isolated?
• A study on Eclipse (published in PROMISE 2007)
– Defects are mapped to less than 15% of files
• A study on Firefox
– Security issues are mapped to a low % of the code (predominantly the JavaScript interpreter)
• A publication from CAST
– In a US-based bank, around 30% of defects found in tests are attributed to identifiable poor code structures
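The Pareto check behind these studies can be sketched as follows: given a defect count per file, compute what fraction of files accounts for a given share of all defects. The file names and counts below are hypothetical illustrations, not data from the cited studies.

```python
def defect_concentration(defects_per_file, coverage=1.0):
    """Fraction of files (ranked by defect count, descending) needed
    to cover `coverage` of all recorded defects."""
    counts = sorted(defects_per_file.values(), reverse=True)
    total = sum(counts)
    covered, files_needed = 0, 0
    for c in counts:
        if covered >= coverage * total:
            break
        covered += c
        files_needed += 1
    return files_needed / len(defects_per_file)

# Hypothetical defect log: defects cluster in a few files.
defects = {"Parser.java": 40, "Lexer.java": 25, "Util.java": 3,
           "Main.java": 1, "Config.java": 1, "Log.java": 0,
           "IO.java": 0, "Cache.java": 0, "Net.java": 0, "UI.java": 0}
print(defect_concentration(defects))       # fraction of files holding every defect
print(defect_concentration(defects, 0.9))  # far fewer files cover 90% of them
```

If the second number is much smaller than the first, the codebase shows the same concentration the Eclipse and Firefox studies report.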
11. The problem of plenty…
• Cyclomatic complexity
• % comment lines
• Lines
• Density of comment lines
• Violations
• % Branch Statements
• Java NCSS
• Public undocumented API
• Complexity distribution by method
• Depth of Inheritance
• Npath Complexity
• Duplicated lines
• Duplicated blocks
• Complexity distribution by class
• Calls/method
• Class Fan Out
• Number of children
• Methods/Class
• Afferent couplings
• Class Data Abstraction Coupling
• Class Coupling
• Lack of cohesion of methods
• File dependencies to cut
• Maintainability Index
• Boolean Expression complexity
• Package dependencies to cut
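As one illustration of the first metric in this list, cyclomatic complexity is defined as one plus the number of decision points. Real tools such as PMD or Checkstyle compute it from the parsed AST; the keyword scan below is only a rough sketch of the definition, and the sample Java method is made up.

```python
import re

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of branch points.
    Counts branch keywords and short-circuit operators via regex;
    a real tool parses the AST instead of scanning text."""
    branches = re.findall(r"\b(?:if|for|while|case|catch)\b|&&|\|\|", source)
    return 1 + len(branches)

java_method = """
int classify(int x) {
    if (x < 0) return -1;
    for (int i = 0; i < x; i++) {
        if (i % 2 == 0 && i > 10) count++;
    }
    return count;
}
"""
print(cyclomatic_complexity(java_method))  # 5: two if, one for, one &&, plus 1
```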
12. Influence of SIT with engineering metrics – Correlating complexity attributes (II)
[Charts correlating SIT defects with: % Branches, Block Depth, Cyclomatic Complexity, Max Cyclomatic Complexity]
Bridging the eagle’s eye and worm’s view
13. Composite parameter analysis
Study from a European university: Number of Classes vs SIT defects
[Pie chart: 92% / 8%, with the 8% broken down into 4%, 2%, 1%, 1%]
Key parameters measured:
• Cyclomatic complexity/LOC
• No. of methods/class
• No. of calls/method
• LOC/method
8% of classes contribute 100% of the SIT defects.
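A composite-parameter analysis of this kind can be sketched as a multi-threshold filter over the four parameters named above. The threshold values and per-class metrics below are hypothetical, since the slide does not give the actual cut-offs used in the study.

```python
# Illustrative thresholds -- the slide does not state the real cut-offs.
THRESHOLDS = {"cc_per_loc": 0.3, "methods_per_class": 20,
              "calls_per_method": 10, "loc_per_method": 50}

def risky_classes(metrics_by_class, min_breaches=2):
    """Flag classes breaching at least `min_breaches` thresholds --
    the small subset expected to attract most SIT defects."""
    flagged = []
    for name, metrics in metrics_by_class.items():
        breaches = sum(metrics[k] > t for k, t in THRESHOLDS.items())
        if breaches >= min_breaches:
            flagged.append(name)
    return flagged

# Hypothetical class metrics, one data point per class.
sample = {
    "OrderService": {"cc_per_loc": 0.5, "methods_per_class": 35,
                     "calls_per_method": 4, "loc_per_method": 80},
    "DateUtils":    {"cc_per_loc": 0.1, "methods_per_class": 8,
                     "calls_per_method": 3, "loc_per_method": 12},
}
print(risky_classes(sample))  # ['OrderService']
```

Requiring several simultaneous breaches, rather than one, is what keeps the flagged set down to a small fraction of classes.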
14. Toxicity Analysis
[Chart: toxicity scores per class, correlating to AT & SIT defects]
A detailed study of the data shows that most of the SIT & AT defects occur in the top four Java classes in the sample data above, which have highly toxic code with high method length.
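A toxicity score of the kind charted here is commonly computed by summing, for each metric that exceeds its threshold, the ratio of value to threshold, so a method three times over the length limit contributes 3.0. The metric set and thresholds below are illustrative, not the ones used in this study.

```python
# Illustrative thresholds for a toxicity score; one score per class.
THRESHOLDS = {"method_length": 30, "cyclomatic_complexity": 10,
              "class_fan_out": 30}

def toxicity(metrics):
    """Sum of value/threshold over every metric above its threshold;
    metrics at or below threshold contribute nothing."""
    return sum(value / THRESHOLDS[name]
               for name, value in metrics.items()
               if value > THRESHOLDS[name])

score = toxicity({"method_length": 90,          # 3x over -> +3.0
                  "cyclomatic_complexity": 25,  # 2.5x over -> +2.5
                  "class_fan_out": 12})         # under threshold -> +0
print(score)  # 5.5
```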
19. Shifting to the better – Org level
[Bar chart comparing Dec-10 vs Dec-11 (y-axis 0–18) across: Max Code Complexity, Methods per Class, Statements per Method, Branches %, Max Block Depth]
23. Qualitative Benefits
• Objective measures; the language of the developer
• Improved ability to isolate and deal with defect-prone components
• Improved risk management and transparency
24. Thank you
Mosesraj R (mosesrajr@collabera.com)
Editor's Notes
Timely, cost effective, objective
Can there be a better way? For example, can our requirements show their indwelling characteristics? Can we figure out a way to bring them out? Can I have a structural view that brings out those characteristics, e.g. in use cases, in code (knowing complexity etc.)? Architecture is about decision making; can there be an indicator for our decision making? This is where we are heading. JIDOKA in software, as we see it, is controlling the indwelling factors; we obviously can't control the external ones. Somewhere the industry is unhealthily skewed towards customer satisfaction without controlling the indwelling factors. Our engineers, BAs, and architects are confident in their own way, but not able to spread that maturity around. Customer appreciation creates a buzz; a mute customer means no recognition! The extent of measurement indicates our maturity.
Put a note for abbreviations. The challenge of dealing with SIT, and the importance of the way we chose SIT. Industry benchmark: defects arriving in the system test phase are a clear sign of a maturity issue. The inability to find defects early comes from the way we run the engineering process. First we thought coding held the clue; then we figured requirements held the clue. Hence the need to structure the engineering process properly.
92% of classes were within those values; it was within the other 8% that we found this. Study from a university; we tried the similar exercise internally. Each class was one data point, mapped to system testing defects.
Which were those 8%, and what characterised the ones that contributed to SIT defects?
http://www.eclipsezone.com/articles/pmd/
http://pmd.sourceforge.net/pmd-report.html
http://checkstyle.sourceforge.net/config_metrics.html
http://docs.codehaus.org/display/SONAR/Java+Metric+Definitions
http://www.cs.umd.edu/~jfoster/papers/issre04.pdf
http://findbugs.cs.umd.edu/papers/MoreNullPointerBugs07.pdf
http://findbugs.cs.umd.edu/papers/FindBugsExperiences07.pdf
<< Knock off PMD and FindBugs >>
<< check hampurapo >>
<< change: baselined set of metrics list, use of techniques previously, next box for tools, deployment – with Eclipse and continuous integration with build tools like Hudson etc. >>