2. The Cyclomatic complexity metric
• theoretical foundation
• the theoretical boundaries
• what it means
• how it's used
• unit testing
• integration testing
• Static and dynamic cyclomatic complexity
3. Desired Metric Properties
• Universal – applies to all algorithms
• Mathematically rigorous
• Objective – two people measuring the same code get the same value
• Operational – tells you what to do with it
• Visualizable – can be seen, intuitive
4. A complexity measure
• Definition: The cyclomatic number v(G) of a graph G with e edges, n nodes, and p connected components is v(G) = e - n + p.
• Theorem: In a strongly connected graph, the cyclomatic number equals the maximal number of linearly independent circuits.
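As a sketch of this definition (the helper function and edge list are mine; the graph is reconstructed from the basis paths listed on a later slide, with a return edge f→a added for strong connectedness), v(G) = e - n + p can be computed directly:

```python
def cyclomatic_number(edges):
    """v(G) = e - n + p for a graph given as a list of (u, v) edges."""
    nodes = {u for u, v in edges} | {v for u, v in edges}
    # count connected components with a small union-find
    parent = {x: x for x in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    p = len({find(x) for x in nodes})
    return len(edges) - len(nodes) + p

# the flow graph behind the basis {abef, abebef, abeabef, acf, adcf},
# plus f->a to make it strongly connected (assumed reconstruction)
edges = [("a","b"), ("a","c"), ("a","d"), ("b","e"), ("e","b"),
         ("e","a"), ("e","f"), ("c","f"), ("d","c"), ("f","a")]
print(cyclomatic_number(edges))  # → 5
```

With 10 edges, 6 nodes, and 1 component this gives 10 - 6 + 1 = 5, matching the 5 circuits used on the next slide.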
6. Pick paths for the 5 circuits
• Using Theorem 1 we choose a set of circuits that are paths. The following set B is a basis set of paths.
• B: (abef), (abebef), (abeabef), (acf), (adcf)
• Linear combinations of paths in B will generate any path. For example,
• (a(be)^3f) = 2(abebef) - (abef)
• (a(be)^2abef) = (a(be)^2f) + (abeabef) - (abef)
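These path identities can be checked mechanically. In the sketch below (the encoding is mine), each path is mapped to a vector counting how many times it traverses each edge; "adding" paths is then ordinary vector addition:

```python
from collections import Counter

def edge_counts(path):
    """Count edge traversals in a node-sequence path like 'abebef'."""
    return Counter(zip(path, path[1:]))

b1 = edge_counts("abef")
b2 = edge_counts("abebef")
# 2(abebef) - (abef) should equal a(be)^3 f, i.e. 'abebebef'
combo = Counter()
for e in set(b2) | set(b1):
    combo[e] = 2 * b2[e] - b1[e]
assert combo == edge_counts("abebebef")
```

The combination 2(abebef) - (abef) leaves one a→b, three b→e, two e→b, and one e→f traversal, which is exactly the edge profile of a(be)^3f.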
7. Impossibility of Testing Everything
IV. Testing Software for Reliability - Structured Testing
• It is IMPOSSIBLE to test All Statistical Paths
• So, from a structural coverage view, when should we stop testing?
• Not care? (planes do, missiles do)
• All lines?
• All branches?
• All boolean outcomes (MC/DC)?
• All cyclomatic paths?
Here, All Statistical Paths = 10^18. If we allow 1 nanosecond per test, the time required is:
T = 10^18 / (10^9 x 3600 x 24 x 365) ≈ 31.7 years
[Figure: optimum number of tests]
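The 31.7-year figure quoted on the slide can be reproduced directly:

```python
# Sanity check of the slide's arithmetic: 10^18 paths at
# 10^9 tests per second (1 nanosecond per test).
paths = 10**18
tests_per_second = 10**9
seconds_per_year = 3600 * 24 * 365
years = paths / (tests_per_second * seconds_per_year)
print(round(years, 1))  # → 31.7
```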
8. Methods of computing v(G)
• e - n + 2 (for a single connected flow graph)
• # of decisions + 1
• # of regions in a planar flow graph
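The first two methods can be cross-checked on a small example. This sketch assumes binary decisions and a single-entry, single-exit flow graph (the graph below, for `if A: X else: Y` followed by `while B: Z`, is my own illustration):

```python
from collections import defaultdict

def v_by_edges(edges):
    nodes = {u for u, v in edges} | {v for u, v in edges}
    return len(edges) - len(nodes) + 2

def v_by_decisions(edges):
    out = defaultdict(int)
    for u, _ in edges:
        out[u] += 1
    # each decision node contributes (out-degree - 1) to the count
    return sum(d - 1 for d in out.values()) + 1

# flow graph of: if A: X else: Y; while B: Z
g = [("entry", "A"), ("A", "X"), ("A", "Y"), ("X", "B"), ("Y", "B"),
     ("B", "Z"), ("Z", "B"), ("B", "exit")]
assert v_by_edges(g) == v_by_decisions(g) == 3
```

Both formulas give v = 3: 8 edges - 7 nodes + 2, and 2 decision nodes (A, B) + 1.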
9. Structured Programming Components
• If .. then
• If .. then .. else
• If .. and .. then
• If .. or .. then
• Do .. While
• While .. Do
• Switch
10. Definition of essential complexity ev(G)
• Remove all structured components from G to produce a reduced graph R
• Definition: ev(G) = v(R)
• 1 <= ev(G) <= v(G)
11.
Function A: Complexity = 20, Unstructuredness ev(g) = 1
Function B: Complexity = 18, Unstructuredness ev(g) = 17
Complexity (decisions) and # of lines here are similar, but ...
So B is MUCH harder to maintain!
12. What it means
• In theory --- v, ev
• In practice --- unconscious use of v, ev
– How it works when ignored
– v = 20, 45 tests, but only 12 independent paths tested
• In practice --- conscious use of v, ev
– With just limiting complexity
– With limiting and testing to complexity
• Used just statically: v, ev
• Used dynamically: v, ev
13. Boundaries
• When structured testing is ‘proof of correctness’
– Seldom
– When the path functions are ‘linear’
– E.g., the run time of a redundant path equals its linear combination of basis path runtimes
• When structured testing misses errors
– Often, the assumption is that a path is validly tested
– When data singularities exist along a path
– When an interrupt suspension can cause an error
• When power of observation fails
– High v(G)
– High ev(G)
• Want to test v independent paths
– Or can reduce v by removing dead decisions
• High ev is insanity; ev can go high with one bad change
14. Google book search – McCabe Complexity – 826 books
Encyclopedia of Microcomputers
Introduction to the Team Software Process
Network Analysis, Architecture and Design
Separating Data from Instructions: Investigating a New Programming
15. Cyclomatic Complexity
• Static
– Unit paths v(G)
– Integration paths
– Data complexity
• Dynamic
– Untested paths v(G)-ac
– Untested integration
– Untested data complexity
16. Applications
• Testing
• Essential complexity --- reverse engineering
• Design complexity --- system integration
• Data complexity --- security testing
• Pareto distribution of errors --- reverse engineering
• Code breaker --- reuse
• Security threats
• Data slicing --- reuse
• Code walkthroughs --- validation
• Design validation
• Requirements validation
17. Software testing
• Cyclomatic complexity
• Cost of errors
• Black box, grey box, white box testing
• Unit pretesting, the baseline method
• Unit testing, static and dynamic
• Integration pretesting, testing
• Code attributes by complexity
• Testing life cycle
• Regression testing
18. Costs of errors
• 80% of your project will be re-work
• Coding error cost is X
• Integration error cost is 30X
• Acceptance error cost is 300X
• Operational error cost can be 3000X
• Bottom line: catch errors early and prevent them!!!
19. Levels of Code Coverage - Module
What module coverage level will you attempt? The minimum # of tests would be:
Line Cov. = 2, Branch Cov. = 3, Cyclomatic Path Cov. = 4
20. Structured Testing --- The Baseline Method
Cyclomatic Complexity = v(g) = 10 means that 10 minimum tests will:
• Cover all the code
• Test decision logic multiple ways from the baseline
Run a robust functional path as ‘the baseline’; flip successive decisions and come back to the baseline.
21. Structured Testing --- dynamic
• Actual complexity (ac) is the # of linearly independent paths in the test set
• May have 100 tests with v = 10 and ac = 5
• 1 <= ac <= v
• Many organizations run functional tests first, then add v - ac additional independent paths
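One way to compute ac is as the rank of the matrix whose rows are the edge-traversal counts of the executed test paths. This is a sketch with my own encoding and example (the edge list is the reconstructed graph from slide 6):

```python
from fractions import Fraction

def rank(rows):
    """Matrix rank by exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][c]), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def actual_complexity(paths, edges):
    """paths: node-sequence strings; edges: ordered list of (u, v) pairs."""
    def vec(p):
        traversed = list(zip(p, p[1:]))
        return [traversed.count(e) for e in edges]
    return rank([vec(p) for p in paths])

edges = [("a","b"), ("b","e"), ("e","b"), ("e","a"), ("e","f"),
         ("a","c"), ("c","f"), ("a","d"), ("d","c")]
# three tests, but only two linearly independent paths: ac = 2
print(actual_complexity(["abef", "abebef", "abebebef"], edges))  # → 2
```

The third test path abebebef is 2(abebef) - (abef), so it adds no new independent path; running the full basis from slide 6 instead gives ac = 5 = v.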
22. Complexity Metrics Impacting Reliability - Component
III. Development of Reliable Software - Code Structure
• Design Complexity, S0 = sum of iv(g) over all modules
– measure of the decision structure which controls the invocation of modules within the design; quantifies testing effort for paths within modules relative to calls in the design
• Integration Complexity, S1 = S0 - n + 1 (n = # of modules)
– measure of the integration tests that qualify the design tree; it is the # of paths needed to test all calls to all modules at least once, a quantification of a basis set of integration tests; it measures the minimum integration testing effort of a design; each S1 test validates the integration of several modules and is known as a subtree of the whole design tree
• Sum/Maximum/Average of v(g), ev(g), gdv(g), branches, etc.
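The S1 formula is simple enough to check with made-up module values. The iv(g) numbers below are hypothetical, chosen so the result matches the S1 = 12 - 7 + 1 = 6 worked example on a later slide:

```python
# hypothetical module design complexities iv(g), 7 modules summing to 12
iv = {"A": 3, "B": 2, "C": 2, "D": 2, "E": 1, "F": 1, "G": 1}
n = len(iv)            # number of modules in the design tree
S0 = sum(iv.values())  # design complexity
S1 = S0 - n + 1        # integration complexity: basis set of integration tests
print(S0, S1)  # → 12 6
```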
23.
• Module Design Complexity, iv(g) [integration]
– measure of logical paths containing calls/invocations of immediate subordinate modules, i.e., integration
– calculated by reduction of the graph after removing decisions/nodes with no calls
– value should be higher for modules higher in hierarchies, that is, in management modules
• Paths Containing Global Data, gdv(g)
– global data also impacts reliability, because it creates relationships between modules
• Branches, which is related to v(g)
• Maximum Nesting Level
• Others
[Example module: v(g) = 10, ev(g) = 3, iv(g) = 3]
Complexity Metrics Impacting Reliability - Module
24. Integration Complexity (S1)
How do you assess complexity & test effort for component design? By measuring the integration paths in its modules:
S1 = S0 - n + 1 = 12 - 7 + 1 = 6
25. Levels of Code Coverage - Component
What coverage level will you attempt during Integration Test?
• Module entry? - the 5 red boxes need to be tested
• Module integration calls? - the 9 black lines need to be tested (they include the red boxes)
• Module integration paths? - the 9 black lines plus all paths in the calling modules that contain calls; Structured Testing recommends this level as the minimum
• Other?
(Black lines mean not yet tested)
26. Complexity Metrics Impacting Reliability - Module
• Cyclomatic Complexity & Reliability Risk
– 1 – 10: Simple procedure, little risk
– 11 – 20: More complex, moderate risk
– 21 – 50: Complex, high risk
– > 50: Untestable, VERY HIGH RISK
• Cyclomatic Complexity & Bad Fix Probability
– 1 – 10: 5%
– 20 – 30: 20%
– > 50: 40%
– Approaching 100: 60%
• Essential Complexity (Unstructuredness) & Maintainability (future Reliability) Risk
– 1 – 4: Structured, little risk
– > 4: Unstructured, high risk
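The reliability-risk bands lend themselves to a trivial classifier. The numeric thresholds come from the slide; the category strings are my paraphrase:

```python
def reliability_risk(v):
    """Map cyclomatic complexity v(G) to the slide's risk band."""
    if v <= 10:
        return "simple procedure, little risk"
    if v <= 20:
        return "more complex, moderate risk"
    if v <= 50:
        return "complex, high risk"
    return "untestable, very high risk"

print(reliability_risk(18))  # → more complex, moderate risk
```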
27. Inspection self test --- error prevention --- test before building
Error prevention is the most powerful tool.
28. Regression Testing
• You test once, retest a million times
• Test data, results, and traceability are very valuable
• A database system holds the tests
• A comparator is used to verify results
• Synced with configuration management
• The test set is pruned and added to over time
• 20% of tests will show 80% of errors --- percolated up
• Tests have history --- those that show no errors are pruned out
29. Review teams --- Here's the leverage
• Egoless --- no blame, no shame
• Find errors, don't fix them
• Team works as a whole --- succeeds or fails as one team
• If the # of errors is high, then rework the product and inspect again
• Need a scribe
• Need a checklist of issues to look for
• Product gets certified as ‘Inspected by Team Us’
• Is preceded by the test team producing acceptance tests
• Use the McCabe methodology for code and design reviews