Juniper Networks Ignite! Testing Conference, Sunnyvale, California, November 9, 2011.
An overview of model-based testing, two case studies, and a thumbnail introduction to commercial and free MBT tools.
Model-Based Testing: Why, What, How
1. Model-Based Testing:
Why, What, How
Bob Binder
System Verification Associates
Juniper Systems Testing Conference
November 9, 2011
2. Overview
• What is Model-Based Testing?
• Testing Economics
• Case Studies
– Automated Derivatives Trading
– Microsoft Protocol Interoperability
• Product Thumbnails
• Real Testers of …
• Q&A
3. Why?
• For Juniper:
– Reduce cost of testing
– Reduce time to market
– Reduce cost of quality
– Increase competitive advantage
• For you:
– Focus on System Under Test (SUT), not test hassles
– Engineering discipline with rigorous foundation
– Enhanced effectiveness and prestige
– Future of testing
5. “All Testing is Model-Based”
• Patterns for test design
– Methods
– Classes
– Package and System Integration
– Regression
– Test Automation
– Oracles
• 35 patterns, each a test meta-model
6. What is a Test Model?
[Figure: the SUT design model (left) beside the test model (right) produced by
applying the Mode Machine test design pattern. Left: UML class diagrams for
TwoPlayerGame (+TwoPlayerGame(), +p1_Start(), +p1_WinsVolley(), -p1_AddPoint(),
+p1_IsWinner(), +p1_IsServer(), +p1_Points(), the p2_* counterparts, and +~())
and its subclass ThreePlayerGame, which adds the p3_* members. Right: flattened
state machines. From α, pN_Start()/simulateVolley() enters Game Started;
pN_WinsVolley() passes the serve among the Player N Served states, looping on
guard [this.pN_Score() < 20]/this.pNAddPoint(), simulateVolley() and reaching
Player N Won on [this.pN_Score() == 20]/this.pNAddPoint(); pN_IsWinner()/return
TRUE; and ~() leads to ω.]
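To make the Mode Machine pattern concrete, here is a minimal Java sketch (mine, not from the deck) of the TwoPlayerGame test model: explicit states plus the guarded transitions from the diagram. simulateVolley() is left abstract because the model only names it.

    // Illustrative sketch of the TwoPlayerGame state machine above.
    // States and guards follow the diagram; everything else is assumed.
    public class TwoPlayerGame {
        public enum State { GAME_STARTED, P1_SERVED, P2_SERVED, P1_WON, P2_WON }
        protected State state = State.GAME_STARTED;
        protected int p1Score, p2Score;

        public void p1_Start() { state = State.P1_SERVED; simulateVolley(); }
        public void p2_Start() { state = State.P2_SERVED; simulateVolley(); }

        // [this.p1_Score() < 20]: add a point, keep playing;
        // [this.p1_Score() == 20]: add the winning point, game over.
        public void p1_WinsVolley() {
            if (p1Score < 20)       { p1Score++; state = State.P1_SERVED; simulateVolley(); }
            else if (p1Score == 20) { p1Score++; state = State.P1_WON; }
        }
        public void p2_WinsVolley() {
            if (p2Score < 20)       { p2Score++; state = State.P2_SERVED; simulateVolley(); }
            else if (p2Score == 20) { p2Score++; state = State.P2_WON; }
        }
        public boolean p1_IsWinner() { return state == State.P1_WON; }
        public boolean p2_IsWinner() { return state == State.P2_WON; }
        protected void simulateVolley() { /* decides which pN_WinsVolley() fires next */ }
    }

ThreePlayerGame would extend this class with the p3_* members, exactly as the class diagram shows.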
7. Model-based Test Suite
• N+ Strategy
– Start at α
– Follow transition path
– Stop if ω or visited
– Three loop iterations
– Assumes state observer
– Try all sneak paths
Numbered events in the N+ test suite:
1 ThreePlayerGame( )
2 p1_Start( )
3 p2_Start( )
4 p3_Start( )
5 p1_WinsVolley( )
6 p1_WinsVolley( ) [this.p1_Score( ) < 20]
7 p1_WinsVolley( ) [this.p1_Score( ) == 20]
8 p2_WinsVolley( )
9 p2_WinsVolley( ) [this.p2_Score( ) < 20]
10 p2_WinsVolley( ) [this.p2_Score( ) == 20]
11 p3_WinsVolley( )
12 p3_WinsVolley( ) [this.p3_Score( ) < 20]
13 p3_WinsVolley( ) [this.p3_Score( ) == 20]
14 p1_IsWinner( )
15 p2_IsWinner( )
16 p3_IsWinner( )
17 ~( )
[Figure: the N+ transition tree for ThreePlayerGame. Each numbered event labels
an edge; paths run from α through Game Started and the Player N Served states
to the Player N Won states and ω, and starred nodes mark already-visited states
where a path stops.]
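Purely as an illustration of the traversal described above (sneak paths and the three loop iterations omitted), here is a compact Java sketch; Transition and roundTripPaths are my names, not any tool's API. It walks the model's transition graph breadth-first from alpha, and a path becomes a test case when it reaches omega or revisits an already-expanded state.

    import java.util.*;

    // Illustrative N+ round-trip path generation over an explicit FSM graph.
    public class NPlusGenerator {
        public record Transition(String from, String event, String to) {}

        public static List<List<Transition>> roundTripPaths(
                String alpha, String omega, List<Transition> graph) {
            List<List<Transition>> tests = new ArrayList<>();
            Set<String> expanded = new HashSet<>();
            Deque<List<Transition>> queue = new ArrayDeque<>();
            for (Transition t : graph)
                if (t.from().equals(alpha)) queue.add(new ArrayList<>(List.of(t)));
            expanded.add(alpha);
            while (!queue.isEmpty()) {
                List<Transition> path = queue.poll();
                String tip = path.get(path.size() - 1).to();
                // Stop this path at omega or at a state already in the tree.
                if (tip.equals(omega) || !expanded.add(tip)) {
                    tests.add(path);
                    continue;
                }
                boolean extended = false;
                for (Transition t : graph)
                    if (t.from().equals(tip)) {
                        List<Transition> next = new ArrayList<>(path);
                        next.add(t);
                        queue.add(next);
                        extended = true;
                    }
                if (!extended) tests.add(path); // dead end: still a test case
            }
            return tests;
        }
    }

Each returned path is then replayed against the SUT, with the assumed state observer checking the reached state after every step.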
8. Automated Model-based Testing
• Software that represents an SUT so that test inputs and expected results can be computed
– Useful abstraction of SUT aspects
– Algorithmic test input generation
– Algorithmic expected result generation
– Many possible data structures and algorithms
• SUT interface for control and observation
– Abstraction critical
– Generated and/or hand-coded
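As a sketch of what an interface for control and observation can look like, here is a hypothetical Java adapter contract (the names are mine): generated tests call only these methods, so transport and GUI details stay out of the model.

    // Hypothetical adapter contract between generated tests and the SUT;
    // the model stays abstract while the adapter handles transport details.
    public interface SutAdapter {
        void reset();                                // drive SUT to the initial (alpha) state
        void control(String event, Object... args);  // apply one abstract test input
        Object observe(String probe);                // read abstract state/output for the oracle
    }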
10. Typical Test Configuration
[Diagram: on the test suite host, Test Suite Control drives an Agent, which
calls an Adapter; the Adapter talks over a Transport on the host OS to a
matching Transport and Adapter on the SUT OS, which drive the System Under
Test.]
11. Typical MBT Environment
[Diagram: the test configuration from the previous slide, embedded in the wider
environment. The MBT Tool draws on a Requirements DB and a Design DB and feeds
Test Suite Control; a Bug DB, the Code Stack, a Test Manager, the Development
Environment, and Configuration Management surround the test suite host and the
SUT host.]
13. Show Me the Money
How much of this … for one of these?
14. Testing by Poking Around
Manual “Exploratory” Testing → System Under Test
+ No tooling costs
+ No testware costs
+ Quick start
+ Opportunistic
+ Qualitative feedback
– Subjective, wide variation
– Low coverage
– Not repeatable
– Can’t scale
– Inconsistent
15. Manual Testing
Manual Test Design → Manual Test Setup → Manual Test Input → System Under Test → Manual Test Results Evaluation
+ Flexible, no SUT coupling
+ Systematic coverage
+ No tooling costs
+ No testware cost
+ Usage validation
– 1 test per hour
– Usually not repeatable or repeated
– Not scalable
– Inconsistent
– Tends to “sunny day” tests
16. Hand-coded Test Driver
Manual Test Design → Test Driver Programming → System Under Test
+ 10+ tests per hour
+ Repeatable
+ Predictable
+ Consistent
+ Continuous Integration, TDD
– Tooling costs
– Testware costs
– Brittle, high maintenance cost
– Short half-life
– Technology focus
17. Model-based Testing
Modeling, Automated Generation → Automated Setup and Execution → System Under Test
+ 1000+ tests per hour
+ Maintain model (not testware)
+ Intellectual control
+ Explore complex space
+ Consistent coverage
– Tooling costs
– Training costs
– Paradigm shift
– Still need manual, coded tests
20. Real Time Derivatives Trading
• “Screen-based trading” over private network
– 3 million transactions per hour
– 15 billion dollars per day
• Six development increments
– 3 years
– 3 to 5 months per iteration
– Testing cycle shadows dev increments
• QA staff test productivity
– One test per hour
21. System Under Test
• Unified process
• About 90 use-cases, 650 KLOC Java
• CORBA/IDL distributed object model
• HA Sun server farm
• Multi-host Oracle DBMS
• Many interfaces
– GUI (trading floor)
– Many high speed program trading users
– Many legacy inputs and outputs
22. MBT: Challenges and Solutions
• One-time sample not effective, but fresh test suites too expensive → Simulator generates fresh, accurate sample on demand
• Too expensive to develop expected results → Oracle generates expected results on demand
• Too many test cases to evaluate → Comparator automates checking
• Profile/requirements change → Incremental changes to rule base
• SUT interfaces change → Common agent interface
23. Test Input Generation
• Simulation of users
– Use case profile
– 50 KLOC Prolog
• Load Profile
– Time domain variation
– Orthogonal to event profile
• Each generated event assigned a "port" and submit time
• 1,000 to 750,000 unique tests for 4 hour session
[Charts: event counts per use-case class on a log scale (1 to 10,000,000 across
12 classes), and a load profile of Events Per Second (0 to 3,500) over Time
(0 to 25,000 seconds).]
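The production generator was about 50 KLOC of Prolog; purely to show the shape of profile-driven input generation, here is a toy Java sketch. The use cases, weights, port count, and the 800-events-per-second rate are all hypothetical placeholders.

    import java.util.*;

    public class ProfileDrivenGenerator {
        // One abstract test input: a use-case event bound to a port and time.
        record TestEvent(String useCase, int port, double submitTimeSec) {}

        public static List<TestEvent> generate(int count, long seed) {
            // Hypothetical use-case profile: relative event frequencies.
            String[] useCases = { "enterOrder", "cancelOrder", "requestQuote" };
            double[] weights  = { 0.70, 0.20, 0.10 };
            Random rnd = new Random(seed);
            List<TestEvent> events = new ArrayList<>();
            double clock = 0.0;
            for (int i = 0; i < count; i++) {
                // Draw a use case according to the profile.
                double u = rnd.nextDouble(), cum = 0.0;
                String pick = useCases[useCases.length - 1];
                for (int j = 0; j < weights.length; j++) {
                    cum += weights[j];
                    if (u < cum) { pick = useCases[j]; break; }
                }
                // Poisson arrivals at a (hypothetical) 800 events/second rate.
                clock += -Math.log(1.0 - rnd.nextDouble()) / 800.0;
                events.add(new TestEvent(pick, rnd.nextInt(64), clock));
            }
            return events;
        }
    }

Varying the arrival rate over the session clock gives the time-domain load profile, independently of the per-event use-case weights.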
24. Automated Evaluation
• Oracle
– Processes all test inputs
– About 500 unique rules
– Generates end of session “book”
• Comparator
– Compares SUT “book” to oracle “book”
• Verification
– “Splainer” rule backtracking
– Rule/Run coverage analyzer
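A minimal sketch of the comparator idea, assuming each "book" can be flattened to a map from instrument to position (my representation, not the project's): report every entry where the SUT and oracle books disagree.

    import java.util.*;

    public class BookComparator {
        // Compare the SUT's end-of-session "book" to the oracle's "book".
        // Empty result means the run passes; entries describe discrepancies.
        public static List<String> compare(Map<String, Long> oracleBook,
                                           Map<String, Long> sutBook) {
            List<String> diffs = new ArrayList<>();
            Set<String> keys = new TreeSet<>(oracleBook.keySet());
            keys.addAll(sutBook.keySet());
            for (String instrument : keys) {
                Long expected = oracleBook.get(instrument);
                Long actual = sutBook.get(instrument);
                if (!Objects.equals(expected, actual))
                    diffs.add(instrument + ": expected " + expected + ", got " + actual);
            }
            return diffs;
        }
    }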
25. Test Harness
[Diagram: the Simulator feeds generated inputs through Adapters to both the SUT
and the Oracle; the Comparator, backed by the Splainer for rule backtracking,
matches the SUT run against the Oracle output and produces the Test Verdict and
Run Reports.]
26. Technical Achievements
• AI-based user simulation generates test suites
• All inputs generated under operational profile
• High volume oracle and evaluation
• Every test run unique and realistic (about 200)
• Evaluated functionality and load response with fresh tests
• Effective control of many different test agents (COTS/custom, Java/4Test/Perl/SQL/proprietary)
28. Results
• Revealed about 1,500 bugs over two years
– 5% showstoppers
• Five person team, huge productivity increase
– 1 TPH versus 1,800 TPH
• Achieved proven high reliability
– Last pre-release test run: 500,000 events in two hours, no failures detected
– No production failures
• Abandoned by successor QA staff
30. Challenges
• Prove interoperability to Federal Judge and court-appointed scrutineers
• Validation of documentation, not as-built implementation
• Is each TD all a third party needs to develop:
– A client that interoperates with an existing service?
– A service that interoperates with existing clients?
• Only use over-the-wire messages
31. Microsoft Protocols
• Remote API for a service
• All product groups
– Windows Server
– Office
– Exchange
– SQL Server
– Others
• 500+ protocols
– Remote Desktop
– Active Directory
– File System
– Security
– Many others
32. Microsoft Technical Document (TD)
• Publish protocols as “Technical Documents”
• One TD for each protocol
• Black-box spec – no internals
• All data and behavior specified with text
35. Protocol Quality Assurance Process
Authors deliver successive TD versions (TD v1, TD v2, … TD vn). The Test Suite
Developers work through four phases, each closed by a Reviewers' checkpoint:
• Study: scrutinize TD; define test strategy. Review: TD ready? Strategy OK?
• Plan: complete test requirements; high-level test plan. Review: Test rqmts OK? Plan OK?
• Design: complete model; complete adapters. Review: Model OK? Adapter OK? Test code OK?
• Final: generate & run test suite; prep user doc. Review: Coverage OK?
36. Productivity
“On average, model-based testing took 42% less time than hand-coding tests” (Grieskamp et al.)
Avg Hrs Per Test Requirement:
• Document review: 1.1
• Test requirement extraction: 0.8
• Model authoring: 0.5
• Traditional test coding: 0.6
• Adapter coding: 1.2
• Test case execution: 0.6
• Final adjustments: 0.3
• Total, all phases: 5.1
Threshold result:
• Nearly all requirements had fewer than three tests
• Much greater gain for full coverage
37. Results
• Published 500+ TDs, ~150,000 test requirements
• 50,000+ bugs, most identified before tests run
• Many Plugfests, many 3rd party users
• Released high interest test suites as open source
• Met all regulator requirements, on time
– Judge closes DOJ anti-trust case May 12, 2011
• ~20 MSFT product teams now using Spec Explorer
38. TOOL THUMBNAILS
All product or company names mentioned herein may be trademarks or registered
trademarks of their respective owners.
39. CertifyIt
Smartesting
Model: Use cases, OCL; custom test stereotypes; keyword/action abstraction
Notation: UML 2, OCL, custom stereotypes, UML Testing Profile
UML Support: Yes
Requirements Traceability: Interface to DOORS, HP QC, others
Generation: Constraint solver selects minimal set of boundary values
Oracle: Post conditions in OCL, computed result for test point
Adapter: Natural language option; HP GUI drivers
Typical SUT: Financial, Smart Card
Notable: Top-down formally defined behavior; data stores; GUI model
40. Conformiq Designer
Conformiq
Model: State machines with coded events/actions
Notation: Statecharts, Java
UML Support: Yes
Requirements Traceability: Integrated requirements, traceability matrix
Generation: Graph traversal: state, transition, 2-switch
Oracle: Model post conditions, any custom function
Adapter: Output formatter, TTCN and user-defined
Typical SUT: Telecom, embedded
Notable: Timers; parallelism and concurrency; on-the-fly mode
41. MaTeLo
All4Tec
Model: State machine with transition probabilities (Markov); data domains, event timing
Notation: Decorated state machine
UML Support: No
Requirements Traceability: Integrated requirements and trace matrix; import from DOORS, others
Generation: Most likely path, user-defined, all transitions, Markov simulation; subset or full model
Oracle: User conditions; Matlab and Simulink
Adapter: EXAM mappers; Python output formatter
Typical SUT: Hardware-in-the-loop; automotive, rail
Notable: Many standards-based device interfaces; supports software reliability engineering
42. Automatic Test Generation
IBM/Rational
Model: Sequence diagrams, flow charts, statecharts, codebase
Notation: UML, SysML, UML Testing Profile
UML Support: Yes
Requirements Traceability: DOORS integration; design model traceability
Generation: Parses generated C++ to generate test cases; reaches states, transitions, operations, and events for modeled classes
Oracle: User code
Adapter: User code, merge generation
Typical SUT: Embedded
Notable: Part of systems engineering tool chain
43. Spec Explorer
Microsoft
Model: C# class with "action" method pre/post conditions; regular expressions define "machine" of classes/actions
Notation: C#
UML Support: Sequence diagrams
Requirements Traceability: API for logging user-defined requirements
Generation: For any machine, constraint solver finds feasible short or long path of actions; generates C# runtime
Oracle: Action post conditions; any custom function
Adapter: User code
Typical SUT: Microsoft protocols, APIs, products
Notable: Pairwise data selection; on-the-fly mode; use any .NET capability
44. T-VEC/RAVE
T-VEC
Model: Boolean system with data boundaries; SCR types and modules; hierarchic modules
Notation: SCR-based, tabular definition; accepts Simulink
UML Support: No
Requirements Traceability: RAVE requirements management; interface to DOORS, others
Generation: Constraint solver identifies test points
Oracle: Solves constraints for expected value
Adapter: Output formatter; HTML, C++, Java, Perl, others
Typical SUT: Aerospace, DoD
Notable: Simulink for input, oracle, model checking; MCDC model coverage; non-linear and real-valued constraints
45. Close Cousins
• Data Generators
– Grammar based
– Pairwise, combinatorial
– Fuzzers
• TTCN-3 Compilers
• Load Generators
• Model Checkers
• Model-driven Development tool chains
47. MBT User Survey
• Part of 1st Model-based Testing User Conference
– Offered to many other tester communities
• In progress
• Preliminary analysis of responses to date
• https://www.surveymonkey.com/s/JSJVDJW
48. MBT Users, SUT Domain
[Bar chart, 0%–40% of respondents:]
Gaming
Social Media
Other
Supercomputing
Communications
Software Infrastructure
Embedded
Transaction Processing
50. MBT Users, Software Process
[Bar chart, 0%–25% of respondents:]
Other
Ad Hoc
Waterfall
Spiral
Incremental
XP/TDD
CMMI level 2+
Agile
51. How Used?
[Two bar charts:]
What stage of adoption? (0%–60%): Evaluation, Pilot Project, Rollout, Routine use
Who is the tool provider? (0%–80%): In House, Open Source, Commercial
52. What is the Overall MBT Role?
[Two bar charts:]
At what scope is MBT used? (0%–80%): System, Component, Unit
What is overall test effort for each testing mode? (25%–40%): Manual, Hand-coded, Model-based
53. How Long to be Proficient?
Median: 100 hours
[Bar chart, hours of training/use to become proficient (0%–50% of respondents):
160+, 80-120, 1-40]
54. How Bad are Common Problems?
[Stacked bars, 0%–100% of respondents rating each problem as worse than
expected, not an issue, or better than expected:]
Misses bugs
Can't integrate with other test assets
Developing SUT interfaces too hard
Inadequate coverage
Developing test models is too difficult
Oracle ineffective
Too difficult to update model
Model "blows up"
55. MBT Effect on Time, Cost, Quality?
[Bar chart: percent change from baseline, better versus worse, for Bugs
Escaped, Overall Testing Costs, and Overall Testing Time. E.g., 35% fewer
escaped bugs, 0% more bugs; other reported values are 36%, 28%, 23%, and 18%.]
56. MBT Traction
[Two pie charts. Overall, how effective is MBT? Not at all 0%, Slightly 4%,
Moderately 21%, with the remainder split 38%/42% between Very and Extremely.
How likely are you to continue using MBT? No effect 4%, Slightly 13%, with the
remainder split between Moderately and Extremely.]
58. What Have We Learned?
• Test engineering with rigorous foundation
• Global best practice
• Broad applicability
• Mature commercial offerings
• Many proof points
• Commitment and planning necessary
• 10x to 1,000x improvement possible
59. Q&A
rvbinder@gmail.com