2. Peter Zimmerer
Siemens AG
Peter Zimmerer is a principal engineer at Siemens AG, Corporate
Technology, in Munich, Germany. For more than twenty years Peter has
been working in the field of software testing and quality engineering. He
performs consulting, coaching, and training on test management and test
engineering practices in real-world projects, driving research and
innovation in this area. An ISTQB® Certified Tester Full Advanced Level,
Peter is a member of the German Testing Board, has authored several
journal and conference contributions, and is a frequent speaker at
international conferences. Contact Peter at peter.zimmerer@siemens.com.
3. Siemens Corporate Technology | November 2013
Design for Testability:
A Tutorial for Devs and Testers
Peter Zimmerer
Unrestricted © Siemens AG 2013. All rights reserved
Contents
‡ What is testability all about?
‡ What are the influencing factors and constraints of testability?
‡ Why is testability so important?
‡ Who is responsible and who must be involved to realize design for
testability?
‡ How can we improve, foster, and sustain the testability of a system?
‡ How can we innovate and renovate design for testability in our
projects?
Æ A new, comprehensive strategy on
design for testability over the whole lifecycle of a system
Page 2
November 12, 2013
Corporate Technology
Peter Zimmerer
Unrestricted © Siemens AG 2013. All rights reserved
4. A story on missing testability …
The Therac-25 was a radiation therapy machine
produced by Atomic Energy of Canada Limited
(AECL) after the Therac-6 and Therac-20 units.
It was involved in at least six accidents between
1985 and 1987, in which patients were given
massive overdoses of radiation, approximately
100 times the intended dose. These accidents
highlighted the dangers of software control of
safety-critical systems, and they have become
a standard case study in health informatics and
software engineering.
A commission attributed the primary cause to poor software design and
development practices, not to the individual coding errors that were found.
In particular, the software was designed in a way that made it practically
impossible to test in a clean, automated fashion.
http://en.wikipedia.org/wiki/Therac-25
Testability – Theoretical definitions
Testability is the ease of validating that the software meets the requirements. Testability is broken
down into: accountability, accessibility, communicativeness, self-descriptiveness, structuredness.
Barry W. Boehm: Software Engineering Economics, Prentice Hall, 1981
Testability is the degree to which, and ease with which, software can be effectively tested.
Donald G. Firesmith: Testing Object-Oriented Software, Proceedings of the Object EXPO Europe, London (UK), July 1993
The degree to which a system or component facilitates the establishment of test criteria and the
performance of tests to determine whether those criteria have been met.
The degree to which a requirement is stated in terms that permit establishment of test criteria and
performance of tests to determine whether those criteria have been met.
IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. New York, NY, 1990
The testability of a program P is the probability that if P contains faults, P will fail under test.
Dick Hamlet, Jeffrey Voas: Faults on its Sleeve. Amplifying Software Reliability Testing, in Proceedings of the 1993 International
Symposium on Software Testing and Analysis (ISSTA), Cambridge, Massachusetts, USA, June 28–30, 1993
Software testability refers to the ease with which software can be made to demonstrate its faults
through (typically execution-based) testing. Len Bass, Paul Clements, Rick Kazman: Software Architecture in Practice
The degree to which an objective and feasible test can be designed to determine whether a
requirement is met. ISO/IEC 12207
The capability of the software product to enable modified software to be validated. ISO/IEC 9126-1
Degree of effectiveness and efficiency with which test criteria can be established for a system,
product or component and tests can be performed to determine whether those criteria have been
met. ISO/IEC 25010, adapted from ISO/IEC/IEEE 24765
5. Testability – Factors (1)
Control(lability)
‡ The better we can control the system (in isolation),
the more and better testing can be done, automated, and optimized
‡ Ability to apply and control the inputs to the system under test
or place it in specified states (for example reset to start state, roll back)
‡ Interaction with the system under test (SUT) through control points
Visibility / observability
‡ What you see is what can be tested
‡ Ability to observe the inputs, outputs, states, internals, error conditions,
resource utilization, and other side effects of the system under test
‡ Interaction with the system under test (SUT) through observation points
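The two factors above can be sketched in code. This is a minimal, illustrative example (the `Counter` class and its method names are hypothetical, not from the slides): the SUT exposes an explicit control point to place it in a specified state and an explicit observation point to inspect the result.

```python
# Sketch: a SUT with an explicit control point and observation point.
# All names here are illustrative.

class Counter:
    """System under test with built-in controllability and observability."""

    def __init__(self):
        self._value = 0

    # Control point: place the SUT in a specified state (e.g. reset).
    def reset(self, start=0):
        self._value = start

    def increment(self):
        self._value += 1

    # Observation point: expose internal state to the test.
    def observe(self):
        return self._value


# A test drives the SUT via the control point, then checks via observation.
sut = Counter()
sut.reset(10)              # controllability: roll back to a known start state
sut.increment()
assert sut.observe() == 11  # observability: verify the resulting state
```

Without the `reset` control point the test would have to reconstruct the SUT to reach a known state; without `observe` it could only guess at the outcome.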
Testability – Factors (2)
Availability, operability
‡ The better it works, the more efficiently it can be tested
‡ Bugs add analysis and reporting overhead to testing
‡ No bugs should block the execution of the tests
Simplicity, consistency, and decomposability
‡ The less there is to test, the more quickly we can test it
‡ Standards, (code / design) guidelines, naming conventions
‡ Layering, modularization, isolation, loose coupling, separation of concerns, SOLID
‡ Internal software quality (architecture, design, code), technical debt
Stability
‡ The fewer the changes, the fewer the disruptions to testing
‡ Changes are infrequent, controlled and do not invalidate existing tests
‡ Software recovers well from failures
Understandability, knowledge (of expected results)
‡ The more (and better) information we have, the smarter we will test
‡ Design is well understood, good and accurate technical documentation
@Microsoft: SOCK – Simplicity, Observability, Control, Knowledge
6. Testability – Factors (3)
[Diagram: testability factors mind map – Control(lability) (question), Visibility / Observability (answer), Understandability / Knowledge (oracle), Availability / Operability, Stability, Simplicity]
Testability – Constraints
How we design and build the system affects testability
Testability is the key to cost-effective test automation
Testability is often a better investment than automation
‡ Test environment: stubs, mocks, fakes, dummies, spies
‡ Test oracle: assertions (design by contract)
Æ Implemented either inside the SUT or inside the test automation
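The test-environment and test-oracle points above can be illustrated with a small sketch (all class names here are hypothetical): a stub replaces the depended-on component so the SUT can be controlled in isolation, and an assertion inside the SUT acts as an oracle.

```python
# Sketch: stubbing a depended-on component, plus an assertion as oracle.
# Names are illustrative, not from the slides.

class PaymentGateway:
    """Real dependency: would make a network call, unusable in a unit test."""
    def charge(self, amount):
        raise RuntimeError("external service not reachable in tests")

class StubGateway:
    """Stub with a spy-style call record: canned answer plus observability."""
    def __init__(self):
        self.calls = []
    def charge(self, amount):
        self.calls.append(amount)
        return "ok"

class Checkout:
    """SUT: the dependency is passed in, not hardwired (testability by design)."""
    def __init__(self, gateway):
        self.gateway = gateway
    def pay(self, amount):
        # Oracle inside the SUT: precondition as a design-by-contract check.
        assert amount > 0, "precondition: amount must be positive"
        return self.gateway.charge(amount)

stub = StubGateway()
result = Checkout(stub).pay(42)
assert result == "ok"
assert stub.calls == [42]   # the stub also serves as an observation point
```

Because `Checkout` accepts any gateway, the same production code runs against the real service in operation and against the stub under test.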
Challenges for testability
‡ Size, complexity, structure, variants, 3rd party components, legacy code
‡ Regulatory aspects
‡ You cannot log every interface (huge amount of data)
‡ Conflicts with other non-functional requirements such as encapsulation,
performance, or security, and may impact system behavior
‡ Non-determinism: concurrency, threading, race conditions, timeouts,
message latency, shared and unprotected data
7. Testability is affected / required by new technologies
Example: Cloud computing
Cloud testability – the negative
– Information hiding and remoteness
– Complexity and statefulness
– Autonomy and adaptiveness
– Dependability and performance
– Paradigm infancy (temporary)
Cloud testability – the positive
+ Computational power
+ Storage
+ Virtualization
Testing requires lots of resources,
and the cloud is certainly powerful enough to handle it.
Reference: Tariq M. King, Annaji S. Ganti, David Froslie:
Towards Improving the Testability of Cloud Application Services, 2012
Design for Testability (DfT) – Why?
Testability ≈ How easy / effective / efficient / expensive is it to test?
Can it be tested at all?
Increase depth and quality of tests – Reduce cost, effort, time of tests
‡ More bugs detected (earlier) and better root cause analysis with less effort
‡ Support testing of error handling and error conditions, for example to test how a
component reacts to corrupted data
‡ Better and more realistic estimation of testing efforts
‡ Testing of non-functional requirements, test automation, regression testing
‡ Cloud testing, testing in production (TiP)
Reduce cost, effort, and time for debugging, diagnosis, maintenance
‡ Typically underestimated by managers, not easy to measure honestly
Provide building blocks for self-diagnosing / self-correcting software
Possible savings ~10% of total development budget
(Stefan Jungmayr, http://www.testbarkeit.de/)
Simplify, accelerate, and power up the testing
Focus the testing on the real stuff … and have more fun in testing …
8. Educate stakeholders on benefit of DfT
Design for Testability (DfT) – Who?
Joint venture between architects (+ developers) and testers
‡ Collaboration between different stakeholders
‡ Requires accountability and clear ownership
Testability must be built-in by architects (+ developers)
‡ Pro-active strategy in tactical design:
design the system with testability as a key design criterion
‡ Testers (test automation engineers) must define testability requirements
to enable effective and efficient test automation
Contractual agreement between design and test
Balancing production code and test code
‡ Production code and test code must fit together
‡ Manage the design of production code and test code
as codependent assets
Educate stakeholders on benefit of DfT
Design for Testability (DfT) – How?
Suitable testing architecture, good design principles
Interaction with the system under test through
well-defined control points and observation points
Additional (scriptable) interfaces, ports, hooks, mocks, interceptors
for testing purposes (setup, configuration, simulation, recovery)
Coding guidelines, naming conventions
Internal software quality (architecture, code)
Control(lability)
Visibility / observability
Built-in self-test (BIST), built-in test (BIT)
Consistency checks (assertions, design by contract, deviations)
Logging and tracing (AOP, counters, monitoring, probing, profiling)
Diagnosis and dump utilities, black box (internal states, resource
utilization, anomalies at runtime)
Think test-first (xTDD): how can I test this?
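The consistency-check idea above (assertions as design by contract) can be sketched as follows. This is an illustrative example with hypothetical names; in Python such assertions act as a built-in oracle during testing and can be stripped from a release build with `python -O`.

```python
# Sketch: built-in consistency checks via design-by-contract assertions.
# Names are illustrative.

def transfer(balances, src, dst, amount):
    """Move money between accounts, with built-in pre/postcondition checks."""
    # Preconditions (design by contract)
    assert amount > 0, "amount must be positive"
    assert balances[src] >= amount, "insufficient funds"
    total_before = sum(balances.values())

    balances[src] -= amount
    balances[dst] += amount

    # Postcondition / invariant: money is neither created nor destroyed.
    assert sum(balances.values()) == total_before
    return balances

accounts = {"a": 100, "b": 50}
transfer(accounts, "a", "b", 30)
assert accounts == {"a": 70, "b": 80}
```

A deviation from the contract fails immediately at the faulty operation, instead of surfacing later as a hard-to-diagnose wrong result.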
9. Architectural and design patterns (1)
Use layered architectures, reduce number of dependencies
Use good design principles
‡ Strong cohesion, loose coupling, separation of concerns
[Diagram: Tester interacting with the SUT in a real or simulated environment]
Component orientation and adapters ease integration testing
‡ Components are the units of testing
‡ Component interface methods are subject to black-box testing
Avoid / resolve cyclic dependencies between components
‡ Combine or split components
‡ Dependency inversion via callback interface (observer-observable
design pattern)
Example: MSDN Views Testability guidance
The purpose is to provide information about how to increase the
testability surface of Web applications using the Model-View-Presenter
(M-V-P) pattern. http://msdn.microsoft.com/en-us/library/cc304742.aspx
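The dependency-inversion-via-callback bullet above can be sketched in a few lines (the `Sensor`/`Display` names are hypothetical): the lower-level component calls back through a subscription interface instead of referencing the higher-level one, which breaks the cycle and lets each part be tested alone.

```python
# Sketch: resolving a cyclic dependency with the observer-observable pattern.
# Names are illustrative.

class Sensor:
    """Lower-level component: knows only abstract callbacks, never Display."""
    def __init__(self):
        self._listeners = []
    def subscribe(self, callback):
        self._listeners.append(callback)
    def read(self, value):
        # Dependency inverted: Sensor calls back out, it does not import Display.
        for cb in self._listeners:
            cb(value)

class Display:
    """Higher-level component, attached only via the callback interface."""
    def __init__(self):
        self.shown = []
    def on_reading(self, value):
        self.shown.append(value)

sensor = Sensor()
display = Display()
sensor.subscribe(display.on_reading)
sensor.read(7)
assert display.shown == [7]

# In a unit test, a plain list can stand in for Display entirely:
seen = []
s2 = Sensor()
s2.subscribe(seen.append)
s2.read(1)
assert seen == [1]
```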
Architectural and design patterns (2)
Provide appropriate test hooks and factor your design in a way that
lets test code interrogate and control the running system
‡ Isolate and encapsulate dependencies on the external environment
‡ Use patterns like dependency injection, interceptors, introspection
‡ Use configurable factories to retrieve service providers
‡ Declare and pass along parameters instead of hardwiring references to
service providers
‡ Declare interfaces that can be implemented by test classes
‡ Declare methods as overridable by test methods
‡ Avoid references to literal values
‡ Shorten lengthy methods by making calls to replaceable helper methods
Promote repeatable, reproducible behavior
‡ Provide utilities to support deterministic behavior (e.g. use seed values)
Æ That’s just good design practice!
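The last bullet (seed values for deterministic behavior) can be sketched as follows. This is an illustrative example (`shuffle_deck` is a hypothetical name): the seed is an explicit parameter rather than hidden global randomness, so any failing run can be reproduced exactly.

```python
# Sketch: repeatable, reproducible behavior via an injectable seed value.
# The function name is illustrative.
import random

def shuffle_deck(cards, seed=None):
    """Shuffle a copy of the cards; a fixed seed makes the result repeatable."""
    rng = random.Random(seed)   # local RNG, seeded by the caller
    deck = list(cards)
    rng.shuffle(deck)
    return deck

# With the same seed, two runs are identical - a test failure that depends
# on the shuffle order can be replayed deterministically.
a = shuffle_deck(range(10), seed=42)
b = shuffle_deck(range(10), seed=42)
assert a == b
assert sorted(a) == list(range(10))   # still a permutation of the input
```

In production the seed can simply be omitted; the testability hook costs nothing at runtime.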
10. Design for Testability (DfT) patterns
Dependency injection
The client provides the depended-on object
to the SUT
Dependency lookup
The SUT asks another object to return the
depended-on object before it uses it
Humble object
We extract the logic into a separate easy-to-test component that is
decoupled from its environment
Test hook
We modify the SUT to behave differently during the test
Reference: Gerard Meszaros: xUnit Test Patterns: Refactoring Test Code, Addison-Wesley, 2007
http://xunitpatterns.com/
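Two of the patterns above can be contrasted in a short sketch (all names are illustrative, not Meszaros's code): with dependency injection the client hands the depended-on object to the SUT; with dependency lookup the SUT asks a registry for it, and the test repopulates the registry.

```python
# Sketch: dependency injection vs. dependency lookup. Names are illustrative.

class FixedClock:
    """Test double standing in for a real clock (the depended-on object)."""
    def now(self):
        return 1700000000

# Dependency injection: the client provides the depended-on object to the SUT.
class ReportDI:
    def __init__(self, clock):
        self.clock = clock
    def header(self):
        return "report@{}".format(self.clock.now())

# Dependency lookup: the SUT asks another object (a registry) to return the
# depended-on object before it uses it.
REGISTRY = {}

class ReportLookup:
    def header(self):
        clock = REGISTRY["clock"]   # looked up, not hardwired
        return "report@{}".format(clock.now())

# Both variants are testable because neither hardwires the real clock.
assert ReportDI(FixedClock()).header() == "report@1700000000"
REGISTRY["clock"] = FixedClock()
assert ReportLookup().header() == "report@1700000000"
```

Injection keeps the dependency visible in the constructor signature; lookup hides it behind the registry, which is more flexible but makes the dependency harder to see.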
Use principles and practices from
clean code developer (CCD)
http://www.clean-code-developer.de/
http://www.clean-code-developer.com/
Interfaces can act as
ƒ control points
ƒ observation points
Attributes of testability interfaces
[Diagram: the SUT exchanges indirect input / output and product events with dependent components through the Product API, and test events through the Test API]
‡ Lightweight vs. intrusive
‡ Manual vs. automatable
‡ Realistic vs. artificial
‡ Tester owned vs. developer owned
‡ Control-only vs. visibility-only vs. both
‡ Information-only vs. knowledge
‡ Debug-only (logs / asserts) vs. release build
‡ Internal (embedded) vs. external vs. shipping (documented) feature
‡ NFR (e.g. security) harmless vs. NFR conflict (e.g. security hole)
Logging and tracing
Check the architecture and design.
Check the dynamic behavior
(i.e. communication) of the system.
Detect bugs, including sporadic bugs
like race conditions.
[Diagram: distributed system emitting events]
Analyze execution, data flow, and system state.
Follow error propagation and reproduce faults.
For this, tracing is essential.
Tracing is important for unit testing,
integration testing, system testing,
system monitoring, diagnosis, …
[Diagram: relationship between trace extraction (event collection), trace visualization, and trace analysis]
Start early with a logging and tracing concept!
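A logging and tracing concept can start as small as this sketch (the handler and logger names are illustrative): events are collected centrally so that tests and diagnosis tools can follow the data flow afterwards.

```python
# Sketch: a minimal tracing concept - events collected centrally for analysis.
# Names are illustrative.
import logging

trace = []  # central event collection

class ListHandler(logging.Handler):
    """Event collection: gathers emitted records for later trace analysis."""
    def emit(self, record):
        trace.append((record.name, record.getMessage()))

log = logging.getLogger("billing")
log.setLevel(logging.DEBUG)
log.addHandler(ListHandler())
log.propagate = False   # keep the example self-contained

# Instrumented production code would emit events like these:
log.debug("start invoice %s", 17)
log.debug("invoice %s done", 17)

# Trace analysis: a test (or a diagnosis tool) inspects the collected events,
# e.g. to verify that an operation ran to completion.
assert [msg for _, msg in trace] == ["start invoice 17", "invoice 17 done"]
```

The same collected events that a unit test asserts on can later feed trace visualization and diagnosis in integration and system testing.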
12. Further benefits of
good logging and tracing practices
Avoid analysis paralysis – a situation where the test team spends
as much time investigating test failures as they do testing
Support failure matching and (automated) failure analysis
Avoid duplicates in the bug database by using rules of analysis
when deciding if a failure has been previously observed
Backbone of reliable failure analysis
Example: Windows Error Reporting (WER)
‡ Small, well defined set of parameters
‡ Automated classification of issues
‡ Similar crash reports are grouped together
ÆReduced amount of data collected from customers:
bucketing
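The WER-style bucketing described above can be sketched as follows. This is an illustrative example; the field names in the reports are hypothetical, but the principle matches the slide: a small, well-defined set of parameters forms a signature, and reports with the same signature fall into the same bucket.

```python
# Sketch: grouping similar failure reports into buckets by a small signature.
# Field names are illustrative.

def bucket_key(report):
    """Small, well-defined set of parameters identifying a failure class."""
    return (report["app"], report["module"], report["offset"])

def bucketize(reports):
    """Group reports so duplicates of a known failure are recognized."""
    buckets = {}
    for r in reports:
        buckets.setdefault(bucket_key(r), []).append(r)
    return buckets

reports = [
    {"app": "viewer", "module": "render.dll", "offset": 0x1A2, "user": 1},
    {"app": "viewer", "module": "render.dll", "offset": 0x1A2, "user": 2},
    {"app": "viewer", "module": "io.dll", "offset": 0x9, "user": 3},
]
buckets = bucketize(reports)
assert len(buckets) == 2                                    # two failure classes
assert len(buckets[("viewer", "render.dll", 0x1A2)]) == 2   # duplicates grouped
```

The same idea keeps a bug database free of duplicates: a new failure is filed only if its signature has not been seen before.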
Diagnosis and dump utilities, black box
(internal states, resource utilization, anomalies at runtime)
Detect, identify, analyze, isolate, diagnose, and reproduce
a bug (location, factors, triggers, constraints, root cause)
in different contexts as effectively and as efficiently as possible
‡ Unit / component testing:
bug in my unit/component or external unit/component or environment?
‡ Integration testing of several subsystems:
bug in my subsystem or external subsystem or environment?
‡ System of systems testing:
bug in which part of the whole system of systems?
‡ Product line testing in product line engineering (PLE):
bug in domain engineering part or application engineering part?
‡ Bug in system under test (SUT) or test system?
Æ quality of test system is important as well ...
‡ SUT in operation:
maintenance, serviceability at the customer site
13. Educate stakeholders on benefit of DfT
Strategy for Design for Testability (DfT)
over the lifecycle (1)
[Diagram: DfT spanning Requirements, Architecture, Test]
‡ Consistently defined and well-known in the project to drive a common
understanding, awareness, and collaboration with clear accountability
‡ Addressed and specified as one important non-functional requirement
from the beginning
‡ Investigated by elicitation techniques (interviews, observation,
brainstorming, story writing, prototyping, personas, reuse, apprenticing)
‡ Included and elaborated in the product backlog
‡ Defined, specified, traced, and updated as a risk item in the risk-based
testing strategy with well-defined mitigation tasks
‡ Balanced with regards to conflicting non-functional requirements
‡ Implemented dependent on the specific domain, system, and
(software) technologies that are available and used
Strategy for Design for Testability (DfT)
over the lifecycle (2)
Educate stakeholders on benefit of DfT
[Diagram: DfT spanning Requirements, Architecture, Test]
‡ Specified and elaborated in a testability guideline for architects,
developers, testers (Æ DfT – How?)
Æ Testability view*/profile in the architectural design approach (ADA)
‡ Description how to recognize characteristics of the ADA in an artifact
to support an analysis and check of the claimed ADA
‡ A set of controls (controllables) associated with using the ADA
‡ A set of observables associated with using the ADA
‡ A set of testing firewalls within the architecture used for regression testing
(firewalls enclose the set of parts that must be retested; they are imaginary
boundaries that limit retesting to the impacted parts of modified software)
‡ A set of preventive principles (e.g. memory / time partitioning):
specific system behavior and properties guaranteed by design (e.g. scheduling,
interrupt handling, concurrency, exception handling, error management)
‡ A fault model for the ADA: failures that can occur, failures that cannot occur
‡ Analysis (architecture-based or code-based) corresponding to the fault model
to detect ADA-related faults in the system (e.g. model checking)
‡ Tests made redundant or de-prioritized based on fault model and analysis
*Additional view to the 4+1 View Model of Software Architecture by Philippe Kruchten
14. Educate stakeholders on benefit of DfT
Strategy for Design for Testability (DfT)
over the lifecycle (3)
[Diagram: DfT spanning Requirements, Architecture, Test]
‡ Included as one important criterion in milestones and quality gates, e.g.:
Æ Check control and observation points in the architecture
Æ Check testability requirements for sustaining test automation
Æ Check logging and tracing concept to provide valuable information
Æ Check testability support in coding guidelines
Æ Check testability needs (especially visibility) for regression testing
‡ Investigated and explored in static testing:
architecture interrogation (interviews, interactive workshops),
architecture reviews, QAW, ATAM*, code reviews
‡ Investigated and explored in dynamic testing:
special testability tour in scenario testing (application touring)
Æ Neglecting testability means increasing technical debt … $$ …
*Architecture-centric methods from SEI, see http://www.sei.cmu.edu/
Summary – Testability and
Design for Testability (DfT)
[Diagram: DfT spanning Requirements, Architecture, Test]
‡ What?
‡ Why?
‡ Who?
‡ How?
Æ Strategy for Design for Testability (DfT) over the lifecycle