5. IN-CONFIDENCE
The British Standard BS7925-1 defines negative testing as “testing aimed at showing software does not work”.
In practice, the term is defined site by site and sometimes team by team. Other definitions include tests that aim to exercise the functionality that deals with:
Input validation, rejection and re-requesting
Coping with absent, slow or broken external resources
Error-handling functionality, i.e. messaging, logging, monitoring
Recovery functionality, i.e. fail-over, rollback and restoration
Typical goals of negative testing:
• Discovery of faults that result in significant failures, corruptions and security breaches
• Observation and measurement of a system’s response to external problems
• Exposure of software weaknesses and their potential for exploitation
6.
Negative tests fall into two categories:
Tests designed to make the system fail
Tests designed to exercise functionality that deals with failure
There are often further aims of negative testing that do not set the scope of the activity:
Prompt exposure of significant faults
Learning about function through the study of dysfunction
Verification (and possible enhancement) of the risk model used to prioritise testing
Documentation of common failures, characteristic symptoms, and running fixes
7. Positive vs negative testing
Positive: provides a level of confidence that a system works.
Negative: seeks to show that the software does not work.
Positive: determines that your application works as expected; if an error is encountered during positive testing, the test fails.
Negative: ensures that your application can gracefully handle invalid input, unexpected user behaviour or adverse system conditions.
Positive: tests the system using valid data.
Negative: tests the system using invalid data.
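The contrast above can be sketched in code. This is a minimal illustration using a hypothetical `withdraw` function: the positive test passes valid data and fails on any error, while the negative tests pass invalid data and treat graceful rejection as the pass condition.

```python
def withdraw(balance, amount):
    """Return the new balance, rejecting invalid withdrawal requests."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Positive test: valid data; an error here means the test fails.
assert withdraw(100, 30) == 70

# Negative tests: invalid data; graceful rejection is the expected result.
for bad_amount in (0, -5, 101):
    try:
        withdraw(100, bad_amount)
    except ValueError:
        pass  # rejected as expected
    else:
        raise AssertionError(f"{bad_amount} should have been rejected")
```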
9. Budgeting and estimation
Scripted tests require up-front investment:
Test scripting
Review
Test data preparation
Test logistics
Primary negative testing: execution of tests produced by formal test design
Secondary negative testing: unplanned negative / exploratory testing, which cannot be planned in advance; projects with a fixed budget struggle to accommodate it
10. Planning
Negative testing does not have a well-defined position in waterfall or iterative processes.
Activities:
Test analysis
Test design
Test monitoring and control
Scheduling and staffing
Exclusion from a particular test phase requires a sound assessment of the risk, backed by re-assignment of the task.
11.
Negative testing can be done at all levels of testing.

Phase: Requirement analysis / system design / unit test
Approach: Requirement analysis to derive negative tests; formal techniques; exception-handling and validation functionality in the code
Staffing: Designers, coders, test analysts

Phase: System / integration test
Approach: Execution of scripted, primary and secondary tests; assess risk and potential for exploitation
Staffing: Testers, automation, experienced testers

Phase: UAT / beta test
Approach: User-facing errors; use cases and mis-use cases; failure modes; performance limits; fail-over and recovery tests
Staffing: UAT testers (customer users), performance testers
15. Political support
People outside the test team often dislike the idea of spending time deliberately breaking the system.
Negative testing needs political support if it is to remain effective and valuable.
Test managers are well versed in providing this kind of support. Tip: avoid the term “negative test”.
Whatever the approach, it is important that the business understands the value of the information produced.
It is equally important for testers to understand the value that the business places on that information.
19. Test against known constraints
Explicit restrictions
Implicit restrictions
Constraints
Other test types:
Stress testing
Scalability testing
Reliability testing
20. Concurrency
Testing concurrent use of resources can be a very fruitful way to discover bugs:
Databases
Files
Connections
Hardware
Look for more than two simultaneous requests, queuing, timeouts and deadlocks.
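A concurrency test along these lines can be sketched with the standard library. In this minimal, hypothetical example, several threads hammer a shared resource (standing in for a database row) and the test checks for lost updates; removing the lock is the kind of defect such a test is designed to expose.

```python
import threading

class Resource:
    """A shared counter standing in for a concurrently used resource."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        # Without this lock, concurrent read-modify-write cycles can
        # interleave and silently lose updates.
        with self._lock:
            self.value += 1

def hammer(resource, n):
    for _ in range(n):
        resource.increment()

resource = Resource()
# More than two concurrent requesters, as the slide suggests.
threads = [threading.Thread(target=hammer, args=(resource, 10_000))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# A missing lock typically shows up here as a total below 80_000.
assert resource.value == 8 * 10_000, resource.value
```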
21. Use cases and mis-use cases
Beyond the happy path, some general mis-use cases for a GUI or browser:
Field entry: varying the order from the usual sequence on a form
Re-entry following rejection of input
Choosing to change existing details
Use of ‘cancel’
Use following ‘undo’
On browsers, navigating using the ‘back’ button, bookmarks etc.
Starting a sequence halfway through
Dropping out of a sequence or interaction without completion or logout
23.
A test selection strategy guides the team in deciding which tests to plan and perform, and which to run first:
Choose tests that allow broad and reliable observations of the system
Test the effectiveness and robustness of exception-handling scenarios early
Use a broad set of negative tests to observe the system from different perspectives
Prioritise testing based on known exploits that can lead to a crash
Test the non-happy paths
24. Simpler to write than to perform
Negative tests are often simple to write, but hard to execute.
This applies particularly to failure modes, concurrency, and exception handling.
This need not be a problem; the tests have great value as thought experiments for designers and exploratory testers.
Ensure that such tests have sufficient time and tool support.
Fault injection tools can make impossible tests practical, and monitoring tools allow problems to be studied ‘in the wild’, e.g. during a volume test.
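Fault injection need not require special tooling at the unit level. This minimal sketch uses `unittest.mock` to substitute a failing transport, so the error-handling path can be exercised without a real broken network; `fetch_status` and its `transport` parameter are hypothetical.

```python
from unittest import mock

def fetch_status(transport):
    """Return the service status, degrading gracefully on failure."""
    try:
        return transport.get("/status")
    except ConnectionError:
        return "unknown"  # the error-handling behaviour under test

# Inject the fault: this mock transport always raises.
broken = mock.Mock()
broken.get.side_effect = ConnectionError("simulated outage")

assert fetch_status(broken) == "unknown"
broken.get.assert_called_once_with("/status")
```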
26. Recognise and exploit weaknesses
Observation skills
Knowledge of the underlying technology
Building models for failure
State transitions: unexpected states and incomplete transactions
Bug clusters
Finding faults without doing more testing
Intuitive testers: bloodhounds, breakers and bug hunters
Tool use
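The state-transition idea above can be made concrete: every transition *not* in the model's table is a candidate negative test for an unexpected state or incomplete transaction. The order-lifecycle states and events in this sketch are hypothetical.

```python
# Transition table: (current state, event) -> next state.
VALID = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("new", "cancel"): "cancelled",
}

def apply(state, event):
    try:
        return VALID[(state, event)]
    except KeyError:
        raise RuntimeError(f"illegal transition: {event!r} from {state!r}")

# Positive path through the model.
assert apply("new", "pay") == "paid"

# Negative tests: pairs absent from the table must be rejected,
# e.g. shipping an unpaid order or paying twice.
for state, event in [("new", "ship"), ("paid", "pay"), ("cancelled", "ship")]:
    try:
        apply(state, event)
    except RuntimeError:
        pass  # rejected as expected
    else:
        raise AssertionError(f"{event!r} from {state!r} should be illegal")
```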
27. When to stop
If the deadline or budget has some flexibility:
Stop testing when you are no longer finding significant new issues
Use requirement-based or functionality-based measurement to determine how much of the system has been observed under test
For projects with firm deadlines:
The information collected can be used to prioritise tests and justify any extension that may be required
The information can be given greater value and impact by aiming the tests at an assessment of failure modes and verification of the risk model
The number of negative tests can be limitless; negative testing is open-ended
Tackle the highest risks first
Use formal techniques to derive exception-handling tests
28. IN-CONFIDENCE
Populating required fields
Correspondence between data and field types
Field size test
Allowed data bounds and limits
Reasonable data
Embedded Single Quote
Date bound test
Web session test
Examples
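Several of the example checks above can be turned into concrete negative tests. This is a minimal sketch against a hypothetical `validate_user` function; the specific validation rules (50-character limit, 1900 lower bound, rejecting a single quote rather than escaping it) are invented for illustration only.

```python
from datetime import date

def validate_user(name, dob):
    """Validate a name and date of birth, rejecting bad input."""
    if not name:
        raise ValueError("name is required")          # required field
    if len(name) > 50:
        raise ValueError("name too long")             # field size
    if "'" in name:
        raise ValueError("quote not allowed")         # embedded single quote
    if not (date(1900, 1, 1) <= dob <= date.today()):
        raise ValueError("date out of bounds")        # date bound test
    return True

cases = [
    ("", date(1990, 1, 1)),        # missing required field
    ("x" * 51, date(1990, 1, 1)),  # over the field size limit
    ("O'Brien", date(1990, 1, 1)), # embedded single quote (classic SQL probe)
    ("Ada", date(1899, 12, 31)),   # just before the lower date bound
    ("Ada", date(3000, 1, 1)),     # beyond the upper date bound
]
for name, dob in cases:
    try:
        validate_user(name, dob)
    except ValueError:
        pass  # rejection is the expected result
    else:
        raise AssertionError(f"{(name, dob)} should have been rejected")

assert validate_user("Ada", date(1990, 1, 1))  # the positive case still works
```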
32.
Negative testing is a core skill of experienced testers and requires an opportunistic approach to get the best value from the time spent.
Negative testing can not only find significant failures, but can also produce invaluable strategic information about the risk model underlying testing and provide confidence in the overall quality of the system.
Negative testing is open-ended and hard to plan granularly. It needs to be managed proactively rather than over-planned.
Although negative testing is a powerful and effective approach, it is also a hard-to-manage task that has the potential to produce unwelcome information.
Attempts to ignore or exclude negative testing may need to be robustly opposed.
Editor’s notes
The purpose of the test team is to show, to a level of confidence, that a system works. Negative testing does the opposite; it seeks to show that software is not working, to dig and probe through the external weaknesses until it finds just the right way of making a bad situation worse, to hurt the system and watch it heal, or die. The two approaches are complementary, but have entirely different aims.
This covers the overall aims and management of negative testing, and describes a variety of techniques used to select, derive and execute negative tests.
Testing as a whole is reactive, open-ended, and hard to value before it has been done. Much of the blame for this can be laid at the door of negative testing – which is why, although it is an integral part of many different approaches, it is sometimes explicitly excluded from the scope of testing.
Formal techniques for test derivation can be used for negative testing, and are effective during design and unit testing. However, in later phases, negative testing is reactive, open-ended (see box), and can be hard to execute (see box). The reductionist approach of most formal techniques can result in a large number of time-consuming tests, with no real idea of coverage. A significant proportion of negative testing in these late stages will be semi-scripted or exploratory.
Negative testing is not a test design technique but rather an approach or test classification. It is possible to use many formal test design techniques to derive tests that can be classified as “negative tests”.
Test selection strategies guide the team in deciding which tests to plan and perform, and which to do first.
Negative testing is open ended, and selection strategies need to deal with the way that one test may reveal a new set of more important tests.
Any test has the potential to expose a weakness during execution. It is important that testers can recognise and exploit these weaknesses – either by adapting the current test, or with a newly designed test later in the process. Designing negative tests to exploit these observed weaknesses is, in essence, exploratory testing. This section covers ways that testers can more easily observe weaknesses, and find effective exploitations.
Any test has the potential to expose a weakness during execution. It is important that testers can recognise and exploit these weaknesses – either by adapting the current test, or with a newly designed test later in the process.
Negative testing comes in for a lot of flack. It is questioned, excluded and occasionally ridiculed. This attention and frequent self-justification means that it is often well aimed, and an effective way of finding good bugs.