MN
PM Tutorial
9/30/2013 1:00:00 PM

"Essential Test Management
and Planning"
Presented by:
Rick Craig
Software Quality Engineering

Brought to you by:

340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
Rick Craig
Software Quality Engineering
A consultant, lecturer, author, and test manager, Rick Craig has led numerous teams of testers
on both large and small projects. In his twenty-five years of consulting worldwide, Rick has
advised and supported a diverse group of organizations on many testing and test management
issues. From large insurance providers and telecommunications companies to smaller software
services companies, he has mentored senior software managers and helped test teams
improve their effectiveness.
© 2013 SQE Training V3.1

7
The IEEE has two definitions for “Quality”:
The degree to which a system, component, or process meets specified
requirements
The degree to which a system, component, or process meets customer or
user needs or expectations
The ISO (ISO 8402) defines “Quality” as:
The totality of features and characteristics of a product or service that bears
on its ability to meet stated or implied needs
Philip B. Crosby defines “Quality” as:
Conformance to requirements. Requirements must be clearly stated; measurements determine conformance, and nonconformance detected is the absence of quality.

8
Testing is the process of measuring quality.
Testing is a lifecycle process, not just a phase of the Software Development Life Cycle (SDLC) that occurs after the completion of coding.
The IEEE has two definitions for “testing”:
The process of operating a system or component under specified conditions,
observing or recording the results, and making an evaluation of some aspect
of the system or component
The process of analyzing a software item to detect the difference between
existing and required conditions (i.e., bugs) and to evaluate the features of
the software items
Unfortunately, implied requirements are very easy to get wrong. Often, for political reasons, requirements rarely have "bugs." When a requirement is deemed to be
incorrect, an “enhancement request” is typically raised rather than an incident/defect
report. Similarly, in the case of third-party development, the difference between a
defect and an enhancement may be a legal issue.

11
For testers to be effective, they have to work closely with the developers. Adopting a
“them and us” attitude typically results in the software product being delivered to
testing much later in the lifecycle and/or not meeting basic entrance criteria.

12
“The defect that is prevented doesn’t need repair, examination, or explanation. The
first step is to examine and adopt the attitude of defect prevention. This attitude is
called, symbolically, zero defects.”
— Philip Crosby: Quality is Free (1979)

Production bugs cost many times more than bugs discovered earlier in the lifecycle. In
some systems the factor may be 10, while in others it may be 1,000 or more. A
landmark study done by TRW, IBM, and Rockwell showed that a requirements bug
found in production cost on average 100+ times more than one discovered at the
beginning of the lifecycle.

13
Testing (at least in this course!) is not about perfection, only about reasonable risk.
The granularity required is both a business and technical issue.

15
A methodology (or method) is a process model composed of tasks, work products, and roles for consistently and cost-effectively achieving specified objectives.
Methodologies should be considered as dynamic guidelines that help the software
engineers do their jobs. Methodologies should be periodically reviewed and updated
based on the experiences of the development and testing staff. Inflexible
methodologies can lead to a disgruntled staff and complicate buy-in.

16
STEP™ is a testing methodology based on the IEEE guidelines. STEP™ treats testing as a lifecycle of activities that occurs in parallel with the software development lifecycle (SDLC).
Most testing is preventive testing and is divided into levels. A level is characterized by
the environment in which the testing occurs. The components of the test environment
include
• Who is doing the testing
• Hardware
• Software
• Data
• Interfaces
• etc.
FYI: The “Acquire” step can mean reusing existing testware or developing new test cases.
TIP: If your organization does not have a formal methodology in place, choose a pilot project to develop a Master Test Plan. Use this test plan as the basis of your methodology and then incrementally build upon the initial outline until you have developed a comprehensive customized methodology.

18
The Master Test Plan (MTP) should outline how many levels are going to be used and
how they are dependent upon each other.

19
A good testing methodology should embrace all of the points listed above.

20
The software lifecycle is a series of imperfect transformations.

22
From the FDA’s point of view: This is true if testing is regarded as a separate phase
conducted at the end of a traditional waterfall development cycle.
From SQE’s point of view: This is true when testing is involved throughout the
development lifecycle.
From our point of view: “Basically, no amount of testing at the end of the project will
make bad software good.”

24
A level is defined by the collection of hardware, software, documentation, people, and
processes that make up a specific testing effort.
A test manager may be responsible for a single level or potentially all of the levels
specified in the project’s Master Test Plan.

25
Unit, Integration, System, and Acceptance are the names used by the IEEE for the
levels (stages) of test planning. Many other terms also are used to describe these
levels.
NOTE: Some methods, processes, and terminology use the term “stage” instead of “level”.

26
How many levels is the right number?
Too many – consumes too many resources and often extends a development
cycle
Too few – too many defects may slip through
Wrong ones – consumes resources and allows too many defects to slip
through
Although there is no “golden rule,” most projects use between three and five levels.
Smaller projects may use only one level; large, life-dependent systems may have
many more.
FYI: The IEEE defines four levels of testing:

Acceptance
System
Integration
Unit

27
Acceptance Testing (the “glue”):
A set of tests that, when successfully executed, certify a system meets the
user’s expectations
Based on the requirements specifications (high-level tests)
Often written by the end user/client (can be a problem in a Web environment
or in shrink-wrapped software that will be used by millions of unknown users)
Ideally built before a single line of code is developed
Developed by or approved by the user representative prior to software
development
Sample test cases serve as models of the requirements
The acceptance test set serves as a model of the system
Changes, if necessary, must be negotiated – should use very formal
configuration management process
Ideally, should be short in duration compared to other levels of testing
May require significant resources to find/build realistic test data

28
System testing is typically the most extensive and time-consuming level of testing. It should be as comprehensive as time and resources allow. Acceptance testing is often a subset of system testing, but the biggest difference is who does the testing.
System testing considerations:
Corrections to defects found
New code integration
Devices and supporting equipment
Files and data
Large number of test cases:
Hundreds and even thousands not uncommon
Starts with functional testing
Includes test cases intended to create failures
Includes test cases designed to stress and even break the system
Focus on reliability and operations:
Will the system support operational use?
Security, backup, recovery etc.
A by-product of the system test should be the regression test set. A key deliverable,
the regression test set is typically a subset of the system’s test cases and should be
saved for testing future modifications.
TIP: Remember that requirements can be wrong.

29
A major project development decision that impacts testing is “who owns the interface.”
In other words, is the module “caller” or “callee” responsible for ensuring the interface
works? If changes to the interface need to be made, who has final say as to what
those changes are and when they are implemented?
Integration testing is difficult to stage manage. Strategies include
Top levels working down
Critical software first
Bottom levels working up
Functional capabilities
Build levels
Prototypes
FYI: Integration testing may be referred to as “string,” “thread,” or “build” testing. It is
often conducted in “stages” by the same or different groups of testers.

Example integration exit criteria:
Integration test cases are documented in accordance with corporate
standards
All test cases are run; X% must pass
No class 1 or 2 defects
X% statement coverage
Must pass the “smoke” test
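Exit criteria like these can be made mechanically checkable. Below is a minimal Python sketch of such a gate; the thresholds (95% pass rate, 80% statement coverage) and the field names of the results summary are illustrative assumptions, not values from the course.

```python
# Minimal sketch of an automated exit-criteria gate for integration testing.
# Thresholds and the shape of the results summary are illustrative assumptions.
def exit_criteria_met(results):
    """Return True if the example integration exit criteria are satisfied."""
    pass_rate = results["passed"] / results["run"] * 100
    return (
        results["run"] == results["total_cases"]   # all test cases are run
        and pass_rate >= 95                        # X% must pass (assume X = 95)
        and results["class1_defects"] == 0         # no class 1 defects
        and results["class2_defects"] == 0         # no class 2 defects
        and results["statement_coverage"] >= 80    # X% statement coverage (assume 80)
        and results["smoke_test_passed"]           # must pass the "smoke" test
    )

summary = {"total_cases": 120, "run": 120, "passed": 117,
           "class1_defects": 0, "class2_defects": 0,
           "statement_coverage": 84, "smoke_test_passed": True}
```

A gate like this makes the entrance negotiation between development and test concrete: the numbers are agreed up front rather than argued at handover.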

30
Unit testing is the validation of a program module independent from any other portion
of the system. The unit test is the initial test of a module. It demonstrates that the
module is both functionally and technically sound and is ready to be used as a building
block for the application. It is often accomplished with the aid of stub and driver
modules which simulate the activity of related modules.
Unit testing is typically a development responsibility, but testing must help. The testing
team can provide help and guidance in any of the following ways:
Determining the purpose of the testing activity and why it is difficult
Analyzing programs to identify test cases
Defining what is good and bad testing
Explaining how to create test case specifications
Defining test execution and evaluation procedures
Itemizing what records and documentation to retain
Discussing the importance of re-testing and the concept of the test data set
TIP: Although management support is key, inspections, walkthroughs, and code reviews typically are more beneficial if management is not present during the actual review.
FYI: Inspections tend to be more formal than walkthroughs and therefore typically require more training for the participants.

32
The easiest way to organize the testing effort and recognize the many planning risks
and their associated contingencies (and thereby reduce the project's overall risk) is to
use a Master Test Plan (MTP). The test manager should think of the Master Test Plan
as one of his or her major communication channels with all project participants. A
Master Test Plan ties together all the separate levels into a single cohesive effort.

33
The written test plan should be a by-product of the test process.
The IEEE defines a (master) test plan as:
A document describing the scope, approach, resources, and schedule of intended
testing activities. It identifies the test items, the features to be tested, the testing tasks,
who will do each task, and any risks requiring contingency planning.
The Master Test Plan is obviously a document, but more importantly it is a thought
process. It is a way to get involvement (and have buy-in) from all parties on how
testing will occur. If a Master Test Plan is created and no one uses it, did it really help?
The creation of the Master Test Plan should generally start as early as possible,
ideally in the early stages of project development and/or requirements formulation.

34
Obviously, the first question you must ask yourself when creating a test plan is “Who is
my audience?” The audience for a unit test plan is quite different from the audience for
an acceptance test plan or a Master Test Plan—so the wording, use of acronyms,
technical terms, and jargon should be adjusted accordingly.
Keep in mind that various audiences have different tolerances for what they will and
will not read. Executives may not be willing to read an entire master test plan if it is
fifty pages long, so you may have to consider an executive summary. Come to think of it, you might want to avoid making the plan prohibitively long or no one will read
(or use) it. If your plan is too long, it may be necessary to break it into several plans of
reduced scope (possibly based around subsystems or functionality). Sometimes, the
size of plans can be kept in check by the judicious use of references. But please
proceed carefully—most people don’t really want to gather a stack of documents just
so they can read a single plan.
The audience of a Master Test Plan usually includes developers, testers, users, the
project sponsor, and other stakeholders.
Often, the author of a Master Test Plan will be the manager of the test group (if one
exists), but it also could be the project manager (the MTP should ultimately form part
of the project plan) or the user’s technical representative.

35
This is the outline of the Master Test Plan template as defined in the IEEE 829-2008
standard.
The IEEE templates should be thought of as guidelines only. Feel free to change, add, or delete sections as you see fit. The template on the next page is the one I usually
use. It combines most of the things found in the template on this page with some of
the sections the IEEE 829-2008 only includes in the level-specific test plan template.

36
What is it?
A document (or series of documents) that is outlined during project planning and is
expanded and reviewed during a project to guide and control all testing efforts within
the project.
Why have it? It is the primary means by which the test manager exerts influence by:
Raising testing issues
Defining testing work
Coordinating the work of others
Gaining management approval
Controlling what happens
Note that item 6 “Software Risks” and item 7 “Planning Risks and Contingencies”
appear as a single section in the IEEE template.
TIP: A table of contents (TOC), glossary, and index make good additions to the IEEE
standard test plan. Risks and contingencies are often restricted to just planning risks
and contingencies. Some organizations have a section called “Assumptions.”
Assumptions that do not occur are really planning risks. The IEEE template should be
considered only a guide. Sections should be changed, added, or deleted to meet your
organization’s objectives. In some cases, the plan may only be a checklist or even
verbal.
The above outline is derived from the IEEE 829.

37
1 ― Test Plan Identifier
In order to keep track of the most current version of your test plan, you will want to
assign it an identifying number. If you have a standard documentation control system
in your organization, then assigning numbers is second nature to you.
TIP: When auditing the testing practices of an organization, always check for the test plan identifier. If there isn’t one, that usually means that the plan was created but never changed (and quite probably never used). The MTP should itself also be the subject of configuration management.
2 ― Introduction
The introduction should at least cover:
A basic description of the project or release including key features, history,
etc. (scope of the project)
An introduction to the plan that describes the scope of the plan (what levels,
etc.)

38
3 ― Test Items
This section describes programmatically what is to be tested. If this is a master test
plan, this section might talk in very broad terms: “version 2.2 of the accounting
software,” “version 1.2 of the user’s manual,” or “version 4.5 of the requirements spec.”
If this is an integration or unit test plan, this section might actually list the programs to
be tested, if known. This section should usually be completed in collaboration with the
configuration or library manager.
FYI: Many MTPs refer to a particular internal “build” of an application rather than the public version number.

39
4 ― Features to be Tested
This is a listing of what will be tested from the user or customer point of view (as
opposed to test items, which are a measure of what to test from the viewpoint of the
developer or library manager). For example, if you were system testing an Automated
Teller Machine (ATM), features to be tested might include:
Password validation
Withdraw money
Deposit checks
Transfer funds
Balance inquiries, etc.
NOTE: The features to be tested might be much more detailed for lower levels of test.

5 ― Features Not to Be Tested
This section is used to record any features that will not be tested and why. There are
many reasons that a particular feature might not be tested (e.g., it wasn’t changed, it
is not yet available for use, it has a good track record, etc.). Whatever the reason a
feature is listed in this section, it all boils down to relatively low risk. Even features that
are to be shipped but not yet “turned on” and available for use pose at least a certain
degree of risk, especially if no testing is done on them. This section will certainly raise
a few eyebrows among managers and users (many of whom cannot imagine
consciously deciding not to test a feature), so be careful to document the reason you
decided not to test a particular feature.

40
6 ― Risk Analysis
This section breaks risk analysis into two parts:
Software or Product Risks
Project or Planning Risks

Note: The ISTQB uses the words Product and Project Risk rather than the terms
Software and Planning Risks.

42
The purpose of discussing software risk is to determine what the primary focus of
testing should be. Generally speaking, most organizations find that their resources are
inadequate to test everything in a given release. Outlining software risks helps the
testers prioritize what to test and allows them to concentrate on those areas that are
likely to fail or have a large impact on the customer if they do fail. Organizations that
work on safety-critical software usually can use the information from their safety and
hazard analysis here. However in many other companies no attempt is made to
verbalize software risks in any fashion. If your company does not currently do any type
of risk analysis, try a brainstorming session among a small group of users, developers,
and testers to identify their concerns.
The outcome of the software risk analysis should directly impact what you test and in
what order you test. Risk analysis is hard, especially the first time you try it, but you
will get better at it—and it’s definitely worth the effort. Often, it’s a lot more important
what you test than how much you test.

43
Step 1 – Make an inventory of the system's features and attributes.
The level of detail of the inventory is based upon the resources available for the risk
assessment and the detail of the test (i.e., system test is more detailed than
acceptance test). All features/attributes do not necessarily have to be at the same
level of detail.
FYI: A feature is a user function; an attribute is a system characteristic.

44
Step 2 – Determine the likelihood of the feature or attribute failing.
Once the inventory has been built, the next step is to assign a “likelihood of something
going wrong” to each of the features and attributes identified in the inventory (this is
often achieved by conducting a “brainstorming” session). While some organizations
like to use percentages, number of days/years between occurrences, or even
probability “half lives,” often a set of simple categories such as the ones listed in the slide above provides sufficient accuracy.
If the likelihood of something going wrong is none or zero, then this item may be
removed from the analysis. However, the removal should be documented.
Step 3 – Determine the impact on the business (not just the IT department) if the
feature or attribute were to fail.
If the impact of the feature or attribute failing is trivial (or even beneficial), then this
item may be removed from the analysis. Again, the removal should be documented.
NOTE: While testers, developers, and customer support representatives may have the best “gut feel” for determining which features or attributes are most likely to fail, it is often the line of business (LOB) managers who have the best handle on how big a business impact a failure could cause.

45
Step 4 – Determine the “1st cut” testing priority by multiplying the likelihood and
business impact.
Multiplying the likelihood and the impact will determine which items have the highest
risk. This information then can be used to determine which test cases should be given
the highest priority/extensiveness.
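The "1st cut" priority calculation from Steps 2 through 4 can be sketched in a few lines. The feature names echo the ATM example used earlier in the plan outline; the likelihood/impact ratings and the category weights are invented for illustration:

```python
# Sketch of risk prioritization (Steps 2-4). Feature names echo the ATM
# example; likelihood/impact ratings and their weights are invented.
LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
IMPACT = {"low": 1, "medium": 2, "high": 3}

inventory = [
    ("Withdraw money",      "high",   "high"),
    ("Password validation", "medium", "high"),
    ("Transfer funds",      "medium", "medium"),
    ("Deposit checks",      "low",    "medium"),
    ("Balance inquiries",   "low",    "low"),
]

def prioritize(items):
    """Priority = likelihood x impact; highest-risk items first."""
    scored = [(name, LIKELIHOOD[like] * IMPACT[imp]) for name, like, imp in items]
    return sorted(scored, key=lambda row: row[1], reverse=True)
```

The simple 1-to-3 scale is a design choice: as the syllabus excerpt below notes, probabilities and damage usually cannot be quantified precisely, so coarse categories are often all the accuracy the data supports.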
From the ISTQB Syllabus:
Risk can be quantified mathematically when the probability of the occurrence of the
risk (P) and the corresponding damage (D) can be quantitatively represented. The risk
is calculated from the formula P*D. In most cases the probability and damage cannot
be quantified, rather only the tendencies are assignable (e.g., high probability, low
probability, higher damage, average damage, etc.)
The risk is defined as a graduation within a number of classes or categories.
If there are no dependable metrics available, then the analysis is based on personal
perceptions, and the results differ, depending on the person making the judgment. For
example, the project manager, developer, tester and users all may have different
perceptions of risk.
The degree of insecurity should be recognizable from the results of the risk analysis,
which was used to evaluate the risk.

46
Web Site Attribute: Business Impact
Spelling mistakes: Low (projects bad image)
Invalid mail-to: Medium (loss of business)
Viruses received via email: Medium (lost time)
Wrong telephone #s: High (loss of business)
Slow performance: High (loss of business)
Poor usability: Medium (some loss of business)
Ugly site: Medium (projects bad image)
Does not work with Browser X: High (loss of business)
Hacker spam attack: Medium (server temporarily down)
Site intrusion: High (unknown)

47
Once the items have been prioritized, they can be sorted. Sorting the list of features
and attributes provides a clear view of which items need the most attention.
TIP: Consider entering the data into a software tool that is “sort friendly” (e.g., use Excel instead of Word).

48
If time or resources are an issue, then the priority associated with each feature or
attribute can be used to determine which test cases should be created and/or run.
TIP: Used wisely, a prioritized inventory with a “cut off” point can be powerful when negotiating with senior management.
In addition to using the Risk Analysis to determine Test Case/Run priority, the Risk
Analysis can be used as a starting point for identifying failure points and subsequently
designing test cases to specifically exercise the suspected failure points. This
technique often is used by organizations with extremely low risk tolerances (e.g.,
medical device manufacturers, the military, and space agencies).
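Used this way, the cut off is simply a filter over the prioritized inventory. A sketch, with invented priority scores and an assumed cutoff value:

```python
# Sketch: applying a resource-driven "cut off" to a prioritized inventory.
# The scores and the cutoff value are invented for illustration.
prioritized = [
    ("Withdraw money", 9),
    ("Password validation", 6),
    ("Transfer funds", 4),
    ("Deposit checks", 2),
    ("Balance inquiries", 1),
]

CUTOFF = 4  # assumed agreement with management: items below this are not tested

in_scope = [name for name, score in prioritized if score >= CUTOFF]
deferred = [name for name, score in prioritized if score < CUTOFF]
```

Documenting the deferred list (rather than silently dropping it) is what makes the cutoff defensible in the "Features Not to Be Tested" section of the plan.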

49
7 – Planning Risks and Contingencies
Planning risk can be anything that adversely affects the planned testing effort
(schedule, completeness, quality, etc.)
The ISTQB refers to these as project risks.

50
The purpose of identifying planning risks is to allow contingency plans to be developed ahead of time and be ready for implementation in case the event occurs.
Examples of Planning Risks:

Risk: Project start time is slightly delayed, but the delivery date has not changed
Contingency: Staff works overtime
Prerequisites: Overtime is approved by senior management, and staff have stated willingness to work overtime

Risk: Microsoft releases a new version of browser halfway through testing (and the delivery date has not changed)
Contingency: Don’t run some of the lower priority test cases for the Web site and re-run the standard smoke test with the new browser

Risk: Entire testing staff wins state lottery
Contingency: Make sure you are in the syndicate

51
There are many contingencies to consider, but in most cases they will all fall into one
of the categories shown above. For example, reducing testing or development time is
the same as reducing quality, while increasing resources could include users,
developers, contractors, or just overtime, etc.
Many organizations have made a big show of announcing their commitment to quality
with quality circles, quality management, total quality management (TQM), etc.
Unfortunately, in the software world many of these same organizations have
demonstrated that their only true commitment is to the schedule.
Many software projects have schedules that are at best ambitious and at worst
impossible. Once an implementation date is set, it is often considered sacred.
Customers may have been promised a product on a certain date; management
credibility is on the line; corporate reputation is at stake; or the competitors may be
breathing down a company’s neck. At the same time, an organization may have
stretched its resources to the limit. It is not the purpose of this course to address the
many reasons why test managers so often find themselves in this unenviable spot but
to discuss what you can do about it.

52
8 ― Approach
Some of these example strategies may not be applicable for every organization or
project.
Since this section is the heart of the test plan, some companies choose to label it
“strategy” rather than “approach.” The approach should contain a description of how
testing will be done (approach) and discuss any issues that have a major impact on
the success of testing and ultimately of the project (strategy). For a master test plan,
the approach to be taken for each level should be discussed including the entrance
and exit criteria from one level to another.
EXAMPLE: System testing will take place in the test labs in our London office. The testing effort will be under the direction of the London VV&T team, with support from
the development staff and users in our New York office. An extract of production data
from an entire month will be used for the entire testing effort. Test plans, test design
specifications, and test case specifications will be developed using the IEEE/ANSI
guidelines. All tests will be captured using a testing tool for subsequent regression
testing. Tests will be designed and run to test all features listed in section 4 of the
system test plan. Additionally, testing will be done in concert with our Paris office to
test the billing interface. Performance, security, load, reliability, and usability testing will
be included as part of the system test. Performance testing will begin as soon as the
system has achieved stability. All user documentation will be tested in the latter part of
the system test.

54
Many organizations use an “off-the-shelf” methodology; others have either created a
brand new methodology from scratch or have adapted somebody else’s methodology.
In the event that your organization does not have even a rudimentary process,
consider using your next project as a “pilot” project. The decisions, plans, and
documentation generated by this project can be used as a basis for future project
enhancement and improvement.
FYI: A European telecommunications company runs an annual “process sample” competition. The winning team’s documentation is used as the “sample” appendix in
the company’s process handbook. Along with the prestige that accompanies selection
as this year’s “model,” the team members also receive a cash prize.

56
Perhaps the two most important entrance and exit criteria for a test manager are
The exit criteria for unit/integration testing (i.e., What should development
have done/completed during its testing phase?)
The entrance criteria into system testing (i.e., What can the test group
expect?)

57
If you want to create a simple Web site consisting of only one HTML file, you only
need to upload that one file. On a typical Web site involving dozens, hundreds, or
even thousands of files, however, the process of uploading a Web site becomes more
complicated and time consuming, especially when the Web site runs applications that
need to be built themselves.
A common practice at several software companies is the “daily build and smoke test”
process. Every file is compiled, linked, and uploaded to a test Web site every day, and
the Web site is then put through a “smoke test,” a relatively simple check to see
whether the Web site “smokes” when it’s used.
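The daily build and smoke test cycle can be sketched as a small driver. The build step and the individual checks below are placeholders; a real project would invoke its own build tool and exercise real pages of the test site:

```python
# Sketch of a daily "build and smoke test" driver. The build step and the
# checks are placeholders; a real project would invoke its own build tool
# and exercise real pages of the test Web site.
def build_site():
    """Stand-in for compiling, linking, and uploading every file."""
    return True  # assume the nightly build succeeded

def smoke_checks():
    """Shallow checks: does the site 'smoke' when it is used at all?"""
    yield "home page loads", True
    yield "login form renders", True
    yield "search returns a result", True

def daily_build_and_smoke():
    """Run the build, then the smoke test; report overall status and failures."""
    if not build_site():
        return False, ["build failed"]
    failures = [name for name, ok in smoke_checks() if not ok]
    return not failures, failures
```

The point of the pattern is the daily cadence, not the depth of the checks: a broken build or a "smoking" site is discovered within a day of the change that caused it.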

58
Perhaps the most well-known form of coverage is code coverage. However, there are
other coverage measures:
Requirements coverage attempts to estimate the percentage of business
requirements that are being tested by the current test set.
Design coverage attempts to measure how much of the high level design is
being validated by the current test set.
Interface coverage attempts to estimate the percentage of module interfaces
that are being exercised by the current test set.
Code coverage attempts to measure the percentage of program statements,
branches, or paths that are being executed by the current test set. Code
coverage typically requires the assistance of a special tool.
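Requirements coverage, for instance, can be estimated from a traceability mapping between test cases and requirements. The IDs and the mapping below are invented for illustration:

```python
# Sketch: estimating requirements coverage from a test-to-requirement
# traceability mapping. The IDs and the mapping are invented.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

test_set = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-2"},
    "TC-03": {"REQ-3"},
}

def requirements_coverage(reqs, tests):
    """Return (percent of requirements exercised, set of untested requirements)."""
    covered = set().union(*tests.values()) & reqs
    return len(covered) / len(reqs) * 100, reqs - covered

pct, untested = requirements_coverage(requirements, test_set)
```

The untested set is often the more useful output: it points directly at gaps in the current test set.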

59
Another topic that should generally be discussed in the approach is how configuration
management will be handled during test. However, it is possible that this could be
handled in a document of its own in some companies.
Configuration management in this context includes change management as well as
the decision-making process used to prioritize bugs. Change management is
important because it is critical to keep track of the version of the software and related
documents that are being tested. There have been many woeful tales of companies
that have actually shipped the wrong (untested) version of the software.
Equally important is the process for reviewing, prioritizing, fixing, and re-testing bugs.
The test environment in some companies is controlled by the developers, which can
be very problematic for test groups. As a rule, programmers want to fix every bug
immediately. It’s as though the programmers feel that if they can fix the bug quickly
enough, it didn’t happen! Testers, on the other hand, are famous for saying that
“testing a spec is like walking on water; it helps if it’s frozen.” Obviously both of the
extremes are counterproductive. If every bug fix is re-implemented immediately, the
testers would never do anything but regression testing. Conversely, if the code is
frozen prematurely, eventually the tests will become unrealistic. The key is to agree on
a process for reviewing, fixing, and implementing bugs back into the test environment.
This process may be very informal during unit and integration test but will probably
need to be much more rigid at higher levels of test.

Another strategy issue that should probably be addressed in the test plan is the use of
tools and automation. Testing tools can be a benefit to the development and testing
staff, but they also can spell disaster if their use is not planned. Using some types of
tools can actually require more time to develop, implement, and run a test set the first
time than if the tests were run manually. Using tools, however, may save time during
regression testing, and other types of tools can pay time dividends from the very
beginning.
Rules of thumb for deciding which test cases to automate:
Repetitive tasks (e.g., regression testing)
Longer procedures
Tedious tasks (e.g., code coverage and complexity analysis)
Performance testing
Automate if the test will be run more than x times (where x might be 3, 4, 5, or more)
Automation issues:
Plan for how to support the methodology
Train in mechanics of tool
Ensure a stable application
Must configure environment
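The "run more than x times" rule of thumb can be backed by a simple break-even calculation. The effort figures in the example are assumptions for illustration only:

```python
import math

def breakeven_runs(manual_minutes, build_minutes, upkeep_minutes=0.0):
    """Smallest number of runs at which automating costs less than testing manually.

    Each automated run is assumed to cost `upkeep_minutes` (maintenance,
    triage of results); the cost of building the automation is paid once.
    """
    saving_per_run = manual_minutes - upkeep_minutes
    if saving_per_run <= 0:
        return None  # automation never pays off
    return math.ceil(build_minutes / saving_per_run)

# A 30-minute manual regression test that takes 4 hours to automate,
# with 5 minutes of upkeep per automated run:
print(breakeven_runs(manual_minutes=30, build_minutes=240, upkeep_minutes=5))  # 10
```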

Test tool realities:
Many testers are highly interested in tools, but either do not have the time or
do not want to apply effort to use them correctly.
Testers know nothing happens by magic but want to believe test tools will
solve all testing problems.
Tool use must be taught on an ongoing basis. Benefits and requirements of
each tool need to be understood by everyone.
Training must be followed up with assistance and support. Help should be
available by phone.
Tools must be integrated into routine procedures and processes. This
includes simplified job control, software interfaces, etc.

9 ― Item Pass/Fail Criteria
Just as every test case needs an expected result, each test item needs to have an
expected result. Typically, pass/fail criteria are expressed in terms of:
Percentage of test cases passed/failed
Number, type, severity, location of defects
Usability
Reliability
Stability
The exact criteria used will vary from level to level and organization to organization. If
you’ve never tried to do this before, you may find it a little frustrating the first time or
two. However, trying to specify “what is good enough” in advance can really help
crystallize the thinking of the various test planners and reduce contention later. If the
software developer is a contractor, this section of the MTP can even have legal
ramifications.
An extreme example of a design pass/fail criterion would be: when the number of bugs
reaches a certain predefined level, the entire design is scrapped and a new design is
developed from scratch.
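Criteria expressed this way can be made precise enough to evaluate mechanically. The thresholds below (95% pass rate, zero open critical defects) are invented for illustration; as noted above, real criteria vary by level and organization:

```python
# A sketch of machine-checkable item pass/fail criteria.
def item_passes(results, min_pass_rate=0.95, max_open_critical=0):
    """Evaluate one test item against predefined pass/fail criteria."""
    total = results["passed"] + results["failed"]
    pass_rate = results["passed"] / total if total else 0.0
    return (pass_rate >= min_pass_rate
            and results["open_critical"] <= max_open_critical)

cycle = {"passed": 96, "failed": 4, "open_critical": 0}
print(item_passes(cycle))  # 96% passed, no open critical defects -> True
```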

10 ― Suspension and Resumption Criteria
The purpose of this MTP section is to identify any conditions that warrant a temporary
suspension of all or some of the testing. Because test execution time is often so
hurried, testers have a tendency to surge forward no matter what happens.
Unfortunately, this often can lead to additional work and a great deal of frustration. For
example, if a group is testing some kind of communications network or switch, there
may come a time when it is no longer useful to continue testing a particular interface if
the protocol to be used is undefined or in flux.
Sometimes, metrics are established to flag a condition that warrants suspending
testing. For example, if a certain predefined number of total defects or defects of a
certain severity are encountered, testing may be halted until a determination can be
made whether to redesign part of the system or try an alternate approach, etc.
Sometimes, suspension criteria are displayed in the form of a Gantt chart (a Gantt chart
is a bar chart that illustrates a project schedule, including dependencies).

Examples of suspension criteria include:
The Web server hosting the Web site under test becomes unavailable
The software license for a key testing tool expires
Sample production data to be used for test data is unavailable
Key end-user personnel are unavailable
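A metric-based suspension trigger like the one described above can be expressed as a simple check run after each test cycle. The thresholds here are illustrative assumptions, not recommendations:

```python
# A sketch of a metric-based suspension check.
def should_suspend(defects_by_severity, total_limit=50, severity_limits=None):
    """Return True when defect counts cross a predefined suspension threshold."""
    severity_limits = severity_limits or {"critical": 1, "high": 10}
    if sum(defects_by_severity.values()) >= total_limit:
        return True
    return any(defects_by_severity.get(sev, 0) >= limit
               for sev, limit in severity_limits.items())

found = {"critical": 0, "high": 3, "medium": 12, "low": 20}
print(should_suspend(found))  # 35 total, no severity limit hit -> False
```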

11 ― Testing Deliverables
This is a listing of all documents, tools, and other elements that are to be developed
and maintained in support of the testing effort. Examples include: test plans, test
design specifications, test cases, custom tools, defect reports, test summary reports,
and simulators. The software to be tested is not a test deliverable; that is listed under
“Test Items.”

12 ― Testing Tasks
The IEEE defines this section of the Master Test Plan as:
Identify the set of tasks necessary to prepare for and perform testing. Identify all intertask dependencies and any special skills required.
This section can be used to keep a tally of tasks that need to be completed. It is useful
to assign responsibilities/support duties as well.
TIP: Once a task is complete, don't delete the task from the list. Instead, cross it off to
indicate to anyone unfamiliar with the project that the task has been completed and
not missed.
TIP: Embedding the test names and/or test IDs into the Master Test Plan will allow a
word processor to find where a particular test case is referenced much faster than a
manual “eyeball” search.

13 ― Environmental Needs
Hardware Configuration: An attempt should be made to make the platform as
similar to the real world system as possible. If the system is destined to be
run on multiple platforms, a decision must be made whether to replicate all of
these configurations or to replicate only targeted configurations (e.g., the
riskiest, the most common, etc.). When you’re determining the hardware
configuration, don’t forget the system software as well.
Data: Again, it is necessary to identify where the data will come from to
populate the test database/files. Choices might include production data,
purchased data, user-supplied data, generated data, and simulators. It will be
necessary to determine how to validate the data. You should not assume that
even production data is totally accurate. You must also assess the fragility of
the data so you know how often to update it!
Interfaces: When planning the test environment, it is very important to
determine and define all interfaces. Occasionally the systems that you must
interface with already exist; in other instances, they may not yet be ready and
all you have to work with is a design specification or some type of protocol. If
the interface is not already in existence, building a realistic simulator may be
part of your testing job.
Facilities, Publications, Security Access, etc.: This may seem trivial, but you
must ensure that you have somewhere to test, the appropriate security
clearances, and so forth.

14 ― Responsibilities
Using a matrix in this section of the MTP quickly shows major responsibilities such as
establishment of the test environment, configuration management, unit testing, and so
forth.
TIP: It is a good idea to specify the responsible parties by name or by organization.

15 ― Staffing and Training Needs
While the actual number of staff required is, of course, dependent on the scope of the
project, schedule, etc., this section of the MTP should be used to describe the number
of people required and what skills they need. You may simply want to say that you
need fifteen journeymen testers and five apprentice testers. Often, however, you will
have to be more specific. It is certainly acceptable to state that you need a special
person: “We must have Jane Doe to help establish a realistic test environment.”
Examples of training needs might include learning about:
How to use a tool
Testing methodologies
Interfacing systems
Management systems, such as defect tracking
Configuration management
Basic business knowledge (related to the system under test), etc.

16 ― Schedule
The schedule should be built around the milestones contained in the project plan, such
as delivery dates of various documents and modules, availability of resources, and
interfaces. Then, it will be necessary to add all of the testing milestones. These testing
milestones will differ in level of detail depending on the level of the test plan being
created. In a master test plan, milestones will be built around major events such as
requirements and design reviews, code delivery, completion of user manuals, and
availability of interfaces. In a unit test plan, most of the milestones will be based on the
completion of various programming specs and units.
Initially, it may be necessary to build a generic schedule without calendar dates. This
will identify the time required for various tasks and dependencies without specifying
start and finish dates. Normally, the schedule will be portrayed graphically using a
Gantt chart to show dependencies.
TIP: While doing the initial planning, use a start day of day zero, rather than a specific
date (e.g., May 14). Unfortunately, when specific dates are used, many reviewers
focus on the start and end dates and ignore the middle (i.e., the schedule).
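The day-zero technique can be kept in exactly that form: hold the schedule as relative offsets and bind calendar dates only once the start date is known. The milestone names and durations below are invented for illustration:

```python
from datetime import date, timedelta

# Milestones as (name, start offset in days from day zero, duration in days).
plan = [
    ("Test plan review",   0,  5),
    ("Environment ready",  5, 10),
    ("System test cycle", 15, 20),
]

def bind_dates(plan, day_zero):
    """Convert relative offsets into calendar start/end dates."""
    return [(name,
             day_zero + timedelta(days=start),
             day_zero + timedelta(days=start + dur))
            for name, start, dur in plan]

for name, start, end in bind_dates(plan, date(2013, 9, 30)):
    print(f"{name}: {start} -> {end}")
```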

17 ― Approvals
The approver should be the person (or persons) who can say that the software is
ready to move to the next stage. For example, the approver on a unit test plan might
be the development manager. The approvers on a system test plan might be the
person in charge of the system test and whoever is going to receive the product next
(which may be the customer, if they are going to be doing the acceptance testing). In
the case of the master test plan, there may be many approvers: developers, testers,
customers, QA, configuration management, etc.
You should try to avoid the situation in which you seek the appropriate signatures after
the plan has been completed. If you do get the various parties to sign at that time, all
you have is their autograph (which is fine if they ever become famous and you’re an
autograph collector). Instead, your goal should be to get agreement and commitment,
which means that the approvers should have been involved in the creation and/or
review of the plan during its development. It is part of your challenge as the test
planner to determine how to involve all of the approvers in the test planning process.
TIP: If you have trouble getting the right people involved in writing the test plan,
consider inviting them to a test planning meeting and then publishing the minutes of
the meeting as the first draft of the plan.

The purpose of the Test Summary Report is to summarize the results of the
designated testing activities and to provide evaluations based on these results.
The IEEE defines a Test Summary Report as being made up of the following sections:
Report Identifier:
Specify the unique identifier assigned to the Test Summary Report.
Summary:
Summarize the evaluation of the test items. Identify the items tested
indicating their versions/revision level. Indicate the environment in which the
testing activities took place. For each item, supply references to the following
documents (if they exist): test plan, test design specifications, test procedure
specifications, test item transmittal reports, test logs, and test incident
reports.
Variances:
Report any variances of the test items from their design specifications.
Indicate any variances from the test plan, test designs, or test procedures.
Specify the reason(s) for each variance.
Comprehensiveness Assessment:
Evaluate the comprehensiveness of the testing process against the
comprehensiveness criteria specified in the test plan, if the plan exists.
Identify features or feature combinations that were not sufficiently tested and
explain the reasons.


Transform Test Organizations for the New World of DevOpsTechWell
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipTechWell
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsTechWell
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GameTechWell
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsTechWell
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationTechWell
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessTechWell
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateTechWell
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessTechWell
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTechWell
 

More from TechWell (20)

Failing and Recovering
Failing and RecoveringFailing and Recovering
Failing and Recovering
 
Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization
 
Test Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTest Design for Fully Automated Build Architecture
Test Design for Fully Automated Build Architecture
 
System-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartSystem-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good Start
 
Build Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyBuild Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test Strategy
 
Testing Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for SuccessTesting Transformation: The Art and Science for Success
Testing Transformation: The Art and Science for Success
 
Implement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowImplement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlow
 
Develop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityDevelop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your Sanity
 
Ma 15
Ma 15Ma 15
Ma 15
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps Strategy
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOps
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—Leadership
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile Teams
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile Game
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps Implementation
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery Process
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to Automate
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for Success
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile Transformation
 

Recently uploaded

Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxLoriGlavin3
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfHyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfPrecisely
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxBkGupta21
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 

Recently uploaded (20)

Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdfHyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
Hyperautomation and AI/ML: A Strategy for Digital Transformation Success.pdf
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptx
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 

Essential Test Management and Planning

  • 1. MN PM Tutorial 9/30/2013 1:00:00 PM "Essential Test Management and Planning" Presented by: Rick Craig Software Quality Engineering Brought to you by: 340 Corporate Way, Suite 300, Orange Park, FL 32073 888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
  • 2. Rick Craig Software Quality Engineering A consultant, lecturer, author, and test manager, Rick Craig has led numerous teams of testers on both large and small projects. In his twenty-five years of consulting worldwide, Rick has advised and supported a diverse group of organizations on many testing and test management issues. From large insurance providers and telecommunications companies to smaller software services companies, he has mentored senior software managers and helped test teams improve their effectiveness.
  • 3. © 2013 SQE Training V3.1 1
  • 4. © 2013 SQE Training V3.1 4
  • 5. © 2013 SQE Training V3.1 5
  • 6. © 2013 SQE Training V3.1 6
  • 7. © 2013 SQE Training V3.1 7
  • 8. The IEEE has two definitions for “Quality”: The degree to which a system, component, or process meets specified requirements The degree to which a system, component, or process meets customer or user needs or expectations The ISO (ISO 8402) defines “Quality” as: The totality of features and characteristics of a product or service that bears on its ability to meet stated or implied needs Philip B. Crosby defines “Quality” as: Conformance to requirements. Requirements must be clearly stated. Measurements determine conformance; nonconformance detected is the absence of quality. © 2013 SQE Training V3.1 8
  • 9. Testing is the process of measuring quality. Testing is a lifecycle process, not just a phase of the Software Development Life Cycle (SDLC) that occurs after the completion of coding. The IEEE has two definitions for “testing”: The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component The process of analyzing a software item to detect the difference between existing and required conditions (i.e., bugs) and to evaluate the features of the software items Unfortunately, implied requirements are very easy to get wrong. Often, for political reasons, requirements rarely have “bugs”: when a requirement is deemed to be incorrect, an “enhancement request” is typically raised rather than an incident/defect report. Similarly, in the case of third-party development, the difference between a defect and an enhancement may be a legal issue. © 2013 SQE Training V3.1 9
  • 10. © 2013 SQE Training V3.1 10
  • 11. © 2013 SQE Training V3.1 11
  • 12. For testers to be effective, they have to work closely with the developers. Adopting a “them and us” attitude typically results in the software product being delivered to testing much later in the lifecycle and/or not meeting basic entrance criteria. © 2013 SQE Training V3.1 12
  • 13. “The defect that is prevented doesn’t need repair, examination, or explanation. The first step is to examine and adopt the attitude of defect prevention. This attitude is called, symbolically, zero defects.” — Philip Crosby: Quality is Free (1979) Production bugs cost many times more than bugs discovered earlier in the lifecycle. In some systems the factor may be 10, while in others it may be 1,000 or more. A landmark study done by TRW, IBM, and Rockwell showed that a requirements bug found in production cost on average 100+ times more than one discovered at the beginning of the lifecycle. © 2013 SQE Training V3.1 13
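The cost-escalation claim above lends itself to a quick back-of-the-envelope calculation. In this sketch only the roughly 100x requirements-to-production figure comes from the study cited; the intermediate per-phase multipliers and the base cost are illustrative assumptions.

```python
# Illustrative cost-of-defect calculation. Only the ~100x production
# multiplier is taken from the slide; the other multipliers are assumed
# for the sake of the example.
COST_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "system_test": 50,
    "production": 100,
}

def repair_cost(base_cost, phase_found):
    """Estimated repair cost for a requirements bug found in `phase_found`."""
    return base_cost * COST_MULTIPLIER[phase_found]

# A fix costing 200 at requirements time becomes 20,000 in production.
print(repair_cost(200, "production"))  # -> 20000
```

Whatever the exact multipliers are in a given shop, the shape of the curve is the argument for the preventive, whole-lifecycle testing this course advocates.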
  • 14. Testing (at least in this course!) is not about perfection, only about reasonable risk. The granularity required is both a business and technical issue. © 2013 SQE Training V3.1 14
  • 15. © 2013 SQE Training V3.1 15
  • 16. A methodology (or method) is a process model composed of tasks, work products, and roles for consistently and cost-effectively achieving specified objectives. Methodologies should be considered dynamic guidelines that help the software engineers do their jobs. Methodologies should be periodically reviewed and updated based on the experiences of the development and testing staff. Inflexible methodologies can lead to a disgruntled staff and complicate buy-in. © 2013 SQE Training V3.1 16
  • 17. STEP™ is a testing methodology based on the IEEE guidelines. STEP™ treats testing as a lifecycle of activities that occurs in parallel with the software development lifecycle (SDLC). Most testing is preventive testing and is divided into levels. A level is characterized by the environment in which the testing occurs. The components of the test environment include • Who is doing the testing • Hardware • Software • Data • Interfaces • etc. FYI: The “Acquire” step can mean reusing existing testware or developing new test cases. TIP: If your organization does not have a formal methodology in place, choose a pilot project to develop a Master Test Plan. Use this test plan as the basis of your methodology and then incrementally build upon the initial outline until you have developed a comprehensive customized methodology. © 2013 SQE Training V3.1 17
  • 18. © 2013 SQE Training V3.1 18
  • 19. The Master Test Plan (MTP) should outline how many levels are going to be used and how they are dependent upon each other. © 2013 SQE Training V3.1 19
  • 20. A good testing methodology should embrace all of the points listed above. © 2013 SQE Training V3.1 20
  • 21. The software lifecycle is a series of imperfect transformations. © 2013 SQE Training V3.1 21
  • 22. © 2013 SQE Training V3.1 22
  • 23. From the FDA’s point of view: This is true if testing is regarded as a separate phase conducted at the end of a traditional waterfall development cycle. From SQE’s point of view: This is true when testing is involved throughout the development lifecycle. From our point of view: “Basically, no amount of testing at the end of the project will make bad software good.” © 2013 SQE Training V3.1 23
  • 24. © 2013 SQE Training V3.1 24
  • 25. A level is defined by the collection of hardware, software, documentation, people, and processes that make up a specific testing effort. A test manager may be responsible for a single level or potentially all of the levels specified in the project’s Master Test Plan. © 2013 SQE Training V3.1 25
  • 26. Unit, Integration, System, and Acceptance are the names used by the IEEE for the levels (stages) of test planning. Many other terms also are used to describe these levels. NOTE: Some methods, processes, and terminology use the term “stage” instead of “level”. © 2013 SQE Training V3.1 26
  • 27. How many levels is the right number? Too many – consumes too many resources and often extends a development cycle Too few – too many defects may slip through Wrong ones – consumes resources and allows too many defects to slip through Although there is no “golden rule,” most projects use between three and five levels. Smaller projects may use only one level; large, life-dependent systems may have many more. FYI: The IEEE defines four levels of testing: Acceptance System Integration Unit © 2013 SQE Training V3.1 27
  • 28. Acceptance Testing (the “glue”): A set of tests that, when successfully executed, certify a system meets the user’s expectations Based on the requirements specifications (high-level tests) Often written by the end user/client (can be a problem in a Web environment or in shrink-wrapped software that will be used by millions of unknown users) Ideally built before a single line of code is developed Developed by or approved by the user representative prior to software development Sample test cases serve as models of the requirements The acceptance test set serves as a model of the system Changes, if necessary, must be negotiated – should use very formal configuration management process Ideally, should be short in duration compared to other levels of testing May require significant resources to find/build realistic test data © 2013 SQE Training V3.1 28
  • 29. Typically the most extensive and time-consuming level of testing. It should be as comprehensive as time and resources allow. Acceptance testing is often a subset of system testing, but the biggest difference is who does the testing. System testing considerations: Corrections to defects found New code integration Devices and supporting equipment Files and data Large number of test cases: Hundreds and even thousands not uncommon Starts with functional testing Includes test cases intended to create failures Includes test cases designed to stress and even break the system Focus on reliability and operations: Will the system support operational use? Security, backup, recovery, etc. A by-product of the system test should be the regression test set. A key deliverable, the regression test set is typically a subset of the system’s test cases and should be saved for testing future modifications. TIP: Remember that requirements can be wrong. © 2013 SQE Training V3.1 29
  • 30. A major project development decision that impacts testing is “who owns the interface.” In other words, is the module “caller” or “callee” responsible for ensuring the interface works? If changes to the interface need to be made, who has final say as to what those changes are and when they are implemented? Integration testing is difficult to stage manage. Strategies include Top levels working down Critical software first Bottom levels working up Functional capabilities Build levels Prototypes FYI: Integration testing may be referred to as “string,” “thread,” or “build” testing. It is often conducted in “stages” by the same or different groups of testers. Example integration exit criteria: Integration test cases are documented in accordance with corporate standards All test cases are run; X% must pass No class 1 or 2 defects X% statement coverage Must pass the “smoke” test © 2013 SQE Training V3.1 30
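The example integration exit criteria listed above can be expressed as a mechanical gate check. The sketch below is a minimal illustration; the field names, thresholds, and severity scale (classes 1 and 2 treated as the serious defects) are assumptions for this example, not part of any standard.

```python
def integration_exit_met(results, min_pass_rate=0.95, min_coverage=0.80):
    """Return True when the example exit criteria are all satisfied.

    `results` is assumed (hypothetically) to carry total/passed test
    counts, open defect severities, statement coverage, and a smoke-test
    flag, mirroring the criteria on the slide.
    """
    pass_rate = results["passed"] / results["total"]
    # No open class 1 or 2 defects (lower number = more severe here).
    no_blockers = all(sev > 2 for sev in results["open_defect_severities"])
    return (
        pass_rate >= min_pass_rate
        and no_blockers
        and results["statement_coverage"] >= min_coverage
        and results["smoke_test_passed"]
    )

print(integration_exit_met({
    "total": 200, "passed": 194,
    "open_defect_severities": [3, 4],
    "statement_coverage": 0.86,
    "smoke_test_passed": True,
}))  # -> True
```

The point of encoding the criteria is less the automation than the precision: a gate that can be evaluated mechanically is a gate everyone agreed on in writing.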
  • 31. Unit testing is the validation of a program module independent from any other portion of the system. The unit test is the initial test of a module. It demonstrates that the module is both functionally and technically sound and is ready to be used as a building block for the application. It is often accomplished with the aid of stub and driver modules which simulate the activity of related modules. Unit testing is typically a development responsibility, but testing must help. The testing team can provide help and guidance in any of the following ways: Determining the purpose of the testing activity and why it is difficult Analyzing programs to identify test cases Defining what is good and bad testing Explaining how to create test case specifications Defining test execution and evaluation procedures Itemizing what records and documentation to retain Discussing the importance of re-testing and the concept of the test data set TIP: Although management support is key, inspections, walkthroughs, and code reviews typically are more beneficial if management is not present during the actual review. FYI: Inspections tend to be more formal than walkthroughs and therefore typically require more training for the participants. © 2013 SQE Training V3.1 31
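The stub idea described above can be sketched with Python's built-in unittest and unittest.mock. Every name here (the fee function, the rate service) is hypothetical, invented to show a unit tested independently of a related module that may not exist yet.

```python
import unittest
from unittest import mock

# Hypothetical unit under test: it depends on a "rate service" that, in
# the real system, would be another (possibly unfinished) module.
def compute_fee(amount, rate_service):
    """Apply the collaborating module's current rate to an amount."""
    return round(amount * rate_service.current_rate(), 2)

class ComputeFeeUnitTest(unittest.TestCase):
    def test_fee_uses_rate_from_stub(self):
        # The stub simulates the related module, so this test exercises
        # compute_fee in isolation from the rest of the system.
        stub = mock.Mock()
        stub.current_rate.return_value = 0.05
        self.assertEqual(compute_fee(200.0, stub), 10.0)
```

Run with `python -m unittest` against the file containing the test; the unittest runner plays the role of the driver module.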
  • 32. © 2013 SQE Training V3.1 32
  • 33. The easiest way to organize the testing effort and recognize the many planning risks and their associated contingencies (and thereby reduce the project’s overall risk) is to use a Master Test Plan (MTP). The test manager should think of the Master Test Plan as one of his or her major communication channels with all project participants. A Master Test Plan ties together all the separate levels into a single cohesive effort. © 2013 SQE Training V3.1 33
  • 34. The written test plan should be a by-product of the test process. The IEEE defines a (master) test plan as: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies the test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. The Master Test Plan is obviously a document, but more importantly it is a thought process. It is a way to get involvement (and have buy-in) from all parties on how testing will occur. If a Master Test Plan is created and no one uses it, did it really help? The creation of the Master Test Plan should generally start as early as possible, ideally in the early stages of project development and/or requirements formulation. © 2013 SQE Training V3.1 34
  • 35. Obviously, the first question you must ask yourself when creating a test plan is “Who is my audience?” The audience for a unit test plan is quite different from the audience for an acceptance test plan or a Master Test Plan—so the wording, use of acronyms, technical terms, and jargon should be adjusted accordingly. Keep in mind that various audiences have different tolerances for what they will and will not read. Executives may not be willing to read an entire master test plan if it is fifty pages long, so you may have to consider an executive summary. Come to think of it, you might want to avoid making the plan prohibitively long or no one will read (or use) it. If your plan is too long, it may be necessary to break it into several plans of reduced scope (possibly based around subsystems or functionality). Sometimes, the size of plans can be kept in check by the judicious use of references. But please proceed carefully—most people don’t really want to gather a stack of documents just so they can read a single plan. The audience of a Master Test Plan usually includes developers, testers, users, the project sponsor, and other stakeholders. Often, the author of a Master Test Plan will be the manager of the test group (if one exists), but it also could be the project manager (the MTP should ultimately form part of the project plan) or the user’s technical representative. © 2013 SQE Training V3.1 35
  • 36. This is the outline of the Master Test Plan template as defined in the IEEE 829-2008 standard. The IEEE templates should be thought of as guidelines only. Feel free to change, add, delete sections as you see fit. The template on the next page is the one I usually use. It combines most of the things found in the template on this page with some of the sections the IEEE 829-2008 only includes in the level-specific test plan template. © 2013 SQE Training V3.1 36
  • 37. What is it? A document (or series of documents) that is outlined during project planning and is expanded and reviewed during a project to guide and control all testing efforts within the project. Why have it? It is the primary means by which the test manager exerts influence by: Raising testing issues Defining testing work Coordinating the work of others Gaining management approval Controlling what happens Note that item 6 “Software Risks” and item 7 “Planning Risks and Contingencies” appear as a single section in the IEEE template. TIP: A table of contents (TOC), glossary, and index make good additions to the IEEE standard test plan. Risks and contingencies are often restricted to just planning risks and contingencies. Some organizations have a section called “Assumptions.” Assumptions that do not occur are really planning risks. The IEEE template should be considered only a guide. Sections should be changed, added, or deleted to meet your organization’s objectives. In some cases, the plan may only be a checklist or even verbal. The above outline is derived from the IEEE 829. © 2013 SQE Training V3.1 37
  • 38. 1 ― Test Plan Identifier
In order to keep track of the most current version of your test plan, you will want to assign it an identifying number. If you have a standard documentation control system in your organization, then assigning numbers is second nature to you.
TIP: When auditing the testing practices of an organization, always check for the test plan identifier. If there isn’t one, that usually means that the plan was created but never changed (and quite probably never used). The MTP should itself also be the subject of configuration management.
2 ― Introduction
The introduction should at least cover:
- A basic description of the project or release, including key features, history, etc. (the scope of the project)
- An introduction to the plan that describes the scope of the plan (what levels it covers, etc.)
  • 39. 3 ― Test Items
This section describes programmatically what is to be tested. If this is a master test plan, this section might talk in very broad terms: “version 2.2 of the accounting software,” “version 1.2 of the user’s manual,” or “version 4.5 of the requirements spec.” If this is an integration or unit test plan, this section might actually list the programs to be tested, if known. This section should usually be completed in collaboration with the configuration or library manager.
FYI: Many MTPs refer to a particular internal “build” of an application rather than the public version number.
  • 40. 4 ― Features to Be Tested
This is a listing of what will be tested from the user or customer point of view (as opposed to test items, which describe what to test from the viewpoint of the developer or library manager). For example, if you were system testing an Automated Teller Machine (ATM), features to be tested might include:
- Password validation
- Withdraw money
- Deposit checks
- Transfer funds
- Balance inquiries, etc.
NOTE: The features to be tested might be much more detailed for lower levels of test.
5 ― Features Not to Be Tested
This section is used to record any features that will not be tested and why. There are many reasons that a particular feature might not be tested (e.g., it wasn’t changed, it is not yet available for use, it has a good track record). Whatever the reason a feature is listed in this section, it all boils down to relatively low risk. Even features that are to be shipped but not yet “turned on” and available for use pose at least some risk, especially if no testing is done on them. This section will certainly raise a few eyebrows among managers and users (many of whom cannot imagine consciously deciding not to test a feature), so be careful to document the reason you decided not to test a particular feature.
  • 41. 6 ― Risk Analysis
This section breaks risk analysis into two parts:
- Software or Product Risks
- Project or Planning Risks
Note: The ISTQB uses the terms Product and Project Risk rather than Software and Planning Risks.
  • 42. The purpose of discussing software risk is to determine what the primary focus of testing should be. Generally speaking, most organizations find that their resources are inadequate to test everything in a given release. Outlining software risks helps the testers prioritize what to test and allows them to concentrate on those areas that are likely to fail or have a large impact on the customer if they do fail. Organizations that work on safety-critical software usually can use the information from their safety and hazard analysis here. However, in many other companies no attempt is made to articulate software risks in any fashion. If your company does not currently do any type of risk analysis, try a brainstorming session among a small group of users, developers, and testers to identify their concerns. The outcome of the software risk analysis should directly impact what you test and in what order. Risk analysis is hard, especially the first time you try it, but you will get better at it—and it’s definitely worth the effort. Often, what you test matters a lot more than how much you test.
  • 43. Step 1 – Make an inventory of the system's features and attributes. The level of detail of the inventory is based upon the resources available for the risk assessment and the detail of the test (i.e., system test is more detailed than acceptance test). Not all features/attributes necessarily have to be at the same level of detail.
FYI: A feature is a user function; an attribute is a system characteristic.
  • 44. Step 2 – Determine the likelihood of the feature or attribute failing. Once the inventory has been built, the next step is to assign a “likelihood of something going wrong” to each of the features and attributes identified in the inventory (this is often achieved by conducting a “brainstorming” session). While some organizations like to use percentages, number of days/years between occurrences, or even probability “half lives,” a set of simple categories such as the ones listed in the slide above often provides sufficient accuracy. If the likelihood of something going wrong is none or zero, then this item may be removed from the analysis. However, the removal should be documented.
Step 3 – Determine the impact on the business (not just the IT department) if the feature or attribute were to fail. If the impact of the feature or attribute failing is trivial (or even beneficial), then this item may be removed from the analysis. Again, the removal should be documented.
NOTE: While testers, developers, and customer support representatives may have the best “gut feel” for determining which features or attributes are most likely to fail, it is often the line of business (LOB) managers who have the best handle on how big a business impact a failure could cause.
  • 45. Step 4 – Determine the “1st cut” testing priority by multiplying the likelihood and business impact. Multiplying the likelihood and the impact will determine which items have the highest risk. This information can then be used to determine which test cases should be given the highest priority/extensiveness.
From the ISTQB Syllabus: Risk can be quantified mathematically when the probability of the occurrence of the risk (P) and the corresponding damage (D) can be quantitatively represented. The risk is calculated from the formula P*D. In most cases the probability and damage cannot be quantified; rather, only tendencies are assignable (e.g., high probability, low probability, higher damage, average damage), and the risk is defined as a graduation within a number of classes or categories. If there are no dependable metrics available, then the analysis is based on personal perceptions, and the results differ depending on the person making the judgment. For example, the project manager, developer, tester, and users all may have different perceptions of risk. This degree of uncertainty should be recognizable from the results of the risk analysis.
  • 46. Example risk analysis of a Web site (business impact per attribute):
Web Site Attribute             | Business Impact
Spelling mistakes              | Low (projects bad image)
Invalid mail-to                | Medium (loss of business)
Viruses received via email     | Medium (lost time)
Wrong telephone #s             | High (loss of business)
Slow performance               | High (loss of business)
Poor usability                 | Medium (some loss of business)
Ugly site                      | Medium (projects bad image)
Does not work with Browser X   | High (loss of business)
Hacker spam attack             | Medium (server temporarily down)
Site intrusion                 | High (unknown)
  • 47. Once the items have been prioritized, they can be sorted. Sorting the list of features and attributes provides a clear view of which items need the most attention. TIP: Consider entering the data into a software tool that is “sort friendly” (e.g., use Excel instead of Word).
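The priority calculation and sort described above can be sketched in a few lines of Python. The attribute names come from the Web site example; the numeric likelihood and impact scores (1 = low, 2 = medium, 3 = high) are illustrative assumptions only, not figures from the course.

```python
# Illustrative risk analysis: priority = likelihood x business impact.
# Scores use an assumed 1-3 scale (1 = low, 2 = medium, 3 = high).
inventory = [
    # (feature/attribute, likelihood, business impact)
    ("Spelling mistakes",            2, 1),
    ("Wrong telephone #s",           1, 3),
    ("Slow performance",             3, 3),
    ("Does not work with Browser X", 2, 3),
]

# Step 4: first-cut testing priority, sorted highest risk first.
prioritized = sorted(
    ((name, likelihood * impact) for name, likelihood, impact in inventory),
    key=lambda item: item[1],
    reverse=True,
)

for name, priority in prioritized:
    print(f"{priority:2}  {name}")
```

Because the data lives in a plain list, the same structure can be exported to a spreadsheet or re-sorted on any column as the analysis evolves.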
  • 48. If time or resources are an issue, then the priority associated with each feature or attribute can be used to determine which test cases should be created and/or run. TIP: Used wisely, a prioritized inventory with a “cut off” point can be powerful when negotiating with senior management. In addition to using the risk analysis to determine test case/run priority, it can be used as a starting point for identifying failure points and subsequently designing test cases to specifically exercise the suspected failure points. This technique is often used by organizations with extremely low risk tolerances (e.g., medical device manufacturers, the military, and space agencies).
  • 49. 7 ― Planning Risks and Contingencies
A planning risk is anything that adversely affects the planned testing effort (schedule, completeness, quality, etc.). The ISTQB refers to these as project risks.
  • 50. The purpose of identifying planning risks is to allow contingency plans to be developed ahead of time, ready for implementation in case the event occurs. Examples of planning risks:
Risk: Project start time is slightly delayed, but the delivery date has not changed
Contingency: Staff works overtime
Prerequisites: Overtime is approved by senior management, and staff have stated willingness to work overtime
Risk: Microsoft releases a new version of its browser halfway through testing (and the delivery date has not changed)
Contingency: Don’t run some of the lower priority test cases for the Web site and re-run the standard smoke test with the new browser
Risk: Entire testing staff wins the state lottery
Contingency: Make sure you are in the syndicate
  • 51. There are many contingencies to consider, but in most cases they will all fall into one of the categories shown above. For example, reducing testing or development time is the same as reducing quality, while increasing resources could include users, developers, contractors, or just overtime, etc. Many organizations have made a big show of announcing their commitment to quality with quality circles, quality management, total quality management (TQM), etc. Unfortunately, in the software world many of these same organizations have demonstrated that their only true commitment is to the schedule. Many software projects have schedules that are at best ambitious and at worst impossible. Once an implementation date is set, it is often considered sacred. Customers may have been promised a product on a certain date; management credibility is on the line; corporate reputation is at stake; or the competitors may be breathing down a company’s neck. At the same time, an organization may have stretched its resources to the limit. It is not the purpose of this course to address the many reasons why test managers so often find themselves in this unenviable spot but to discuss what you can do about it.
  • 52. 8 ― Approach
Some of these example strategies may not be applicable for every organization or project. Since this section is the heart of the test plan, some companies choose to label it “strategy” rather than “approach.” The approach should contain a description of how testing will be done (approach) and discuss any issues that have a major impact on the success of testing and ultimately of the project (strategy). For a master test plan, the approach to be taken for each level should be discussed, including the entrance and exit criteria from one level to another.
EXAMPLE: System testing will take place in the test labs in our London office. The testing effort will be under the direction of the London VV&T team, with support from the development staff and users in our New York office. An extract of production data from an entire month will be used for the entire testing effort. Test plans, test design specifications, and test case specifications will be developed using the IEEE/ANSI guidelines. All tests will be captured using a testing tool for subsequent regression testing. Tests will be designed and run to test all features listed in section 4 of the system test plan. Additionally, testing will be done in concert with our Paris office to test the billing interface. Performance, security, load, reliability, and usability testing will be included as part of the system test. Performance testing will begin as soon as the system has achieved stability. All user documentation will be tested in the latter part of the system test.
  • 53. Many organizations use an “off-the-shelf” methodology; others have either created a brand new methodology from scratch or have adapted somebody else’s methodology. In the event that your organization does not have even a rudimentary process, consider using your next project as a “pilot” project. The decisions, plans, and documentation generated by this project can be used as a basis for future project enhancement and improvement. FYI: A European telecommunications company runs an annual “process sample” competition. The winning team’s documentation is used as the “sample” appendix in the company’s process handbook. Along with the prestige that accompanies selection as this year’s “model,” the team members also receive a cash prize.
  • 54. Perhaps the two most important entrance and exit criteria for a test manager are:
- The exit criteria for unit/integration testing (i.e., what should development have done/completed during its testing phase?)
- The entrance criteria into system testing (i.e., what can the test group expect?)
  • 55. If you want to create a simple Web site consisting of only one HTML file, you only need to upload that one file. On a typical Web site involving dozens, hundreds, or even thousands of files, however, the process of uploading a Web site becomes more complicated and time consuming, especially when the Web site runs applications that need to be built themselves. A common practice at several software companies is the “daily build and smoke test” process. Every file is compiled, linked, and uploaded to a test Web site every day, and the Web site is then put through a “smoke test,” a relatively simple check to see whether the Web site “smokes” when it’s used.
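As a rough illustration of the smoke-test idea, the sketch below checks that a handful of key pages still respond after the daily build is uploaded. The page URLs, the stand-in fetch function, and the pass criterion are all assumptions made for this example, not part of the course material.

```python
# Minimal daily "smoke test" sketch: after the nightly build is uploaded,
# fetch a handful of key pages and flag any of them that "smoke".
def smoke_test(pages, fetch):
    """Return the list of pages that failed a basic health check.

    `fetch` is any callable returning (status_code, body) for a URL,
    so the same check works against a real site or a test double.
    """
    failures = []
    for url in pages:
        status, body = fetch(url)
        if status != 200 or not body:
            failures.append(url)
    return failures

# Example with a stand-in fetch function (no real network access):
fake_site = {
    "/": (200, "<html>home</html>"),
    "/login": (200, "<html>login</html>"),
    "/search": (500, ""),  # this page "smokes"
}
result = smoke_test(fake_site, lambda url: fake_site[url])
print(result)  # the broken page(s)
```

In practice such a script would be run by a scheduler right after the build-and-upload step, with a failure breaking the build for that day.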
  • 56. Perhaps the most well-known form of coverage is code coverage. However, there are other coverage measures:
- Requirements coverage attempts to estimate the percentage of business requirements that are being tested by the current test set.
- Design coverage attempts to measure how much of the high-level design is being validated by the current test set.
- Interface coverage attempts to estimate the percentage of module interfaces that are being exercised by the current test set.
- Code coverage attempts to measure the percentage of program statements, branches, or paths that are being executed by the current test set. Code coverage typically requires the assistance of a special tool.
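Of these measures, requirements coverage is the easiest to compute without a special tool: it is just the fraction of requirements exercised by at least one test in the current set. A minimal sketch, with invented requirement and test-case IDs:

```python
# Requirements coverage: percentage of requirements touched by the
# current test set. The IDs below are invented for illustration.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
test_set = {
    "TC-01": {"REQ-1"},
    "TC-02": {"REQ-1", "REQ-3"},
}

# Union of everything the tests claim to exercise, restricted to
# requirements we actually track.
covered = set().union(*test_set.values())
coverage = 100 * len(covered & requirements) / len(requirements)
print(f"Requirements coverage: {coverage:.0f}%")  # 2 of 4 -> 50%
```

The same shape of calculation works for design or interface coverage, substituting design elements or module interfaces for the requirement IDs.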
  • 57. Another topic that should generally be discussed in the approach is how configuration management will be handled during test. However, it is possible that this could be handled in a document of its own in some companies. Configuration management in this context includes change management as well as the decision-making process used to prioritize bugs. Change management is important because it is critical to keep track of the version of the software and related documents that are being tested. There have been many woeful tales of companies that have actually shipped the wrong (untested) version of the software. Equally important is the process for reviewing, prioritizing, fixing, and re-testing bugs. The test environment in some companies is controlled by the developers, which can be very problematic for test groups. As a rule, programmers want to fix every bug immediately. It’s as though the programmers feel that if they can fix the bug quickly enough, it didn’t happen! Testers, on the other hand, are famous for saying that “testing a spec is like walking on water; it helps if it’s frozen.” Obviously both of these extremes are counterproductive. If every bug fix were re-implemented immediately, the testers would never do anything but regression testing. Conversely, if the code is frozen prematurely, eventually the tests will become unrealistic. The key is to agree on a process for reviewing, fixing, and implementing bug fixes back into the test environment. This process may be very informal during unit and integration test but will probably need to be much more rigid at higher levels of test.
  • 61. Another strategy issue that should probably be addressed in the test plan is the use of tools and automation. Testing tools can be a benefit to the development and testing staff, but they can also spell disaster if their use is not planned. Using some types of tools can actually require more time to develop, implement, and run a test set the first time than if the tests were run manually. Using tools, however, may save time during regression testing, and other types of tools can pay time dividends from the very beginning.
Rules of thumb for deciding which test cases to automate:
- Repetitive tasks (e.g., regression testing)
- Longer procedures
- Tedious tasks (e.g., code coverage/complexity)
- Performance testing
- Automate if the test will be run more than x times (e.g., 3, 4, 5, or ?)
Automation issues:
- Plan for how to support the methodology
- Train in the mechanics of the tool
- Ensure a stable application
- Configure the environment
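The "run more than x times" rule of thumb is really a break-even calculation: automation pays for itself once the accumulated savings per run exceed the one-off scripting cost. A sketch, with all effort figures assumed purely for illustration:

```python
# Break-even point for automating a single test case.
# All effort figures (in hours) are assumptions for illustration.
def break_even_runs(manual_run, automated_run, build_cost):
    """Smallest number of runs at which automation becomes cheaper.

    manual_run:    hours to execute the test by hand, per run
    automated_run: hours to kick off and check the automated run
    build_cost:    one-off hours to script and debug the test
    """
    saving_per_run = manual_run - automated_run
    runs = 1
    while runs * saving_per_run < build_cost:
        runs += 1
    return runs

# Example: 2 h manual, 0.1 h automated, 12 h to build the script.
print(break_even_runs(2.0, 0.1, 12.0))
```

With these assumed numbers, the test pays for its automation after a handful of regression cycles; with a fast-changing UI that forces frequent script maintenance, the effective build cost rises and the break-even point moves out.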
  • 62. Test tool realities:
- Many testers are highly interested in tools but either do not have the time or do not want to apply the effort to use them correctly.
- Testers know nothing happens by magic but want to believe test tools will solve all testing problems.
- Tool use must be taught on an ongoing basis. Benefits and requirements of each tool need to be understood by everyone. Training must be followed up with assistance and support. Help should be available by phone.
- Tools must be integrated into routine procedures and processes. This includes simplified job control, software interfaces, etc.
  • 63. 9 ― Item Pass/Fail Criteria
Just as every test case needs an expected result, each test item needs pass/fail criteria. Typically, pass/fail criteria are expressed in terms of:
- Percentage of test cases passed/failed
- Number, type, severity, and location of defects
- Usability
- Reliability
- Stability
The exact criteria used will vary from level to level and organization to organization. If you’ve never tried to do this before, you may find it a little frustrating the first time or two. However, trying to specify “what is good enough” in advance can really help crystallize the thinking of the various test planners and reduce contention later. If the software developer is a contractor, this section of the MTP can even have legal ramifications. An extreme example of a design pass/fail criterion: when the number of bugs reaches a certain predefined level, the entire design is scrapped and a new design is developed from scratch.
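Criteria like these can be checked mechanically at the end of a test cycle. The sketch below evaluates a test item against two assumed criteria, a minimum pass rate and a cap on open severity-1 defects; both thresholds are examples, not recommendations:

```python
# Item pass/fail check against two illustrative criteria:
#   - at least 95% of test cases passed
#   - no open defects of severity 1
def item_passes(results, open_defects, min_pass_rate=0.95, max_sev1=0):
    """results: list of "pass"/"fail"; open_defects: list of severities."""
    passed = sum(1 for outcome in results if outcome == "pass")
    pass_rate = passed / len(results)
    sev1_open = sum(1 for severity in open_defects if severity == 1)
    return pass_rate >= min_pass_rate and sev1_open <= max_sev1

results = ["pass"] * 97 + ["fail"] * 3  # 97% of 100 test cases passed
print(item_passes(results, open_defects=[2, 3]))  # True
print(item_passes(results, open_defects=[1]))     # False: open sev-1 defect
```

The value of encoding the criteria this way is less the automation than the forced precision: vague phrases like "mostly working" have to become numbers someone can agree or disagree with before execution starts.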
  • 64. 10 ― Suspension and Resumption Criteria
The purpose of this MTP section is to identify any conditions that warrant a temporary suspension of all or some of the testing. Because test execution time is often so hurried, testers have a tendency to surge forward no matter what happens. Unfortunately, this can often lead to additional work and a great deal of frustration. For example, if a group is testing some kind of communications network or switch, there may come a time when it is no longer useful to continue testing a particular interface if the protocol to be used is undefined or in flux. Sometimes, metrics are established to flag a condition that warrants suspending testing. For example, if a certain predefined number of total defects, or of defects of a certain severity, is encountered, testing may be halted until a determination can be made whether to redesign part of the system, try an alternate approach, etc. Sometimes, suspension criteria are displayed in the form of a Gantt chart (a bar chart that illustrates a project schedule, including dependencies). Examples of suspension criteria include:
- The Web server hosting the Web site under test becomes unavailable
- The software license for a key testing tool expires
- Sample production data to be used for test data is unavailable
- Key end-user personnel are unavailable
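The metric-based trigger described above, halting when total or severe defect counts cross predefined limits, can be sketched as a simple check; the thresholds and the severity scale are assumptions made for this illustration:

```python
# Suspension-criteria check: halt testing when defect counts cross
# predefined thresholds. The thresholds here are illustrative only.
def should_suspend(defects, max_total=50, max_severe=5, severe_level=2):
    """defects is a list of severity numbers (1 = most severe).

    Suspend when either the total defect count or the count of
    defects at or above `severe_level` reaches its limit.
    """
    severe = sum(1 for sev in defects if sev <= severe_level)
    return len(defects) >= max_total or severe >= max_severe

print(should_suspend([3, 3, 4]))        # False: well under both limits
print(should_suspend([1, 1, 2, 2, 1]))  # True: five severe defects logged
```

Resumption criteria would be the mirror image: a check that the blocking condition (defect backlog triaged, environment restored, etc.) has cleared before execution restarts.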
  • 65. 11 ― Testing Deliverables
This is a listing of all documents, tools, and other elements that are to be developed and maintained in support of the testing effort. Examples include: test plans, test design specifications, test cases, custom tools, defect reports, test summary reports, and simulators. The software to be tested is not a test deliverable; that is listed under “Test Items.”
  • 66. 12 ― Testing Tasks
The IEEE defines this section of the Master Test Plan as: Identify the set of tasks necessary to prepare for and perform testing. Identify all intertask dependencies and any special skills required. This section can be used to keep a tally of tasks that need to be completed. It is useful to assign responsibilities/support duties as well.
TIP: Once a task is complete, don’t delete the task from the list. Instead, cross it off to indicate to anyone unfamiliar with the project that the task has been completed and not missed.
TIP: Embedding the test names and/or test IDs into the Master Test Plan will allow a word processor to find where a particular test case is referenced much faster than a manual “eyeball” search.
  • 67. 13 ― Environmental Needs
Hardware configuration: An attempt should be made to make the platform as similar to the real-world system as possible. If the system is destined to run on multiple platforms, a decision must be made whether to replicate all of these configurations or only targeted configurations (e.g., the riskiest, the most common). When you’re determining the hardware configuration, don’t forget the system software as well.
Data: Again, it is necessary to identify where the data will come from to populate the test database/files. Choices might include production data, purchased data, user-supplied data, generated data, and simulators. It will be necessary to determine how to validate the data. You should not assume that even production data is totally accurate. You must also assess the fragility of the data so you know how often to update it!
Interfaces: When planning the test environment, it is very important to determine and define all interfaces. Occasionally the systems that you must interface with already exist; in other instances, they may not yet be ready and all you have to work with is a design specification or some type of protocol. If the interface is not already in existence, building a realistic simulator may be part of your testing job.
Facilities, publications, security access, etc.: This may seem trivial, but you must ensure that you have somewhere to test, appropriate security clearance, and so forth.
  • 68. 14 ― Responsibilities
Using a matrix in this section of the MTP quickly shows major responsibilities such as establishment of the test environment, configuration management, unit testing, and so forth.
TIP: It is a good idea to specify the responsible parties by name or by organization.
  • 69. 15 ― Staffing and Training Needs
While the actual number of staff required is, of course, dependent on the scope of the project, schedule, etc., this section of the MTP should be used to describe the number of people required and what skills they need. You may simply want to say that you need fifteen journeyman testers and five apprentice testers. Often, however, you will have to be more specific. It is certainly acceptable to state that you need a particular person: “We must have Jane Doe to help establish a realistic test environment.” Examples of training needs might include:
- How to use a tool
- Testing methodologies
- Interfacing systems
- Management systems, such as defect tracking
- Configuration management
- Basic business knowledge (related to the system under test), etc.
  • 70. 16 ― Schedule
The schedule should be built around the milestones contained in the project plan, such as delivery dates of various documents and modules, availability of resources, and interfaces. Then, it will be necessary to add all of the testing milestones. These testing milestones will differ in level of detail depending on the level of the test plan being created. In a master test plan, milestones will be built around major events such as requirements and design reviews, code delivery, completion of user manuals, and availability of interfaces. In a unit test plan, most of the milestones will be based on the completion of various programming specs and units. Initially, it may be necessary to build a generic schedule without calendar dates. This will identify the time required for various tasks and dependencies without specifying start and finish dates. Normally, the schedule will be portrayed graphically using a Gantt chart to show dependencies.
TIP: While doing the initial planning, use a start day of day zero, rather than a specific date (e.g., May 14). Unfortunately, when specific dates are used, many reviewers focus on the start and end dates and ignore the middle (i.e., the schedule).
  • 72. 17 ― Approvals
The approver should be the person (or persons) who can say that the software is ready to move to the next stage. For example, the approver on a unit test plan might be the development manager. The approvers on a system test plan might be the person in charge of the system test and whoever is going to receive the product next (which may be the customer, if they are going to be doing the acceptance testing). In the case of the master test plan, there may be many approvers: developers, testers, customers, QA, configuration management, etc. You should try to avoid the situation in which you seek the appropriate signatures after the plan has been completed. If you do get the various parties to sign at that time, all you have is their autograph (which is fine if they ever become famous and you’re an autograph collector). Instead, your goal should be to get agreement and commitment, which means that the approvers should have been involved in the creation and/or review of the plan during its development. It is part of your challenge as the test planner to determine how to involve all of the approvers in the test planning process.
TIP: If you have trouble getting the right people involved in writing the test plan, consider inviting them to a test planning meeting and then publish the minutes of the meeting as the first draft of the plan.
  • 73. The purpose of the Test Summary Report is to summarize the results of the designated testing activities and to provide evaluations based on these results. The IEEE defines a Test Summary Report as being made up of the following sections:
- Report Identifier: Specify the unique identifier assigned to the Test Summary Report.
- Summary: Summarize the evaluation of the test items. Identify the items tested, indicating their version/revision level. Indicate the environment in which the testing activities took place. For each item, supply references to the following documents (if they exist): test plan, test design specifications, test procedure specifications, test item transmittal reports, test logs, and test incident reports.
- Variances: Report any variances of the test items from their design specifications. Indicate any variances from the test plan, test designs, or test procedures. Specify the reason(s) for each variance.
- Comprehensive Assessment: Evaluate the comprehensiveness of the testing process against the comprehensiveness criteria specified in the test plan, if the plan exists. Identify features or feature combinations that were not sufficiently tested and explain the reasons.