Toss out the notion that you need to add more QA testers. Automation alone may not be enough to turn your team around. Achieving the velocity and delivery you need in your development projects requires tracking the right key indicators so that you can unlock your team's full potential. Matt Angerer, ResultsPositive Senior Solutions Architect, has more in this focused and insightful presentation.
1. Authored Tests
This KPI is important for Test Managers because it helps them monitor the test design activity of their Business Analysts and Test Engineers. As new requirements are written, it’s important to develop associated system tests and decide whether those test cases should be flagged for your regression test suite. In other words, is the test that your Test Engineer is writing going to cover a critical piece of functionality in your Application Under Test (AUT)? If yes, flag it for your regression testing suite and slot it for automation. If no, add it to the bucket of manual tests that can be executed ad hoc when necessary. Our suggestion is to track “Authored Tests” in relation to the number of requirements for a given IT project. In other words, if you subscribe to the philosophy that every requirement should have test coverage (i.e., an associated test), then you should set the threshold for this KPI to equal the number of requirements or user stories outlined for a sprint. That equates to one (1) test case for every requirement in “Ready” status.
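If it helps to see the arithmetic, here is a minimal Python sketch of the one-test-per-“Ready”-requirement threshold. The record layout and field names are our own illustration, not an HP ALM export format.

# Hypothetical requirement and test records pulled from your project.
requirements = [
    {"id": "REQ-1", "status": "Ready"},
    {"id": "REQ-2", "status": "Ready"},
    {"id": "REQ-3", "status": "Draft"},
]
authored_tests = [{"id": "TST-1"}, {"id": "TST-2"}]

# Threshold: one authored test per requirement in "Ready" status.
threshold = sum(1 for r in requirements if r["status"] == "Ready")
print(f"Authored tests: {len(authored_tests)} / threshold: {threshold}")
if len(authored_tests) < threshold:
    print("Test design is lagging behind requirement intake.")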
2. Automated Tests
We have to admit that this is a tricky KPI to track. Opinions abound on what to automate vs. what not to automate, as well as the costs associated with maintaining automated system test cases. Generally speaking, the more automated tests you have in place, the more likely it is that you’ll trap critical defects introduced into your software delivery stream. What we suggest doing with this KPI is to start small and adjust upward as your QA team evolves and matures. Set a threshold that 20% of test cases should be automated. Tracking this in HP ALM is simple to do through Project Planning and Tracking (PPT), which is not available in HP Quality Center Enterprise Edition.
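As a rough sketch of how the 20% starting threshold could be computed outside of PPT (the automated flag and data layout are assumptions for illustration):

# Each test case carries a manual/automated flag; layout is illustrative.
test_cases = [
    {"id": "TST-1", "automated": True},
    {"id": "TST-2", "automated": False},
    {"id": "TST-3", "automated": False},
    {"id": "TST-4", "automated": False},
    {"id": "TST-5", "automated": True},
]

automated_pct = 100 * sum(t["automated"] for t in test_cases) / len(test_cases)
THRESHOLD_PCT = 20  # start small; raise it as the team matures
print(f"Automated: {automated_pct:.0f}% (target: {THRESHOLD_PCT}%)")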
3. Tracking Active Defects
Tracking active defects is a pretty simple KPI that you should be monitoring regardless. The Active Defects KPI is better when the values are lower. Every software IT project comes with its fair share of defects. Depending on the magnitude and complexity of the project, I have seen 250+ defects active at any given time. The word “active” for this KPI could mean the status is new, open, or fixed (and waiting for re-test). Basically, if the defect is getting worked, then it’s active. As a Test Manager, you should set the threshold based on historical data from the IT projects you oversee. Whether that’s 100 defects, 50 defects, or 25 defects, your threshold determines when it is okay and when it is not. Anything above the threshold you set is “Not OK” and should be flagged for immediate action.
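A minimal sketch of the active-defect count against a historical threshold; the status names mirror the paragraph above, and the threshold of 50 is just a placeholder:

# "Active" means the defect is still being worked: new, open, or
# fixed-and-awaiting-retest. Status names and threshold are illustrative.
ACTIVE_STATUSES = {"New", "Open", "Fixed"}
THRESHOLD = 50  # set from historical data on your own projects

defects = [
    {"id": "DEF-101", "status": "Open"},
    {"id": "DEF-102", "status": "Fixed"},
    {"id": "DEF-103", "status": "Closed"},
]

active = [d for d in defects if d["status"] in ACTIVE_STATUSES]
flag = "Not OK - act now" if len(active) > THRESHOLD else "OK"
print(f"Active defects: {len(active)} ({flag})")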
4. Covered Requirements
As a former QA Test Manager, this is by far my favorite KPI to track. Here we track the percentage of requirements covered by at least one test. One hundred percent test coverage should be the goal for your QA organization in 2016. The validity of a requirement hinges on whether a test exists to prove whether it works or not. The same holds true for a test that lives in your test plan: its validity hinges on whether it was designed to test a requirement. If it’s not traced back up to a requirement, why do you need the test? Every day as a Test Manager you should monitor this KPI and question the value of orphaned requirements and orphaned tests. If they are orphaned, find them a home: trace each orphaned test to a specific requirement, and author a test for each orphaned requirement.
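Here is one way to sketch the coverage percentage and surface both kinds of orphans with simple set arithmetic; the IDs and coverage links are invented for illustration:

# Coverage links are (test id, requirement id) pairs; names are made up.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}
tests = {"TST-1", "TST-2", "TST-3"}
coverage = {("TST-1", "REQ-1"), ("TST-2", "REQ-2"), ("TST-3", "REQ-2")}

covered_reqs = {req for _, req in coverage}
tracing_tests = {tst for tst, _ in coverage}

coverage_pct = 100 * len(covered_reqs & requirements) / len(requirements)
print(f"Covered requirements: {coverage_pct:.0f}% (goal: 100%)")
print("Orphaned requirements (no test):", requirements - covered_reqs)
print("Orphaned tests (no requirement):", tests - tracing_tests)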
5. Defects Fixed Per Day
Don’t lose sight of how efficiently your development counterparts are working to rectify the defects brought to their attention. The Defects Fixed Per Day KPI ensures that your development team is hitting the “standard” when it comes to turning around fixes and keeping the build moving forward.
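A small sketch of the fix-rate arithmetic, assuming you can export the date each defect moved to “Fixed”; the dates below are illustrative:

from collections import Counter
from datetime import date

# Date each defect was moved to "Fixed"; data is illustrative.
fix_dates = [
    date(2016, 3, 1), date(2016, 3, 1), date(2016, 3, 2),
    date(2016, 3, 2), date(2016, 3, 2), date(2016, 3, 3),
]

fixes_per_day = Counter(fix_dates)
average = len(fix_dates) / len(fixes_per_day)
for day, count in sorted(fixes_per_day.items()):
    print(f"{day}: {count} fixed")
print(f"Average fixed per day: {average:.1f}")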
6. Passed Requirements
Measuring your passed requirements is an effective method of taking the pulse of a given testing cycle. It is also a good measure to consider during a Go/No-Go meeting for a large release.
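A sketch of one plausible roll-up rule: a requirement counts as passed only when every test covering it passed on its last run. That roll-up is our assumption, so adjust it to match how your project derives coverage status.

# Last-run status of each test, and which requirement each test covers.
# The all-tests-must-pass roll-up is an assumption; adjust to your rules.
last_run = {"TST-1": "Passed", "TST-2": "Passed", "TST-3": "Failed"}
covers = {"TST-1": "REQ-1", "TST-2": "REQ-2", "TST-3": "REQ-2"}

reqs = set(covers.values())
passed = {
    r for r in reqs
    if all(last_run[t] == "Passed" for t, req in covers.items() if req == r)
}
print(f"Passed requirements: {len(passed)}/{len(reqs)}", sorted(passed))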
7. Passed Tests
Sometimes you need to look beyond the requirements level and peer into the execution of every test configuration within a test. A test configuration is basically a permutation of a test case that inputs different data values. The Passed Tests KPI is complementary to your Passed Requirements KPI and helps you understand how effective your test configurations are at trapping defects. Keep in mind that you can be quickly fooled into thinking you have a quality build on your hands with this KPI if you don’t have a good handle on the test design process. Low-quality test cases often yield passing results when in fact there are still issues with the build. Make sure that your team is diligent in exercising different branches of logic when designing test cases, and this KPI will be of more value.
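To make the configuration-level view concrete, here is a short sketch that scores each test configuration’s last run individually; the run records are invented:

# Each test has configurations that feed it different data values.
runs = [
    {"test": "TST-1", "config": "US-English", "status": "Passed"},
    {"test": "TST-1", "config": "FR-French",  "status": "Failed"},
    {"test": "TST-2", "config": "Default",    "status": "Passed"},
]

passed = sum(r["status"] == "Passed" for r in runs)
print(f"Passed test configurations: {passed}/{len(runs)} "
      f"({100 * passed / len(runs):.0f}%)")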
8. Rejected Defects
The Rejected Defects KPI is known for its ability to identify a training opportunity for your Software Testing Engineers. Think about it for a minute: if your development team is rejecting a high number of defects with a comment like “works as designed,” maybe you should take your team through the design documentation of the application under test. No more than 5% of the defects submitted should ever be rejected.
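The 5% ceiling reduces to simple arithmetic; the counts here are made up:

# Rejection rate across all submitted defects; 5% ceiling per the text.
submitted = 120  # defects submitted this cycle (illustrative)
rejected = 9     # rejected, e.g. "works as designed"

rejection_rate = 100 * rejected / submitted
print(f"Rejection rate: {rejection_rate:.1f}% (ceiling: 5%)")
if rejection_rate > 5:
    print("Consider a walkthrough of the design documentation.")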
9. Reviewed Requirements
The Reviewed Requirements KPI is more of a “Prevention KPI” than a “Detection KPI.” As you may have noticed, several of the KPIs we have listed focus on the detection of defects rather than on how they can be prevented. This KPI, however, focuses on identifying which requirements (or user stories) have been reviewed for ambiguity. As we know, ambiguous requirements lead to bad design decisions and ultimately wasted resources. As a QA or Testing Manager, it is your responsibility to monitor whether each requirement has been reviewed by a subject matter expert (SME) within your organization who truly understands the business process that the technology supports.
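A sketch of how you might track SME review status, assuming a hypothetical “reviewed_by” field on each requirement:

# Each requirement records which SME (if anyone) reviewed it for
# ambiguity; "reviewed_by" is a hypothetical custom field.
requirements = [
    {"id": "REQ-1", "reviewed_by": "jdoe"},
    {"id": "REQ-2", "reviewed_by": None},
    {"id": "REQ-3", "reviewed_by": "asmith"},
]

unreviewed = [r["id"] for r in requirements if not r["reviewed_by"]]
pct = 100 * (len(requirements) - len(unreviewed)) / len(requirements)
print(f"Reviewed: {pct:.0f}%  Unreviewed: {unreviewed}")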
10. Severe Defects
We see too many of our clients get hung up on the severity level of defects. It’s a great KPI to monitor, but make certain that your team employs checks and balances when setting the severity of a defect. After you ensure the necessary checks and balances are in place, you can set a threshold for this KPI. If a defect’s severity is Urgent or Very High, count it against this KPI. If the total count exceeds 10, throw a red flag.
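A minimal sketch of the severity count and the red flag at 10; the severity labels follow the paragraph above:

# Count defects whose severity is Urgent or Very High; red-flag past 10.
SEVERE = {"Urgent", "Very High"}
RED_FLAG = 10

defects = [
    {"id": "DEF-201", "severity": "Urgent"},
    {"id": "DEF-202", "severity": "Medium"},
    {"id": "DEF-203", "severity": "Very High"},
]

severe_count = sum(d["severity"] in SEVERE for d in defects)
status = "RED FLAG" if severe_count > RED_FLAG else "OK"
print(f"Severe defects: {severe_count} ({status})")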
11. Test Instances Executed
This KPI relates only to the velocity of your test execution plan. It doesn’t provide insight into the quality of your build; instead, it sheds light on the percentage of the total instances in a test set that have been executed. Think of it as a balance sheet for your test instances in the Test Lab of HP ALM. As a Test Manager, you can monitor this KPI along with a test execution burn-down chart to gauge whether additional testers may be required for projects with a large manual testing focus.
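A sketch of the executed percentage for one test set; treating “No Run” as the not-yet-executed status is our assumption:

# Execution progress of a test set: instances with any run vs. total.
test_set = [
    {"instance": "TST-1 [1]", "status": "Passed"},
    {"instance": "TST-2 [1]", "status": "No Run"},
    {"instance": "TST-3 [1]", "status": "Failed"},
    {"instance": "TST-3 [2]", "status": "No Run"},
]

executed = sum(i["status"] != "No Run" for i in test_set)
print(f"Executed: {executed}/{len(test_set)} "
      f"({100 * executed / len(test_set):.0f}% of the test set)")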
12. Tests Executed
Building this KPI in HP ALM is a way to look beyond test instances and monitor all the different types of test execution, including manual, automated, etc. This shouldn’t be your only tool to monitor velocity during a given sprint or test execution cycle; you should also pay close attention to the KPIs described above. This KPI is more or less a velocity KPI, whereas a few of the ones outlined above help you monitor “preventative measures” while comparing them to “detection measures.”
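As a last sketch, here is a simple tally of runs per day by execution type, which is the kind of velocity view this KPI gives; the run records are invented:

from collections import Counter

# Every run that happened in the cycle, manual or automated.
runs = [
    {"day": "2016-03-01", "type": "manual"},
    {"day": "2016-03-01", "type": "automated"},
    {"day": "2016-03-02", "type": "automated"},
    {"day": "2016-03-02", "type": "automated"},
]

by_day_type = Counter((r["day"], r["type"]) for r in runs)
for (day, kind), count in sorted(by_day_type.items()):
    print(f"{day}  {kind:>9}: {count} run(s)")
print(f"Total tests executed: {len(runs)}")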