Common Testing Problems –
Pitfalls to Prevent and Mitigate:
Descriptions, Symptoms, Consequences, Causes,
and Recommendations
Donald G. Firesmith




© 2013 by Carnegie Mellon University
Common Testing Problems: Pitfalls to Prevent and Mitigate                                                                 25 January 2013
Descriptions, Symptoms, Consequences, Causes, and Recommendations



                                          Table of Contents
1     Introduction
    1.1     Usage
    1.2     Problem Specifications
    1.3     Problem Interpretation
2     Testing Problems
    2.1     General Testing Problems
      2.1.1        Test Planning and Scheduling Problems
      2.1.2        Stakeholder Involvement and Commitment Problems
      2.1.3        Management-related Testing Problems
      2.1.4        Test Organization and Professionalism Problems
      2.1.5        Test Process Problems
      2.1.6        Test Tools and Environments Problems
      2.1.7        Test Communication Problems
      2.1.8        Requirements-related Testing Problems
    2.2     Test Type Specific Problems
      2.2.1        Unit Testing Problems
      2.2.2        Integration Testing Problems
      2.2.3        Specialty Engineering Testing Problems
      2.2.4        System Testing Problems
      2.2.5        System of Systems (SoS) Testing Problems
      2.2.6        Regression Testing Problems
3     Conclusion
    3.1     Testing Problems
    3.2     Common Consequences
    3.3     Common Solutions
4     Potential Future Work
5     Acknowledgements



                                         Abstract
This special report documents the different types of problems that commonly occur when testing
software-reliant systems. These 77 problems are organized into 14 categories. Each problem is
given a title, a description, a set of potential symptoms by which it can be recognized, a set of
potential negative consequences that can result if the problem occurs, a set of potential causes
for the problem, and recommendations for avoiding the problem or solving it should it occur.





1 Introduction
Many testing problems can occur during the development or maintenance of software-reliant
systems and software applications. While no project is likely to be so poorly managed and
executed as to experience the majority of these problems, most projects will suffer several of
them. Similarly, while these testing problems do not guarantee failure, they definitely pose
serious risks that need to be managed.
Based on over 30 years of experience developing systems and software as well as performing
numerous independent technical assessments, this technical report documents 77 problems that
have been observed to commonly occur during testing. These problems have been categorized as
follows:
• General Testing Problems
        Test Planning and Scheduling Problems
        Stakeholder Involvement and Commitment Problems
        Management-related Testing Problems
        Test Organization and Professionalism Problems
        Test Process Problems
        Test Tools and Environments Problems
        Test Communication Problems
        Requirements-related Testing Problems
• Test Type Specific Problems
        Unit Testing Problems
        Integration Testing Problems
        Specialty Engineering Testing Problems
        System Testing Problems
        System of Systems (SoS) Testing Problems
        Regression Testing Problems

1.1 Usage
The information describing each of the commonly occurring testing problems can be used:
• To improve communication regarding commonly occurring testing problems
• As training materials for testers and the stakeholders of testing
• As checklists when:
       Developing and reviewing an organizational or project testing process or strategy
       Developing and reviewing test plans, the testing sections of system engineering
       management plans (SEMPs), and software development plans (SDPs)
       Evaluating the testing-related parts of contractor proposals
       Evaluating test plans and related documentation (quality control)

       Evaluating the actual as-performed testing process during oversight1 (quality assurance)
       Identifying testing risks and appropriate risk mitigation approaches
• To categorize testing problems for metrics collection, analysis, and reporting
• As an aid to identify testing areas potentially needing improvement during project post
   mortems (post-implementation reviews)
Although each of these testing problems has been observed on multiple projects, it is entirely
possible that you may have testing problems not addressed by this document.

1.2 Problem Specifications
The following tables document each testing problem with the following information:
• Title – a short descriptive name of the problem
• Description – a brief definition of the problem
• Potential Symptoms (how you will know) – potential symptoms that indicate the possible
   existence of the problem
• Potential Consequences (why you should care) – potential negative consequences to expect
   if the problem is not avoided or solved2
• Potential Causes – potential root and proximate causes of the problem3
• Recommendations (what you should do) – recommended (prepare, enable, perform, and
   verify) actions to take to avoid or solve the problem4
• Related Problems – a list of links to other related testing problems

1.3 Problem Interpretation
The goal of testing is not to prove that something works, but rather to demonstrate that it does
not.5 A good tester assumes that there are always defects (an extremely safe assumption) and
seeks to uncover them. Thus, a good test is one that causes the thing being tested to fail so that
the underlying defect(s) can be found and fixed.6
Defects are not restricted to violations of specified (or unspecified) requirements. Some of the
other important types of defects are:
• inconsistencies between the architecture, design, and implementation
• violations of coding standards
• lack of input checking (i.e., unexpected data)
• the inclusion of safety or security vulnerabilities (e.g., the use of inherently unsafe language
    features or lack of verification of input data)

1
    Not all testing problems have the same probability or harm severity. These problem specifications are not
    intended to be used as part of a quantitative scoring scheme based on the number of problems found. Instead,
    they are offered to support qualitative review and planning.
2
    Note that the occurrence of a potential consequence may be a symptom by which the problem is recognized.
3
    Causes are important because recommendations should be based on them. Recommendations that address root
    causes may be more important than those that address only proximate causes, because the latter may not
    combat the root cause and therefore may not prevent the problem under all circumstances.
4
    Some of the recommendations may no longer be practical after the problem rears its ugly head. It is usually much
    easier to avoid the problem or nip it in the bud than to fix it when the project is well along or near
    completion. For example, several possible ways exist to deal with inadequate time to complete testing, including
    (1) delay the test completion date and reschedule testing, or (2) keep the test completion date and (a) reduce the
    scope of delivered capabilities, (b) reduce the amount of testing, (c) add testers, and (d) perform more parallel
    testing (e.g., different types of testing simultaneously). Selection of the appropriate recommendations to follow
    therefore depends on the actual state of the project.
5
    Although tests that pass are often used as evidence that the system (or subsystem) under test meets its (derived
    and allocated) requirements, testing can never be exhaustive for even a simple system and therefore cannot
    “prove” that all requirements are met. However, system and operational testing can provide evidence that the
    system under test is “fit for purpose” and ready to be placed into operation. For example, certain types of testing
    may provide evidence required for safety and security accreditation and certification. Nevertheless, a tester must
    take a “show it fails” rather than a “show it works” mindset to be effective.
6
    Note that testing cannot identify all defects because some defects (e.g., the failure to implement missing
    requirements) do not cause the system to fail in a manner detectable by testing.
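The “show it fails” mindset and the lack-of-input-checking defect type above can be illustrated with a small sketch. The function and its checks are hypothetical (invented for this example, not taken from the report):

```python
def parse_percentage(text):
    """Convert a string such as "42" to an integer percentage.

    Input checking: reject non-numeric text and out-of-range values
    instead of silently returning garbage.
    """
    try:
        value = int(text)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {text!r}")
    if not 0 <= value <= 100:
        raise ValueError(f"out of range [0, 100]: {value}")
    return value


# A "show it works" test exercises only the sunny-day path:
assert parse_percentage("42") == 42

# A "show it fails" test deliberately probes boundaries and bad input,
# where missing input checks usually hide:
for bad in ["-1", "101", "abc", "", None]:
    try:
        parse_percentage(bad)
    except ValueError:
        pass  # expected: the input check caught the bad value
    else:
        raise AssertionError(f"input check missing for {bad!r}")

# Boundary values themselves must still be accepted:
assert parse_percentage("0") == 0
assert parse_percentage("100") == 100
```

If the range check were omitted, only the failure-seeking tests above would expose the defect; the sunny-day test would still pass.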

2 Testing Problems
The commonly occurring testing problems documented in this section are categorized as either
general testing problems or testing type specific problems.

2.1 General Testing Problems
The following testing problems can occur regardless of the type of testing being performed:
• Test Planning and Scheduling Problems
• Stakeholder Involvement and Commitment Problems
• Management-related Testing Problems
• Test Organization and Professionalism Problems
• Test Process Problems
• Test Tools and Environments Problems
• Test Communication Problems
• Requirements-related Testing Problems

2.1.1 Test Planning and Scheduling Problems
The following testing problems are related to test planning and scheduling:
• GEN-TPS-1 No Separate Test Plan
• GEN-TPS-2 Incomplete Test Planning
• GEN-TPS-3 Test Plans Ignored
• GEN-TPS-4 Test Case Documents rather than Test Plans
• GEN-TPS-5 Inadequate Test Schedule
• GEN-TPS-6 Testing is Postponed

2.1.1.1 GEN-TPS-1 No Separate Test Plan
 Description: There are no separate testing-specific planning document(s).
 Potential Symptoms:
 • There is no separate Test and Evaluation Master Plan (TEMP) or System/Software Test Plan
    (STP).
 • There are only incomplete high-level overviews of testing in the System Engineering Master
    Plan (SEMP) and System/Software Development Plan (SDP).
 Potential Consequences:
 • The test planning parts of these other documents are not written by testers.
 • Testing is not adequately planned.
 • The test plans are not adequately documented.
 • It is difficult or impossible to evaluate the planned testing process.


    •     Testing is inefficiently and ineffectively performed.
    Potential Causes:
    • The customer has not specified the development and delivery of a separate test plan.
    • The system engineering, software engineering, or testing process has not included the
       development of a separate test plan.
    • There was no template for the content and format of a separate test plan.
    • Management, the customer representative, or the testers did not understand the:
           scope, complexity, and importance of testing
           value of a separate test plan
    Recommendations:
    •     Prepare:
              Reuse or create a standard template and content/format standard for test plans.
               Include one or more separate TEMPs and/or STPs as deliverable work products in the
               contract.
              Include the development and delivery of test planning documents in the project master
              schedule (e.g., as part of major milestones).
    •     Enable:
              Provide sufficient resources (staffing and schedule) for the development of one or more
              separate test plans.
    •     Perform:
               Develop and deliver one or more separate TEMPs and/or STPs.
    •     Verify:
              Verify the existence and delivery of one or more separate test planning documents.
               Do not accept incomplete high-level overviews of testing in the SEMP and/or SDP as the
               only test planning documentation.

2.1.1.2 GEN-TPS-2 Incomplete Test Planning
    Description: The test planning documents are incomplete.
    Potential Symptoms:
    • The test planning documents are incomplete, missing some or all7 of the:
           references – listing of all relevant documents influencing testing
           test goals and objectives – listing the high-level goals and subordinate objectives of the
           testing program
           scope of testing – listing the component(s), functionality, and/or capabilities to be


7
        This does not mean that every test plan must include all of this information; test plans should include only the
        information that is relevant for the current project. It is quite reasonable to reuse much/most of this information
        in multiple test plans; just because it is highly reusable does not mean that it is meaningless boilerplate that can
        be ignored. Test plans can be used to estimate the amount of test resources (e.g., time and tools) needed as well
        as the skills/expertise that the testers need.
          tested (and any that are not to be tested)
          test levels – listing and describing the relevant levels of testing (e.g., unit, subsystem
          integration, system integration, system, and system of systems testing)
          test types – listing and describing the types of testing such as:
              blackbox, graybox, and whitebox testing
              developmental vs. acceptance testing
              initial vs. regression testing
              manual vs. automated
               mode-based testing (system start-up8, operational mode, degraded mode, training
               mode, and system shutdown)
              normal vs. abnormal behavior (i.e., nominal vs. off-nominal, sunny day vs. rainy day
              use case paths)
              quality criteria based testing such as availability, capacity (e.g., load and stress
              testing), interoperability, performance, reliability, robustness9, safety, security (e.g.,
              penetration testing), and usability testing
              static vs. dynamic testing
              time- or date-based testing
          testing methods and techniques – listing and describing the planned testing methods
          and techniques (e.g., boundary value testing, penetration testing, fuzz testing, alpha and
          beta testing) to be used including the associated:
               test case selection criteria – listing and describing the criteria to be used to select
               test cases (e.g., interface-based, use-case path, boundary value testing, and error
               guessing)
              test entrance criteria – listing the criteria that must hold before testing should
              begin
              test exit/completion criteria – listing the test completion criteria (e.g., based on
              different levels of code coverage such as statement, branch, condition coverage)
              test suspension and resumption criteria
          test completeness and rigor – describing how the rigor and completeness of the testing
          varies as a function of mission-, safety-, and security-criticality
          resources:
               staffing – listing the different testing roles and teams, their responsibilities, their
              associated qualifications (e.g., expertise, training, and experience), and their
              numbers
              environments – listing and description of required computers (e.g., laptops and
              servers), test tools (e.g., debuggers and test management tools), test environments
              (software and hardware test beds), and test facilities
           testing work products – listing and describing the testing work products to be


8
    This includes combinations such as the testing of system start-up when hardware/software components fail.
9
    This includes the testing of error, fault, and failure tolerance.
        produced or obtained such as test documents (e.g., plans and reports), test software (e.g.,
        test drivers and stubs), test data (e.g., inputs and expected outputs), test hardware, and
        test environments
        testing tasks – listing and describing the major testing tasks (e.g., name, objective,
        preconditions, inputs, steps, postconditions, and outputs)
        testing schedule – listing and describing the major testing milestones and activities in
        the context of the project development cycle, schedule, and major project milestones
        reviews, metrics, and status reporting – listing and describing the test-related reviews
        (e.g., Test Readiness Review), test metrics (e.g., number of tests developed and run),
        and status reports (e.g., content, frequency, and distribution)
        dependencies of testing on other project activities – such as the need to incorporate
        certain hardware and software components into test beds before testing using those
        environments can begin
        acronym list and glossary
 Potential Consequences:
 • Testers and stakeholders in testing may not understand the primary objective of testing (i.e.,
    to find defects so that they can be fixed).
 • Some levels and types of tests may not be performed, allowing certain types of residual
    defects to remain in the system.
 • Some testing may be ad hoc and therefore inefficient and ineffectual.
 • Mission-, safety-, and security-critical software may not be sufficiently tested to the
    appropriate level of rigor.
 • Certain types of test cases may be ignored, resulting in related residual defects in the tested
    system.
 • Test completion criteria may be based more on schedule deadlines than on the required
    degree of freedom from defects.
 • Adequate amounts of test resources (e.g., testers, test tools, environments, and test
    facilities) may not be made available because they are not in the budget.
 • Some testers may not have adequate expertise, experience, and skills to perform all of the
    types of testing that needs to be performed.
 Potential Causes:
 • There were no templates or content and format standards for separate test plans.
 • The associated templates or content and format standards were incomplete.
 • The test planning documents were written by people (e.g., managers or developers) who did
    not understand the scope, complexity, and importance of testing.
 Recommendations:
 • Prepare:
       Reuse or create a standard template and/or content/format standard for test plans.
 • Enable:
       Provide sufficient resources (staffing and schedule) to develop complete test plan(s).
 • Perform:

         Use a proper template and/or content/format standard to develop the test plans (i.e., ones
        that are derived from test plan standards and tailored if necessary for the specific
        project).
 •   Verify:
         Verify during inspections/reviews that all test plans are sufficiently complete.
        Do not accept incomplete test plans.
 Related Problems: GEN-TOP-2 Unclear Testing Responsibilities, GEN-PRO-8 Inadequate
 Test Evaluations, GEN-TTE-7 Tests Not Delivered, TTS-SPC-1 Inadequate Capacity
 Requirements, TTS-SPC-2 Inadequate Concurrency Requirements, TTS-SPC-3 Inadequate
 Performance Requirements, TTS-SPC-4 Inadequate Reliability Requirements, TTS-SPC-5
 Inadequate Robustness Requirements, TTS-SPC-6 Inadequate Safety Requirements, TTS-SPC-
 7 Inadequate Security Requirements, TTS-SPC-8 Inadequate Usability Requirements, TTS-
 SoS-1 Inadequate SoS Planning, TTS-REG-5 Disagreement over Maintenance Test Resources
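The test exit criteria listed above distinguish statement, branch, and condition coverage; the distinction matters because the three criteria are not equally strong. A minimal illustration in Python (both functions are invented for this sketch, not taken from the report):

```python
# Why statement, branch, and condition coverage are increasingly
# demanding test exit criteria.

def absolute(x):
    if x < 0:
        x = -x
    return x

# This single test executes every statement (the 'if' body and the
# 'return'), so statement coverage is 100%...
assert absolute(-3) == 3

# ...but the false branch of the 'if' was never taken, so branch
# coverage is only 50%. Branch coverage also demands this test:
assert absolute(4) == 4

def free_shipping(total, is_member):
    # A compound decision: both branch outcomes can be exercised
    # without each condition independently deciding the result.
    return total >= 50 or is_member

# These two tests achieve full branch coverage of the decision...
assert free_shipping(60, False) is True
assert free_shipping(10, False) is False

# ...yet 'is_member' never decided the outcome by itself; condition
# coverage additionally requires a test such as:
assert free_shipping(10, True) is True
```

A test plan whose exit criterion names only statement coverage can therefore be satisfied while leaving whole branches and conditions untested.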

2.1.1.3 GEN-TPS-3 Test Plans Ignored
 Description: The test plans are ignored once developed and delivered.
 Potential Symptoms:
 • The way the testers perform testing is not consistent with the relevant test plan(s).
 • The test plan(s) are never updated after initial delivery shortly after the start of the project.
 Potential Consequences:
 •   Management may not have budgeted sufficient funds to pay for the necessary test
     resources (e.g., testers, test tools, environments, and test facilities).
 •   Management may not have made adequate amounts of test resources available because they
     are not in the budget.
 •   Testers will not have an approved document that justifies:
         their request for additional needed resources when they need them
          their insistence that certain types of testing are necessary and must not be dropped when
          the schedule becomes tight
 •   Some testers may not have adequate expertise, experience, and skills to perform all of the
     types of testing that needs to be performed.
 •   The test plan may not be maintained.
 •   Some levels and types of tests may not be performed, allowing certain types of residual
     defects to remain in the system.
 •   Some important test cases may not be developed and executed.
 •   Mission-, safety-, and security-critical software may not be sufficiently tested to the
     appropriate level of rigor.
 •   Test completion criteria may be based more on schedule deadlines than on the required
     degree of freedom from defects.
 Potential Causes:

 •   The testers may have forgotten some of the test plan contents.
 •   The testers may have thought that the only reason a test plan was developed was because it
     was a deliverable in the contract that needed to be checked off.
 •   The test plan(s) may be so incomplete and at such a generic high level of abstraction as to
     be relatively useless.
 Recommendations:
 • Prepare:
       Have project management (both administrative and technical), testers, and quality
       assurance personnel read and review the test plan.
       Have management (acquisition and project) sign off on the completed test plan
       document.
        Use the test plan as input to the project master schedule and work breakdown structure
        (WBS).
 • Enable:
       Develop a short check list from the test plan(s) for use when assessing the performance
       of testing.
 • Perform:
       Have the test manager periodically review the test work products and as-performed test
       process against the test plan(s).
       Have the test team update the test plan(s) as needed.
 • Verify:
       Have the testers present their work and status at project and test-team status meetings.
        Have quality engineering periodically review the test work products (quality control)
        and the as-performed test process (quality assurance).
       Have progress, productivity, and quality test metrics collected, analyzed, and reported to
       project and customer management.
 Related Problems: GEN-TPS-2 Incomplete Test Planning

2.1.1.4 GEN-TPS-4 Test Case Documents rather than Test Plans
 Description: Test case documents documenting specific test cases are labeled test plans.
 Potential Symptoms:
 • The “test plan(s)” contain specific test cases including inputs, test steps, expected outputs,
    and sources such as specific requirements (blackbox testing) or design decisions (whitebox
    testing).
 • The test plans do not contain the type of general planning information listed in GEN-TPS-2
    Incomplete Test Planning.
 Potential Consequences:
 • All of the negative consequences of GEN-TPS-2 Incomplete Test Planning may occur.
 • The test case documents may not be maintained.


© 2012-2013 by Carnegie Mellon University                                         Page 13 of 111
Common Testing Problems: Pitfalls to Prevent and Mitigate                                          25 January 2013
Descriptions, Symptoms, Consequences, Causes, and Recommendations
 Potential Causes:
 • There may have been no template or content and format standard for the test case documents.
 • The test plan authors may not have had adequate expertise, experience, and skills to develop
    test plans or know their proper content.
 Recommendations:
 • Prepare:
       Provide the test manager and testers with at least minimal training in test planning.
 • Enable:
       Provide a proper test plan template.
       Provide a proper content and format standard for test plans.
       Add test plans and test case documents to the project technical glossary.
 • Perform:
       Develop the test plan in accordance with the test plan template or content and format
       standard.
       Develop the test case documents in accordance with the test case document template
       and/or content and format standard.
       Where practical, automate the test cases so that the resulting tests (extended with
       comments) replace the test case documents so that the distinction is clear (i.e., the test
       plan is a document meant to be read whereas the test case is meant to be executable).
 • Verify:
       Have the test plan(s) reviewed against the associated template or content and format
       standard prior to acceptance.
 Related Problems: GEN-TPS-2 Incomplete Test Planning
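The recommendation above to automate test cases so that commented tests replace test case documents can be sketched as follows. The function under test, its pricing rule, and the requirement ID are all hypothetical, invented purely to show the shape of such a test:

```python
# An automated test case standing in for a prose test case document.
# Each test's comments record what the document would have carried:
# objective, source requirement, inputs, steps, and expected outputs.

def shipping_cost(weight_kg):
    """System under test (illustrative): flat rate up to 5 kg, then per-kg."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 7.00 if weight_kg <= 5 else 7.00 + 1.50 * (weight_kg - 5)

def test_flat_rate_boundary():
    # Objective: verify the flat-rate boundary (blackbox, boundary value).
    # Source: requirement REQ-SHIP-012 (hypothetical).
    # Input: 5 kg, the upper boundary of the flat-rate band.
    # Expected output: flat rate of 7.00.
    assert shipping_cost(5) == 7.00

def test_per_kg_rate_above_boundary():
    # Objective: verify per-kg pricing just above the boundary.
    # Input: 6 kg. Expected output: 7.00 + 1.50 = 8.50.
    assert shipping_cost(6) == 8.50

def test_rejects_nonpositive_weight():
    # Objective: abnormal behavior (off-nominal path).
    # Input: 0 kg. Expected output: rejected with ValueError.
    try:
        shipping_cost(0)
    except ValueError:
        return
    raise AssertionError("missing input check for nonpositive weight")

test_flat_rate_boundary()
test_per_kg_rate_above_boundary()
test_rejects_nonpositive_weight()
```

Because each test is executable, it cannot silently drift out of date the way a prose test case document can; reviewers read the comments, while the build runs the assertions.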

2.1.1.5 GEN-TPS-5 Inadequate Test Schedule
 Description: The testing schedule is inadequate to permit proper testing.
 Potential Symptoms:
 • Testing is significantly incomplete and behind schedule.
 • Insufficient time is allocated in the project master schedule to perform all:
        test activities (e.g., automating testing, configuring test environments, and developing
        test data, test scripts/drivers, and test stubs)
         appropriate tests (e.g., abnormal behavior, quality requirements, regression testing)10
 • Testers are working excessively and unsustainably long hours and days per week in an
    attempt to meet schedule deadlines.


10
     Note that an agile (i.e., iterative, incremental, and concurrent) development/life cycle greatly increases the
     amount of regression testing needed (although this increase in testing can be largely offset by highly automating
     regression tests). Although testing can never be exhaustive, more time is typically needed for adequate testing
     unless testing can be made more efficient. For example, fewer defects could be produced and these defects could
     be found and fixed earlier and thereby be prevented from reaching the current iteration.
 Potential Consequences:
 • Testers are exhausted and therefore making an unacceptably large number of mistakes.
 • Tester productivity (e.g., importance of defects found and number of defects found per unit
    time) is decreasing.
 • Customer representatives, managers, and developers have a false sense of security that the
    system functions properly.
 • There is a significant probability that the system or software will be delivered late with an
    unacceptably large number of residual defects.
 Potential Causes:
 • The overall project schedule was insufficient.
 • The size and complexity of the system were underestimated.
 • The project master plan was written by people (e.g., managers, chief engineers, or technical
    leads) who do not understand the scope, complexity, and importance of testing.
 • The project master plan was developed without input from the test team(s).
 Recommendations:
 • Prepare:
       Provide evidence-based estimates of the amount of testing and associated test effort that
       will be needed.
       Ensure that adequate time for testing is included in the program master schedule and test
       team schedules including the testing of abnormal behavior and the specialty engineering
       testing of quality requirements (e.g., load testing for capacity requirements and
        penetration testing for security requirements). 11
       Provide adequate time for testing in change request estimates.
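Such evidence-based estimates can be as simple as projecting from an organization's own historical data. The following sketch illustrates the idea; the function name and all numbers are illustrative assumptions, not figures from this report:

```python
def estimate_test_effort(ksloc, defects_per_ksloc, defects_found_per_tester_day):
    """Project tester-days of test effort from historical project data.

    All three inputs are assumptions that should come from an
    organization's own completed projects, not from this sketch.
    """
    expected_defects = ksloc * defects_per_ksloc
    return expected_defects / defects_found_per_tester_day

# Illustrative only: 50 KSLOC, 10 defects/KSLOC historically,
# 4 defects found per tester-day of effort.
print(estimate_test_effort(50, 10, 4))  # 125.0 tester-days
```

An estimate grounded in such data, however rough, is far easier to defend in schedule negotiations than an informal guess.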
 • Enable:
       Deliver inputs to the testing process (e.g., requirements, architecture, design, and
       implementation) earlier and more often (e.g., as part of an incremental, iterative, parallel
       – agile – development cycle).
       Provide sufficient test resources (e.g., number of testers, test environments, and test
       tools).
       If at all possible, do not reduce the testing effort in order to meet a delivery deadline.
 • Perform:
       Automate as much of the regression testing as is practical, and allocate sufficient
        resources to maintain the automated tests. 12




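As a minimal sketch of such automation (the discount function and its past defects are hypothetical examples, not part of this report), each previously fixed defect gets a test that can be rerun automatically on every change:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    """Each test pins down the behavior of a previously fixed defect."""

    def test_zero_discount_returns_full_price(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_full_discount_returns_zero(self):
        self.assertEqual(apply_discount(19.99, 100), 0.0)

    def test_out_of_range_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)
```

Running `python -m unittest` in a continuous-integration job reruns the whole suite on every change, which is what keeps the regression-testing cost of an iterative life cycle manageable.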
 • Verify:
        Verify that the amount of time scheduled for testing is consistent with the
        evidence-based estimates of needed time.

 11
     Also integrate the testing process into the software development process.
 12
     When there is insufficient time to perform manual testing, it may be difficult to justify the automation of these
     tests. However, automating regression testing is not just a maintenance issue. Even during initial development,
     there should typically be a large amount of regression testing, especially if an iterative and incremental
     development cycle is used. Thus, ignoring the automation of regression testing is often a case of being penny
     wise and pound foolish.
 Related Problems: TTS-SoS-5 SoS Testing Not Properly Scheduled

2.1.1.6 GEN-TPS-6 Testing is Postponed
 Description: Testing is postponed until late in the development schedule.
 Potential Symptoms:
 • Testing is scheduled to be performed late in the development cycle on the project master
    schedule.
 • Little or no unit or integration testing:
        is planned
        is being performed during the early and middle stages of the development cycle
 Potential Consequences:
 • There is insufficient time left in the schedule to correct any major defects found. 13
 • It is difficult to show the required degree of test coverage.
 • Because so much of the system has been integrated before the beginning of testing, it is
    very difficult to find and localize defects that remain hidden within the internals of the
    system.
 Potential Causes:
 • The project is using a strictly-interpreted traditional sequential Waterfall development
    cycle.
 • Management was not able to staff the testing team early during the development cycle.
 • Management was primarily interested in system testing and did not recognize the need for
    lower-level (e.g., unit and integration) testing.
 Recommendations:
 • Prepare:
       Plan and schedule testing to be performed iteratively, incrementally, and in a parallel
       manner (i.e., agile) starting early during development.
       Provide training in incremental iterative testing.
       Incorporate iterative and incremental testing into the project’s system/software
       engineering process.
 • Enable:
       Provide adequate testing resources (staffing, tools, budget, and schedule) early during
       development.
 • Perform:
       Perform testing in an iterative, incremental, and parallel manner starting early during the
       development cycle.


13
     An interesting example of this is the Hubble telescope. Testing of the mirror’s focusing was postponed until after
     launch, resulting in an incredibly expensive repair mission.
    •     Verify:
             Verify in an ongoing manner (or at the very least during major project milestones) that
             testing is being performed iteratively, incrementally, and in parallel with design,
             implementation, and integration.
             Use testing metrics to verify status and ongoing progress.
     Related Problems: GEN-PRO-1 Testing and Engineering Process not Integrated

2.1.2 Stakeholder Involvement and Commitment Problems
The following testing problems are related to stakeholder involvement in and commitment to the
testing effort:
•        GEN-SIC-1 Wrong Testing Mindset
•        GEN-SIC-2 Unrealistic Testing Expectations / False Sense of Security
•        GEN-SIC-3 Lack of Stakeholder Commitment

2.1.2.1 GEN-SIC-1 Wrong Testing Mindset
    Description: Some of the testers and other testing stakeholders have the wrong testing mindset.
    Potential Symptoms:
     • Some testers and other testing stakeholders begin testing assuming that the system/software
        works.
     • Testers believe that their job is to verify or “prove” that the system/software works. 14

     • Testing is used to demonstrate that the system/software works properly rather than to
        determine where and how it fails.
     • Only normal (“sunny day”, “happy path”, or “golden path”) behavior is being tested.
     • There is little or no testing of:
            exceptional or fault/failure-tolerant (“rainy day”) behavior
            input data (e.g., range testing to identify incorrect handling of invalid input values)
     • Test inputs only include middle-of-the-road values rather than boundary values and corner
        cases.
    Potential Consequences:
    • There is a high probability that:
           the delivered system or software will contain a significant number of residual defects,
           especially related to abnormal behavior (e.g., exceptional use case paths)
           these defects will unacceptably reduce its reliability and robustness (e.g., error, fault,
           and failure tolerance)
    • Customer representatives, managers, and developers have a false sense of security that the
       system functions properly.


14
        Using testing to “prove” that their software works is most likely to become a problem when developers test their
        own software (e.g., with unit testing and with small cross-functional or agile teams).
 Potential Causes:
 • Testers were taught or explicitly told that their job is to verify or “prove” that the
    system/software works.
 • Developers are testing their own software 15 so that there is a “conflict of interest” (i.e.,
    the goal of building software that works conflicts with the goal of showing that the
    software does not work). This is especially a problem with small, cross-functional
    development organizations/teams that “cannot afford” to have separate testers (i.e.,
    professional testers who specialize in testing).
 • There was insufficient schedule allocated for testing so that there is only sufficient time to
    test the normal behavior (e.g., use case paths).
 • The organizational culture is very success oriented so that looking “too hard” for problems
    is (implicitly) discouraged.
 • Management gave the testers the strong impression that they do not want to hear any “bad”
    news (i.e., that there are any significant defects being found in the system).
 Recommendations:
 • Prepare:
        Explicitly state in the project test plan that the primary goal of testing is to:
            find defects by causing system faults and failures rather than to demonstrate that
            there are no defects
            break the system rather than to prove that it works
 • Enable:
       Provide test training that emphasizes uncovering defects by causing faults or failures.
       Provide sufficient time in the schedule for testing beyond the basic success paths.
       Hire new testers who exhibit a strong “destructive” mindset to testing.
 • Perform:
       In addition to test cases that verify all normal behavior, emphasize looking for defects
       where they are most likely to hide (e.g., boundary values, corner cases, and input
        type/range verification). 16
        Incentivize testers based more on the number of significant defects they uncover than
        merely on the number of requirements “verified” or test cases run. 17
       Foster a healthy competition between developers (who seek to avoid inserting defects)
       and testers (who seek to find those defects).
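A minimal sketch of what such boundary-value and invalid-input (“rainy day”) tests might look like; the `validate_age` function and its 0–130 range are hypothetical examples, not part of this report:

```python
def validate_age(age):
    """Hypothetical system function that accepts integer ages from 0 to 130."""
    if not isinstance(age, int) or isinstance(age, bool):
        raise TypeError("age must be an integer")
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# Boundary values: just inside and exactly on each limit.
for good in (0, 1, 129, 130):
    assert validate_age(good) == good

# Rainy-day cases: these tests pass only when the fault is provoked.
for bad in (-1, 131, 1000):
    try:
        validate_age(bad)
        raise AssertionError(f"{bad} should have been rejected")
    except ValueError:
        pass

# Type-range cases: wrong input types must also be rejected.
for wrong_type in ("42", 3.5, None, True):
    try:
        validate_age(wrong_type)
        raise AssertionError(f"{wrong_type!r} should have been rejected")
    except TypeError:
        pass
```

Note that the rainy-day cases outnumber the sunny-day cases, reflecting the fact that there are typically many more ways for a system to fail than to work.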
 • Verify:
       Verify that the testers exhibit a testing mindset.


15
     Developers typically do their own unit level (i.e., lowest level) testing. With small, cross functional (e.g., agile)
     teams, it is becoming more common for developers to also do integration and subsystem testing.
16
     Whereas tests that verify nominal behavior are essential, testers must keep in mind that there are typically many
     more ways for the system/software under test to fail than to work properly. Also, nominal tests must remain part
     of the regression test suite even after all known defects are fixed because changes could introduce new defects
     that cause nominal behavior to fail.
17
     Take care to avoid incentivizing developers to insert defects into their own software so that they can then find
     them during testing.
 Related Problems: GEN-MGMT-2 Inappropriate External Pressures, GEN-COM-4 Inadequate
 Communication Concerning Testing, TTS-UNT-3 Unit Testing Considered Unimportant

2.1.2.2 GEN-SIC-2 Unrealistic Testing Expectations / False Sense of
        Security
 Description: Testers and other testing stakeholders have unrealistic testing expectations that
 generate a false sense of security.
 Potential Symptoms:
 • Testing stakeholders (e.g., managers and customer representatives) and some testers falsely
    believe that:
         Testing detects all (or even the majority of) defects. 18
        Testing proves that there are no remaining defects and that the system therefore works
        as intended.
        Testing can be, for all practical purposes, exhaustive.
        Testing can be relied on for all verification. (Note that some requirements are better
        verified via analysis, demonstration, certification, and inspection.)
         Testing (if it is automated) will guarantee the quality of the tests and reduce the testing
         effort. 19
 • Managers and other testing stakeholders may not understand that:
        Test automation requires specialized expertise and needs to be budgeted for the effort
        required to develop, verify, and maintain the automated tests.
        A passed test could result from a weak/incorrect test rather than a lack of defects.
        A truly successful/useful test is one that finds one or more defects, whereas a passed test
        only shows that the system worked in that single specific instance.
 Potential Consequences:
 • Testers and other testing stakeholders have a false sense of security that the system or
    software will work properly on delivery and deployment.
 • Non-testing forms of verification (e.g., analysis, demonstration, inspection, and simulation)
    are not given adequate emphasis.
 Potential Causes:
 • Testing stakeholders and testers were not exposed to research results that document the
    relatively large percentage of residual defects that typically remain after testing.
 • Testers and testing stakeholders have not been trained in verification approaches (e.g.,
    analysis, demonstration, inspection) other than testing and their relative pros and cons.
 • Project testing metrics do not include estimates of residual defects.


18
     Testing typically finds less than half of all latent defects and is not the most efficient way of detecting many
     defects.
19
     This depends on the development cycle and the volatility of the system’s requirements, architecture, design, and
     implementation.
 Recommendations:
 • Prepare:
       Collect information on the limitations of testing.
       Collect information on when and how to augment testing with other types of
       verification.
 • Enable:
       Provide basic training in verification methods including their associated strengths and
       limitations.
 • Perform:
       Train and mentor managers, customer representatives, testers, and other test
       stakeholders concerning the limits of testing:
           Testing will not detect all (or even a majority of) defects.
           No testing is truly exhaustive.
           Testing cannot prove (or demonstrate) that the system works under all combinations
           of preconditions and trigger events.
           A passed test could result from a weak test rather than a lack of defects.
           A truly successful test is one that finds one or more defects.
       Do not rely on testing for the verification of all requirements, especially architecturally-
       significant quality requirements.
        Collect, analyze, and report testing metrics that estimate the number of defects
        remaining after testing.
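One well-known way to produce such an estimate is capture-recapture (the Lincoln-Petersen estimator), applied to the defects found by two independent test or inspection passes. The sketch below illustrates the arithmetic; the function name and counts are illustrative assumptions:

```python
def estimate_remaining_defects(found_by_a, found_by_b, found_by_both):
    """Lincoln-Petersen capture-recapture estimate of remaining defects.

    Two independent test/inspection passes find found_by_a and
    found_by_b defects, of which found_by_both were found by both.
    """
    if found_by_both == 0:
        raise ValueError("no overlap between passes: estimate is unbounded")
    estimated_total = found_by_a * found_by_b / found_by_both
    found_so_far = found_by_a + found_by_b - found_by_both
    return estimated_total - found_so_far

# Illustrative only: two passes find 30 and 25 defects, 15 in common.
print(estimate_remaining_defects(30, 25, 15))  # 10.0 estimated remaining
```

Even a rough estimate like this counters the false sense of security that "all tests passed" tends to create.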
 • Verify:
       Verify that testing stakeholders understand the limitations of testing.
       Verify that testing is not the only type of verification being used.
       Verify that the number of defects remaining is estimated and reported.
 Related Problems: GEN-MGMT-2 Inappropriate External Pressures, GEN-COM-4 Inadequate
 Communication Concerning Testing, TTS-REG-2 Regression Testing not Performed

2.1.2.3 GEN-SIC-3 Lack of Stakeholder Commitment
 Description: There is a lack of adequate stakeholder commitment to the testing effort.
 Potential Symptoms:
 • Stakeholders (especially customers and management) are not providing sufficient resources
    (e.g., people, schedule, tools, funding) for the testing effort.
 • Stakeholders are unavailable for the review of test assets such as test plans and important
    test cases.
 • Stakeholders (e.g., customer representatives) point out defects in test assets after they have
    been reviewed.
 • Stakeholders do not support testing when resources must be cut (e.g., due to schedule
    slippages and budget overruns).


 Potential Consequences:
 • Testing is less effective due to inadequate resources.
 • Stakeholders (e.g., customer representatives) reject reviewed test assets.
 • The testing effort loses needed resources when the schedule slips or the budget overruns.
 Potential Causes:
 • Stakeholders did not understand the scope, complexity, and importance of testing.
 • Stakeholders were not provided adequate estimates of the resources needed to properly
    perform testing.
 • Stakeholders were extremely busy with other duties.
 • The overall project schedule and budget estimates were inadequate, thereby forcing cuts in
    testing.
 Recommendations:
 • Prepare:
       Convey the scope, complexity, and importance of testing to the testing stakeholders.
 • Enable:
       Provide stakeholders with adequate estimates of the resources needed to properly
       perform testing.
 • Perform:
       Officially request sufficient testing resources from the testing stakeholders.
        Obtain commitments of support from authoritative stakeholders at the beginning of the
        project.
 • Verify:
       Verify that the testing stakeholders are providing sufficient resources (e.g., people,
       schedule, tools, funding) for the testing effort.
 Related Problems: GEN-MGMT-1 Inadequate Test Resources, GEN-MGMT-5 Test Lessons
 Learned Ignored, GEN-MGMT-2 Inappropriate External Pressures, GEN-COM-4 Inadequate
 Communication Concerning Testing, TTS-SoS-4 Inadequate Funding for SoS Testing, TTS-
 SoS-6 Inadequate Test Support from Individual Systems

2.1.3 Management-related Testing Problems
The following testing problems are related to the management of the testing effort:
• GEN-MGMT-1 Inadequate Test Resources
• GEN-MGMT-2 Inappropriate External Pressures
• GEN-MGMT-3 Inadequate Test-related Risk Management
• GEN-MGMT-4 Inadequate Test Metrics
• GEN-MGMT-5 Test Lessons Learned Ignored



2.1.3.1 GEN-MGMT-1 Inadequate Test Resources
 Description: Management allocates an inadequate amount of resources to testing.
 Potential Symptoms:
 • The test planning documents and schedules fail to provide for adequate test resources such
    as:
         sufficient time in the schedule (including adequate schedule reserves)
         trained and experienced testers and reviewers
         funding
         test tools and environments (e.g., integration test beds and repositories of test data)
 Potential Consequences:
 • Adequate test resources will likely not be provided to perform sufficient testing within
    schedule and budget limitations.
 • An unnecessarily large number of defects may make it through testing and into the
    deployed system.
 Potential Causes:
 • Testing stakeholders may not understand the scope, complexity, and importance of testing,
    and therefore its impact on the resources needed to properly perform testing.
 • Estimates of needed testing resources may not be based on any evidence-based cost/effort
    models.
 • Resource estimates may be informally made by management without input from the testing
    organization, especially those testers who will be actually performing the testing tasks.
 • Resource estimates may be based on available resources rather than resource needs.
 • Management may believe that the testers have padded their estimates and therefore cut the
    testers’ estimates.
 • Testers and testing stakeholders may be overly optimistic, so that their informal
    estimates of needed resources are based on best-case scenarios rather than most likely or
    worst-case scenarios.
 Recommendations:
 • Prepare:
       Ensure that testing stakeholders understand the scope, complexity, and importance of
       testing, and therefore its impact on the resources needed to properly perform testing.
 • Enable:
       Begin test planning at project inception (e.g., at contract award or during proposal
       development).
       Train testers in the use of evidence-based cost/effort models to estimate the amount of
       testing resources needed.
 • Perform:
        Use evidence-based cost/effort models to estimate the needed testing resources.
       Officially request sufficient testing resources from the testing stakeholders.


        Ensure that the test planning documents, schedules, and project work breakdown
        structure (WBS) provide for adequate levels of these test resources.
         Obtain commitments of support from authoritative stakeholders at the beginning of the
         project.
 •   Verify:
        Verify that the testing stakeholders are providing sufficient resources (e.g., people,
        schedule, tools, funding) for the testing effort.
  Related Problems: GEN-SIC-3 Lack of Stakeholder Commitment, GEN-TOP-3 Inadequate
  Testing Expertise

2.1.3.2 GEN-MGMT-2 Inappropriate External Pressures
 Description: Testers are subject to inappropriate external pressures, primarily from managers.
 Potential Symptoms:
 • Managers (or possibly customers or developers) are dictating to the testers what constitutes
    a bug or a defect worth reporting.
 • Managerial pressure exists to:
         inappropriately cut corners (e.g., only perform “sunny day” testing) in order to meet
         schedule deadlines
        inappropriately lower the severity and priority of reported defects
        not find defects (e.g., until after delivery because the project is so far behind schedule
        that there is no time to fix any defects found)
 Potential Consequences:
 • If the testers yield to this pressure, then the test metrics do not accurately reflect either the
    true state of the system / software or the status of the testing process.
 • The delivered system or software contains an unacceptably large number of residual
    defects.
 Potential Causes:
 • The project is significantly behind schedule and/or over budget.
 • There is insufficient time until the delivery/release date to fix a significant number of
    defects that were found via testing.
 • The project is in danger of being cancelled due to lack of performance.
 • Management is highly risk averse and therefore did not want to officially label any testing
    risk as a risk.
 Recommendations:
 • Prepare:
      Establish criteria for determining the priority and severity of reported defects.
 • Enable:
      Ensure that trained testers determine what constitutes a bug or a defect worth reporting.
       Place the manager of the testing organization at the same or a higher level than the
       project manager in the organizational hierarchy (i.e., have the test manager report
       independently of the project manager). 20
 •     Perform:
           Support testers when they oppose any inappropriate managerial pressure that would
           have them violate their professional ethics.
           Customer representatives must insist on proper testing.
 •     Verify:
           Verify that the testers are the ones who decide what constitutes a reportable defect.
           Verify that the testing manager reports independently of the project manager.
  Related Problems: GEN-SIC-1 Wrong Testing Mindset, GEN-TOP-1 Lack of Independence

2.1.3.3 GEN-MGMT-3 Inadequate Test-related Risk Management
  Description: There are too few test-related risks identified in the project’s official risk
  repository. 21
  Potential Symptoms:
  • Managers are highly risk averse, treating risk as if it were a “four letter word”.
 • Because adding risks to the risk repository is looked on as a symptom of management
    failure, risks (including testing risks) are mislabeled as issues or concerns so that they need
    not be reported as an official risk.
  • There are few if any test-related risks identified in the project’s official risk repository.
 • The number of test-related risks is unrealistically low.
 • The identified test-related risks have inappropriately low probabilities, low harm severities,
    and low priorities.
  • The identified test risks have:
         no associated risk mitigation approaches
         no one assigned as being responsible for the risk
  • The test risks are never updated (e.g., with additions or modifications) over the course of
     the project.
 • Testing risks are not addressed in either the test plan(s) or the risk management plan.
 Potential Consequences:
 • Testing risks are not reported.
 • Management and acquirer representatives are unaware of their existence.
 • Testing risks are not being managed.
 • The management of testing risks is not given sufficiently high priority.
 Potential Causes:
  • Management is highly risk averse.


20
     Note that this will only help if the test manager is not below the manager applying improper pressure.
21
     These potential testing problems can be viewed as generic testing risks.
 •     Managers strongly communicate their preference that only a small number of the most
       critical risks be entered into the project risk repository.
 •     The people responsible for risk management and managing the risk repository have never
       been trained or exposed to the many potential test-related risks (e.g., those associated with
       the commonly occurring testing problems addressed in this document).
 •     The risk management process strongly emphasizes system-specific or system-level (as
       opposed to software-level) risks and tends to not address any development activity risks
       (such as those associated with testing).
 •     It is early in the development cycle before sufficient testing has begun.
 •     There have been few if any evaluations of the testing process.
 •     There has been little if any oversight of the testing process.
 Recommendations:
 • Prepare:
       Determine management’s degree of risk aversion and attitude regarding inclusion of
       risks in the project risk repository.
 • Enable:
       Ensure that the people responsible for risk management and managing the risk
       repository are aware of the many potential test-related risks.
 • Perform:
       Identify test-related risks and incorporate them into the official project risk repository.
       Provide test-related risks with realistic probabilities, harm severities, and priorities.
 • Verify:
       Verify that the risk repository contains an appropriate number of testing risks.
       Verify that there is sufficient management and quality assurance oversight and
       evaluation of the testing process.
 Related Problems: GEN-SIC-2 Unrealistic Testing Expectations / False Sense of Security

2.1.3.4 GEN-MGMT-4 Inadequate Test Metrics
 Description: Insufficient test metrics are being produced, analyzed, and reported.
 Potential Symptoms:
 • Insufficient or no test metrics are being produced, analyzed, and reported.
  • The primary test metrics (e.g., number of tests 22, number of tests needed to meet adequate
     or required test coverage levels, number of tests passed/failed, number of defects found)
     show neither the productivity of the testers nor their effectiveness at finding defects (e.g.,
     defects found per test or per day).

 22
     Note that the number of tests metric does not indicate the effort or complexity of identifying, analyzing, and
     fixing defects.
  •     The number of latent undiscovered defects remaining is not being estimated (e.g., using
        COQUALMO 23).
 •     Management measures tester productivity strictly in terms of defects found per unit time,
       ignoring the importance or severity of the defects found.
 Potential Consequences:
 • Managers, testers, and other stakeholders in testing do not accurately know the quality of
    testing, the importance of the defects being found, or the number of residual defects in the
    delivered system or software.
  • Managers do not know the productivity of the testers or their effectiveness at finding
     important defects, thereby making it difficult to improve the testing process.
 • Testers concentrate on finding lots of (unimportant) defects rather than finding critical
    defects (e.g., those with mission-critical, safety-critical, or security-critical ramifications).
 • Customer representatives, managers, and developers have a false sense of security that the
    system functions properly.
 Potential Causes:
 • Project management (including the managers/leaders of test organizations/teams) are not
    familiar with the different types of testing metrics (e.g., quality, status, and productivity)
    that could be useful.
 • Metrics collection, analysis, and reporting is at such a high level that individual disciplines
    (such as testing) are rarely assigned more than one or two highly-generic metrics (e.g.,
    “Inadequate testing is a risk”).
 • Project management (and testers) are only aware of backward looking metrics (e.g., defects
    found and fixed) as opposed to forward looking metrics (e.g., residual defects remaining to
    be found).
 Recommendations:
 • Prepare:
       Provide testers and testing stakeholders with basic training in metrics with an emphasis
       on test metrics.
 • Enable:
       Incorporate a robust metrics program in the test plan that covers leading indicators.
       Emphasize the finding of important defects.
 • Perform:
       Consider using some of the following representative examples of useful testing metrics:
           number of defects found per test (test effectiveness metric)
           number of defects found per tester day (tester productivity metric)
           number of defects that slip through each verification milestone / inch pebble (e.g.,


23
     COQUALMO (COnstructiveQUALity Model is an estimation model that can be used for predicting the number
     of residual defects/KSLOC (thousands of source lines of code) or defects/FP (Function Point) in a software
     product.
            reviews, inspections, tests)24
               estimated number of latent undiscovered defects remaining in the delivered system
               (e.g., estimated using COQUALMO)
          Regularly collect, analyze, and report an appropriate set of testing metrics.
 •     Verify:
          Important: Evaluate and maintain visibility into the as-performed testing process to
          ensure that it does not become metrics-driven.
          Watch out for signs that testers worry more about looking good (e.g., by concentrating
          on only the defects that are easy to find) than on finding the most important defects.
          Verify that sufficient testing metrics are collected, analyzed, and reported.
 Related Problems: None

2.1.3.5 GEN-MGMT-5 Test Lessons Learned Ignored
 Description: Lessons that are learned regarding testing are not placed into practice.
 Potential Symptoms:
 • Management, the test teams, or customer representatives ignore lessons learned during
    previous projects or during the testing of previous increments of the system under test.
 Potential Consequences:
 • The test process is not being continually improved.
 • The same problems continue to occur.
 • Customer representatives, managers, and developers have a false sense of security that the
    system functions properly.
 Potential Causes:
 • Lessons learned were not documented.
 • The capturing of lessons learned was postponed until after the project was over, when the
    people who had learned the lessons were no longer available, having scattered to new
    projects.
 • The only usage of lessons learned is informal and solely based on the experience that the
    individual developers and testers bring to new projects.
 • Lessons learned from previous projects are not reviewed before starting new projects.
 Recommendations:
 • Prepare:
      Make the documentation of lessons learned an explicit part of the testing process.
      Review previous lessons learned as an initial step in determining the testing process.
 • Enable:
      Capture (and implement) lessons learned as they are learned.

24
     For example, what are the percentages of defects that manage to slip by architecture reviews, design reviews,
     implementation inspections, unit testing, integration testing, and system testing without being detected?
          Do not wait until a project postmortem, when project staff members’ memories are
          fading and they have moved (or are moving) on to their next projects.
 •   Perform:
          Incorporate previously learned testing lessons into the current testing process and test
          plans.
 •   Verify:
          Verify that previously learned testing lessons have been incorporated into the current
          testing process and test plans.
          Verify that testing lessons learned are captured (and implemented) as they are learned.
 Related Problems: GEN-SIC-3 Lack of Stakeholder Commitment

2.1.4 Test Organization and Professionalism Problems
The following testing problems are related to the test organization and the professionalism of the
testers:
• GEN-TOP-1 Lack of Independence
• GEN-TOP-2 Unclear Testing Responsibilities
• GEN-TOP-3 Inadequate Testing Expertise

2.1.4.1 GEN-TOP-1 Lack of Independence
 Description: The test organization or team lacks adequate independence to properly perform
 its testing tasks.
 Potential Symptoms:
 • The manager of the test organization reports to the development manager.
 • The lead of the project test team reports to the project manager.
 • The test organization manager or test team leader does not have sufficient authority to raise
    and manage testing-related risks.
 Potential Consequences:
 • A lack of sufficient independence forces the test organization or team to select an
    inappropriate test process or tool.
 • Members of the test organization or team are intimidated into withholding objective and
    timely information from the testing stakeholders.
 • The test organization or team has insufficient budget and schedule to be effective.
 • The project manager inappropriately overrules or pressures the testers to violate their
    principles.
 Potential Causes:
 • Management does not see the value or need for independent reporting.
 • Management does not see the similarity between quality assurance and testing with regard
    to independence.


 Recommendations:
 • Prepare:
        Determine reporting structures.
        Identify potential independence problems.
 • Enable:
       Clarify to testing stakeholders (especially project management) the value of independent
       reporting for the test organization manager and project test team leader.
 • Perform:
       Ensure that the test organization or team has:
           Technical independence so that they can select the most appropriate test process and
           tools for the job
           Managerial independence so that they can provide objective and timely information
           about the test program and results without fear of intimidation due to business
           considerations or project-internal politics
           Financial independence so that their budget (and schedule) is sufficient to enable
           them to be effective and efficient
       Have the test organization manager report at the same or higher level as the
       development organization manager.
       Have the project test team leader report independently of the project manager to the test
       organization manager or equivalent (e.g., quality assurance manager).
 • Verify:
       Verify that the test organization manager reports at the same or higher level as the
       development organization manager.
       Verify that project test team leader report independently of the project manager to the
       test organization manager or equivalent (e.g., quality assurance manager).
 Related Problems: GEN-MGMT-2 Inappropriate External Pressures

2.1.4.2 GEN-TOP-2 Unclear Testing Responsibilities
 Description: The testing responsibilities are unclear.
 Potential Symptoms:
 • The test planning documents do not adequately address testing responsibilities in terms of
    which organizations, teams, and people:
        will perform which types of testing on what [types of] components
        are responsible for procuring, building, configuring, and maintaining the test
        environments
        are the ultimate decision makers regarding testing risks, test completion criteria, test
        completion, and the status/priority of defects
 Potential Consequences:
 • Certain tests are not performed, while other tests are performed redundantly by multiple
    organizations or people.

 •   Incomplete testing enables some defects to make it through testing and into the deployed
     system.
 •   Redundant testing wastes test resources and causes testing deadlines to slip.
 Potential Causes:
 • The test plan template did not clearly address responsibilities.
 • The project team is very small with everyone wearing multiple hats and therefore
    performing testing on an as available / as needed basis.
 Recommendations:
 • Prepare:
        Obtain documents describing current testing responsibilities.
        Identify potential testing responsibility problems (e.g., missing or vague responsibilities).
 • Enable:
       Obtain organizational agreement as to the testing responsibilities.
 • Perform:
       Clearly and completely document the responsibilities for testing in the test plans as well
       as the charters of the teams who will be performing the tests.
       Managers should clearly communicate these responsibilities to the relevant
       organizations and people.
 • Verify:
       Verify that testing responsibilities are clearly and completely documented in the test
       plans as well as the charters of the teams who will be performing the tests.
 Related Problems: GEN-TPS-2 Incomplete Test Planning, GEN-PRO-7 Too Immature for
 Testing, GEN-COM-2 Inadequate Test Documentation, TTS-SoS-3 Unclear SoS Testing
 Responsibilities

2.1.4.3 GEN-TOP-3 Inadequate Testing Expertise
 Description: Too many people have inadequate testing expertise, experience, and training.
 Potential Symptoms:
 • Testers and/or those who oversee them (e.g., managers and customer representatives) have
    inadequate testing expertise, experience, or training.
 • Developers who are not professional testers have been tasked to perform testing.
 • Little or no classroom or on-the-job training in testing has taken place.
 • Testing is ad hoc without any proper process.
 • Industry best practices are not followed.
 Potential Consequences:
 • Testing is not effective in detecting defects, especially the less obvious ones.
 • There are unusually large numbers of false positive and false negative test results.
 • The productivity of the testers is needlessly low.


 •     There is a high probability that the system or software will be delivered late with an
       unacceptably large number of residual defects.
 •     During development, managers, developers, and customer representatives have a false sense
       of security that the system functions properly.25
 Potential Causes:
 • Management did not understand the scope and complexity of testing.
 • Management did not understand the required qualifications of a professional tester.
 • There was insufficient funding to hire fully qualified professional testers.
 • The project team is very small with everyone wearing multiple hats and therefore
    performing testing on an as available / as needed basis.
 • An agile development method is being followed that emphasizes cross-functional
    development teams.
 Recommendations:
 • Prepare:
        Provide proper test processes including procedures, standards, guidelines, and templates
        for on-the-job training.
       Ensure that the required qualifications of a professional tester are documented in the
       tester job description.
 • Enable:
       Convey the required qualifications of the different types of testers to those technically
       evaluating prospective testers.
       Provide appropriate amounts of test training (both classroom and on-the-job) for both
       testers and those overseeing testing.
        Ensure that the testers who will be automating testing have the necessary specialized
        expertise and training.26
       Obtain independent support for those overseeing testing.
 • Perform:
       Hire full time (i.e., professional) testers who have sufficient expertise and experience in
       testing.
       Use an independent test organization staffed with experienced trained testers for
       system/acceptance testing, whereby the head of this organization is at the same (or
       higher) level as the project manager.
 • Verify:
       Verify that those technically evaluating prospective testers understand the required
       qualifications of the different types of testers.
       Verify that the testers have adequate testing expertise, experience, and training.


25
     This false sense of security is likely to be replaced by a sense of panic when the system begins to frequently fail
     operational testing or real-world usage after deployment.
26
     Note that these recommendations apply, regardless of whether the project uses separate testing teams or
     cross-functional teams including testers.
 Related Problems: GEN-MGMT-1 Inadequate Test Resources

2.1.5 Test Process Problems
The following testing problems are related to the processes and techniques being used to perform
testing:
• GEN-PRO-1 Testing and Engineering Process not Integrated
• GEN-PRO-2 One-Size-Fits-All Testing
• GEN-PRO-3 Inadequate Test Prioritization
• GEN-PRO-4 Functionality Testing Overemphasized
• GEN-PRO-5 Black-box System Testing Overemphasized
• GEN-PRO-6 White-box Unit and Integration Testing Overemphasized
• GEN-PRO-7 Too Immature for Testing
• GEN-PRO-8 Inadequate Test Evaluations
• GEN-PRO-9 Inadequate Test Maintenance

2.1.5.1 GEN-PRO-1 Testing and Engineering Process Not Integrated
 Description: The testing process is not adequately integrated into the overall system/software
 engineering process.
 Potential Symptoms:
 • There is little or no discussion of testing in the system/software engineering documentation:
    System Engineering Master Plan (SEMP), Software Development Plan (SDP), Work
    Breakdown Structure (WBS), Project Master Schedule (PMS), and system/software
    development cycle (SDC).
 • All or most of the testing is being done as a completely independent activity performed by
    staff members who are not part of the project engineering team.
 • Testing is treated as a separate specialty-engineering activity with only limited interfaces
    with the primary engineering activities.
 • Testers are not included in the requirements teams, architecture teams, and any
    cross-functional engineering teams.
 Potential Consequences:
 • There is inadequate communication between testers and other system/software engineers
    (e.g., requirements engineers, architects, designers, and implementers).
 • Few people outside of testing understand the scope, complexity, and importance of testing.
 • Testers do not understand the work being performed by other engineers.
 • There are incompatibilities between outputs and associated inputs at the interfaces between
    testers and other engineers.
 • Testing is less effective and takes longer than necessary.
 Potential Causes:
 • Testers are not involved in the determination and documentation of the overall engineering
    process.
 •   The people determining and documenting the overall engineering process do not have
     significant testing expertise, training, or experience.
 Recommendations:
 • Prepare:
        Obtain the SEMP, SDP, WBS, and project master schedule.
 • Enable:
        Provide a top-level briefing/training in testing to the chief system engineer, system
        architect, and system/software process engineer.
 • Perform:
       Have test subject matter experts and project testers collaborate closely with the project
       chief engineer / technical lead and process engineer when they develop the engineering
       process descriptions and associated process documents.
       In addition to being in test plans such as the Test and Evaluation Master Plan (TEMP)
       or Software Test Plan (STP) as well as in other process documents, provide high-level
       overviews of testing in the SEMP(s) and SDP(s).
       Document how testing is integrated into the system/software development/life cycle,
       regardless of whether it is traditional waterfall, agile (iterative, incremental, and
       parallel), or anything in between.
           For example, document handover points in the development cycle when testing
           input and output work products are delivered from one project organization or group
           to another.
       Incorporate testing into the Project Master Schedule.
       Incorporate testing into the project’s work breakdown structure (WBS).
 • Verify:
       Verify that testing is incorporated into the project’s:
           system/software engineering process
           SEMP and SDP
           WBS
           PMS
           SDC
 Related Problems: GEN-COM-4 Inadequate Communication Concerning Testing

2.1.5.2 GEN-PRO-2 One-Size-Fits-All Testing
 Description: All testing is to be performed to the same level of rigor, regardless of its
 criticality.
 Potential Symptoms:
 • The test planning documents may contain only generic boilerplate rather than appropriate
    system-specific information.


 •   Mission-, safety-, and security-critical software may not be required to be tested more
     completely and rigorously than other less-critical software.
 •   Only general techniques suitable for testing functional requirements/behavior may be
     documented; for example, there is no description of the special types of testing needed for
     quality requirements (e.g., availability, capacity, performance, reliability, robustness, safety,
     security, and usability requirements).
 Potential Consequences:
 • Mission-, safety-, and security-critical software may not be adequately tested.
 • When there are insufficient resources to adequately test all of the software, some of these
    limited resources may be misapplied to lower-priority software instead of being
    concentrated on the testing of more critical capabilities.
 • Some defects may not be found, and an unnecessary number of these defects may make it
    through testing and into the deployed system.
 • The system may not be sufficiently safe or secure.
 Potential Causes:
 • Test plan templates and content/format standards may be incomplete and may not address
    the impact of mission/safety/security criticality on testing.
 • Test engineers may not be familiar with the impact of safety and security on testing (e.g.,
    the higher level of testing rigor required to achieve accreditation and certification).
 • Safety and security engineers may not have input into the test planning process.
 Recommendations:
 • Prepare:
        Provide training to those writing system/software development plans and
        system/software test plans concerning the need to include project-specific testing
        information, including its potential content.
       Tailor the templates for test plans and development methods to address the need for
       project/system-specific information.
 • Enable:
        Update (if needed) the templates for test plans and development methods to address the
        required type, completeness, and rigor of testing.
 • Perform:
       Address in the system/software test plans and system/software development plans:
            Differences in the types, completeness, rigor, etc. of testing as a function of
            mission/safety/security criticality.
           Specialty engineering testing methods and techniques for testing the quality
           requirements (e.g., penetration testing for security requirements).
       Test mission-, safety-, and security-critical software more completely and rigorously
       than other less-critical software.
 • Verify:
       Verify that the completeness, type, and rigor of testing:
            is addressed in the system/software development plans and system/software test
            plans
                 are a function of the criticality of the system/subsystem/software being tested
                 are sufficient based on the degree of criticality of the system/subsystem/software
                 being tested
 Related Problems: GEN-PRO-3 Inadequate Test Prioritization

2.1.5.3 GEN-PRO-3 Inadequate Test Prioritization
 Description: Testing is not being adequately prioritized.
 Potential Symptoms:
 • All types of testing may have the same priority.
 • All test cases for the system or one of its subsystems may have the same priority.
 • The most important tests of a given type may not be being performed first.
 • Testing may begin with the easy testing of “low-hanging fruit”.
 • Difficult testing or the testing of high risk functionality/components may be being
    postponed until late in the schedule.
 • Testing ignores the order of integration and delivery; for example, unit testing before
    integration testing before system testing, and the testing of the functionality of the current
    increment before the testing of future increments.27




 Potential Consequences:
 • Limited testing resources may be wasted or ineffectively used.
 • Some of the most critical defects (in terms of failure consequences) may not be discovered
    until after the system/software is delivered and placed into operation.
 • Specifically, defects with mission, safety, and security ramifications may not be found.
 Potential Causes:
 • The system/software test plans and testing parts of the system/software development plans
    do not address the prioritization of the testing.
 • Any prioritization of testing is not used to schedule testing.
 • Evaluations of the individual testers and test teams:
         are based [totally] on the number of tests performed per unit time
        ignore the importance of capabilities, subsystems, or defects found
 Recommendations:
 •     Prepare:
          Update the following documents to address the prioritization of testing:
              system/software test plans
              testing parts of the system/software development plans


27
     While the actual testing of future capabilities must wait until those capabilities are delivered to the testers, one
     can begin to develop black-box test cases based on requirements allocated to future builds (i.e., tests that are
     currently not needed and may never be needed if the associated requirements change or are deleted).
         Define the different types and levels/categories of criticality
 •   Enable:
         Perform a mission analysis to determine the mission-criticality of the different
         capabilities and subsystems
         Perform a safety (hazard) analysis to determine the safety-criticality of the different
         capabilities and subsystems
          Perform a security (threat) analysis to determine the security-criticality of the different
          capabilities and subsystems
 •   Perform:
         Work with the developers, management, and stakeholders to prioritize testing according
         to the:
              criticality (e.g., mission, safety, and security) of the system/subsystem/software
              being tested
              potential importance of the potential defects identified via test failure
              probability that the test is likely to elicit important failures
              potential level of risk incurred if the defects are not identified via test failure
              delivery schedules
              integration/dependency order
          Use the prioritization of testing to schedule testing so that the highest-priority tests are
          performed first.
         Collect test metrics based on the number and importance of the defects found
         Base the performance evaluations of the individual testers and test teams on the test
         effectiveness (e.g., the number and importance of defects found) rather than merely on
         the number of tests written and performed.
 •   Verify:
         Evaluate the system/software test plans and the testing parts of the system/software
         development plans to verify that they properly address test prioritization.
         Verify that mission, safety, and security analysis have been performed and the results
         are used to prioritize testing.
         Verify that testing is properly prioritized.
         Verify that testing is in fact being performed in accordance with the prioritization.
         Verify that testing metrics address test prioritization.
          Verify that performance evaluations are based on test effectiveness rather than merely
          on the number of tests written and performed.
 Related Problems: GEN-PRO-2 One-Size-Fits-All Testing
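One simple way to operationalize the criticality- and probability-based prioritization described above is a per-test risk score. The sketch below is illustrative only; the criticality weights, test names, and probabilities are hypothetical, not taken from this report:

```python
# Illustrative sketch only; criticality weights, probabilities, and test
# names are hypothetical.

# Relative weights for the criticality of the capability under test.
CRITICALITY_WEIGHT = {"mission": 3, "safety": 3, "security": 3, "routine": 1}

def prioritize(tests):
    """tests: (name, criticality, estimated failure probability) tuples.
    Returns the test names ordered highest risk first."""
    def risk(test):
        name, criticality, failure_probability = test
        return CRITICALITY_WEIGHT[criticality] * failure_probability
    return [name for name, _, _ in sorted(tests, key=risk, reverse=True)]

tests = [
    ("report-export",      "routine",  0.2),
    ("autopilot-failover", "safety",   0.4),
    ("payment-auth",       "security", 0.3),
    ("login-ui",           "routine",  0.5),
]
print(prioritize(tests))
# Highest risk first: autopilot-failover, payment-auth, login-ui, report-export
```

In practice the score would also fold in the remaining factors listed above, such as the level of risk incurred if defects go undetected, delivery schedules, and integration/dependency order.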

2.1.5.4 GEN-PRO-4 Functionality Testing Overemphasized
 Description: There is an overemphasis on testing functionality as opposed to quality
 characteristics, data, and interfaces.
 Potential Symptoms:


 •     The vast majority of testing may be concerned with verifying functional behavior.
 •     Little unit or system testing may be being performed to verify adequate levels of the quality
       characteristics (e.g., availability, reliability, robustness, safety, security, and usability).
 •     Inadequate levels of various quality characteristics and their attributes are only being
       recognized after the system has been delivered and placed into operation.
 Potential Consequences:
 • The system may not have adequate levels of important quality characteristics and thereby
    fail to meet all of its quality requirements.
 • Failures to meet data and interface requirements (e.g., due to a lack of verification of input
    data and message contents) may not be recognized until late during integration or after
    delivery.
 • Testers and developers may have a harder time localizing the defects that the system tests
    reveal.
 • The system or software may be delivered late and fail to meet an unacceptably large number
    of non-functional requirements.
 Potential Causes:
 • The test plans and process documents do not adequately address the testing of non-
    functional requirements.
 • There are no process requirements (e.g., in the development contract) mandating the
    specialized testing of non-functional requirements.
 • Managers, developers, and/or testers believe:
        Testing other types of requirements (i.e., data, interface, quality, and
        architecture/design/implementation/configuration constraints) is too hard.
        Testing the non-functional requirements will take too long.28
        The non-functional requirements are not as important as the functional requirements.
         Testing of the non-functional requirements will naturally occur as a byproduct of the
         testing of the functional requirements.29
 • The other types of requirements (especially quality requirements) are:
        poorly specified (e.g., “The system shall be secure.” or “The system shall be easy to
        use.”)
        not specified
        therefore not testable
 • Functional testing may be the only testing that is mandated by the development contract and
    therefore the testing of the non-functional requirements is out of scope or unimportant to the
    acquisition organization.
 Recommendations:


28
     Note that adequately testing quality requirements requires significantly more time to prepare for and perform
     than testing typical functional requirements.
29
     Note that this can be largely true for some of the non-functional requirements (e.g., interface requirements and
     performance requirements).
 •   Prepare:
         Adequately address the testing of non-functional requirements in the test plans and
         process documents.
         Include process requirements mandating the specialized testing of non-functional
         requirements in the contract.
 •   Enable:
          Ensure that managers, developers, and/or testers understand the importance of testing
          non-functional requirements as well as conformance to the architecture and design (e.g.,
          via white-box testing).
 •   Perform:
         Adequately perform the other types of testing.
 •   Verify:
          Verify that the managers, developers, and/or testers understand the importance of testing
          non-functional requirements and conformance to the architecture and design.
         Have quality engineers verify that the testers are testing the quality, data, and interface
         requirements as well as the architecture/design/implementation/configuration
         constraints.
         Review the test plans and process documents to ensure that they adequately address the
         testing of non-functional behavior.
         Measure, analyze, and report the types of non-functional defects and when they are
         being detected.
 Related Problems: None

2.1.5.5 GEN-PRO-5 Black-box System Testing Overemphasized
 Description: There is an overemphasis on black-box system testing for requirements
 conformance.
 Potential Symptoms:
 • The vast majority of testing is occurring at the system level for purposes of verifying
    conformance to requirements.
 • There is very little white-box unit and integration testing.
 • System testing is detecting many defects that could have been more easily identified during
    unit or integration testing.
 • Similar residual defects may also be causing faults and failures after the system has been
    delivered and placed into operation.
 Potential Consequences:
 • Defects that could have been found during unit or integration testing are harder to detect,
    localize, analyze, and fix.
 • System testing is unlikely to be completed on schedule.
 • It is harder to develop sufficient system-level tests to meet code coverage criteria.
 • The system or software may be delivered late with an unacceptably large number of

    residual defects that will only rarely be executed and thereby cause faults or failures.
 Potential Causes:
 • The test plans and process documents do not adequately address unit and integration testing.
 • There are no process requirements (e.g., in the development contract) mandating unit and
    integration testing.
 • The developers believe that black-box system testing is all that is necessary to detect the
    defects.
 • Developers believe that testing is entirely the responsibility of the independent test team,
    which is only planning on performing system-level testing.
 • The schedule does not contain adequate time for unit and integration testing. Note that this
    may really be an underemphasis of unit and integration testing rather than an overemphasis
    on system testing.
 • Independent testers rather than developers are performing the testing.
 Recommendations:
 • Prepare:
       Adequately address in the test plans, test process documents, and contract:
            white-box and gray-box testing
           unit and integration testing
 • Enable:
        Ensure that managers, developers, and/or testers understand the importance of these
        lower-level types of testing.
       Use a test plan template or content and format standard that addresses these lower-level
       types of testing.
 • Perform:
       Increase the amount and effectiveness of these lower-level types of testing.
 • Verify:
        Review the test plans and process documents to ensure that they adequately address
        these lower-level types of testing.
        Verify that the managers, developers, and/or testers understand the importance of
        these lower-level types of testing.
        Have quality engineers verify that the testers are actually performing these lower-level
        types of testing and at an appropriate percentage of the total tests.
        Measure the number of defects slipping past unit and integration testing.
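The final measurement recommendation can be made concrete with a simple defect-slippage calculation. This is a minimal sketch under stated assumptions, not part of the report: the defect records, phase names, and `slippage_rate` helper are all hypothetical.

```python
# Sketch: measure how many defects "slip" past unit and integration
# testing, i.e., are first detected at system test or in operation.
# Each record pairs a defect ID with the phase that first detected it.
defects = [
    ("D-101", "unit"), ("D-102", "unit"), ("D-103", "integration"),
    ("D-104", "system"), ("D-105", "system"), ("D-106", "operation"),
]

EARLY_PHASES = {"unit", "integration"}

def slippage_rate(records):
    """Fraction of defects first found after unit/integration testing."""
    slipped = sum(1 for _, phase in records if phase not in EARLY_PHASES)
    return slipped / len(records)

print(f"Defect slippage: {slippage_rate(defects):.0%}")  # prints: Defect slippage: 50%
```

A rising slippage rate over successive builds is one quantitative signal that the test effort is overweighted toward system-level testing.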
 Related Problems: GEN-PRO-6 White-box Unit and Integration Testing Overemphasized




Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate
Common Testing Problems – Pitfalls to Prevent and Mitigate

Más contenido relacionado

La actualidad más candente

Testing throughout the software life cycle & statistic techniques
Testing throughout the software life cycle & statistic techniquesTesting throughout the software life cycle & statistic techniques
Testing throughout the software life cycle & statistic techniquesNovika Damai Yanti
 
Exploratory Testing
Exploratory TestingExploratory Testing
Exploratory Testingnazeer pasha
 
ISTQB, ISEB Lecture Notes- 2
ISTQB, ISEB Lecture Notes- 2ISTQB, ISEB Lecture Notes- 2
ISTQB, ISEB Lecture Notes- 2onsoftwaretest
 
7. evalution of interactive system
7. evalution of interactive system7. evalution of interactive system
7. evalution of interactive systemKh Ravy
 
HCLT Whitepaper: Landmines of Software Testing Metrics
HCLT Whitepaper: Landmines of Software Testing MetricsHCLT Whitepaper: Landmines of Software Testing Metrics
HCLT Whitepaper: Landmines of Software Testing MetricsHCL Technologies
 
Usability Testing Fundamentals
Usability Testing FundamentalsUsability Testing Fundamentals
Usability Testing Fundamentalsdebcook
 
User Experiments in Human-Computer Interaction
User Experiments in Human-Computer InteractionUser Experiments in Human-Computer Interaction
User Experiments in Human-Computer InteractionDr. Arindam Dey
 
Peter Zimmerer - Passion For Testing, By Examples - EuroSTAR 2010
Peter Zimmerer - Passion For Testing, By Examples - EuroSTAR 2010Peter Zimmerer - Passion For Testing, By Examples - EuroSTAR 2010
Peter Zimmerer - Passion For Testing, By Examples - EuroSTAR 2010TEST Huddle
 
ISTQB Foundation - Chapter 2
ISTQB Foundation - Chapter 2ISTQB Foundation - Chapter 2
ISTQB Foundation - Chapter 2Chandukar
 
'Architecture Testing: Wrongly Ignored!' by Peter Zimmerer
'Architecture Testing: Wrongly Ignored!' by Peter Zimmerer'Architecture Testing: Wrongly Ignored!' by Peter Zimmerer
'Architecture Testing: Wrongly Ignored!' by Peter ZimmererTEST Huddle
 
Testing 1 - the Basics
Testing 1 - the BasicsTesting 1 - the Basics
Testing 1 - the BasicsArleneAndrews2
 
Evaluation techniques in HCI
Evaluation techniques in HCIEvaluation techniques in HCI
Evaluation techniques in HCIsawsan slii
 
Testing 3 test design techniques
Testing 3 test design techniquesTesting 3 test design techniques
Testing 3 test design techniquesMini Marsiah
 
Testing throughout the software life cycle (test types)
Testing throughout the software life cycle (test types)Testing throughout the software life cycle (test types)
Testing throughout the software life cycle (test types)tyas setyo
 
Testing 1 static techniques
Testing 1 static techniquesTesting 1 static techniques
Testing 1 static techniquesMini Marsiah
 

La actualidad más candente (20)

Testing throughout the software life cycle & statistic techniques
Testing throughout the software life cycle & statistic techniquesTesting throughout the software life cycle & statistic techniques
Testing throughout the software life cycle & statistic techniques
 
Exploratory Testing
Exploratory TestingExploratory Testing
Exploratory Testing
 
Testing Experience Magazine Vol.14 June 2011
Testing Experience Magazine Vol.14 June 2011Testing Experience Magazine Vol.14 June 2011
Testing Experience Magazine Vol.14 June 2011
 
ISTQB, ISEB Lecture Notes- 2
ISTQB, ISEB Lecture Notes- 2ISTQB, ISEB Lecture Notes- 2
ISTQB, ISEB Lecture Notes- 2
 
7. evalution of interactive system
7. evalution of interactive system7. evalution of interactive system
7. evalution of interactive system
 
HCLT Whitepaper: Landmines of Software Testing Metrics
HCLT Whitepaper: Landmines of Software Testing MetricsHCLT Whitepaper: Landmines of Software Testing Metrics
HCLT Whitepaper: Landmines of Software Testing Metrics
 
Fundamentals of Testing Section 1/6
Fundamentals of Testing   Section 1/6Fundamentals of Testing   Section 1/6
Fundamentals of Testing Section 1/6
 
Usability Testing Fundamentals
Usability Testing FundamentalsUsability Testing Fundamentals
Usability Testing Fundamentals
 
User Experiments in Human-Computer Interaction
User Experiments in Human-Computer InteractionUser Experiments in Human-Computer Interaction
User Experiments in Human-Computer Interaction
 
Peter Zimmerer - Passion For Testing, By Examples - EuroSTAR 2010
Peter Zimmerer - Passion For Testing, By Examples - EuroSTAR 2010Peter Zimmerer - Passion For Testing, By Examples - EuroSTAR 2010
Peter Zimmerer - Passion For Testing, By Examples - EuroSTAR 2010
 
ISTQB Foundation - Chapter 2
ISTQB Foundation - Chapter 2ISTQB Foundation - Chapter 2
ISTQB Foundation - Chapter 2
 
'Architecture Testing: Wrongly Ignored!' by Peter Zimmerer
'Architecture Testing: Wrongly Ignored!' by Peter Zimmerer'Architecture Testing: Wrongly Ignored!' by Peter Zimmerer
'Architecture Testing: Wrongly Ignored!' by Peter Zimmerer
 
CTFL Module 01
CTFL Module 01CTFL Module 01
CTFL Module 01
 
Software Testing
Software TestingSoftware Testing
Software Testing
 
Human Computer Interaction Evaluation
Human Computer Interaction EvaluationHuman Computer Interaction Evaluation
Human Computer Interaction Evaluation
 
Testing 1 - the Basics
Testing 1 - the BasicsTesting 1 - the Basics
Testing 1 - the Basics
 
Evaluation techniques in HCI
Evaluation techniques in HCIEvaluation techniques in HCI
Evaluation techniques in HCI
 
Testing 3 test design techniques
Testing 3 test design techniquesTesting 3 test design techniques
Testing 3 test design techniques
 
Testing throughout the software life cycle (test types)
Testing throughout the software life cycle (test types)Testing throughout the software life cycle (test types)
Testing throughout the software life cycle (test types)
 
Testing 1 static techniques
Testing 1 static techniquesTesting 1 static techniques
Testing 1 static techniques
 

Destacado

Communication at workplace
Communication  at workplaceCommunication  at workplace
Communication at workplaceRamiDardook
 
Final workshop ppt
Final workshop pptFinal workshop ppt
Final workshop pptmarcia415
 
Better Software Classic Testing Mistakes
Better Software Classic Testing MistakesBetter Software Classic Testing Mistakes
Better Software Classic Testing Mistakesnazeer pasha
 
How To Improve Communication Skill
How To Improve  Communication  SkillHow To Improve  Communication  Skill
How To Improve Communication SkillVijay Shinde
 
Importance of Software testing in SDLC and Agile
Importance of Software testing in SDLC and AgileImportance of Software testing in SDLC and Agile
Importance of Software testing in SDLC and AgileChandan Mishra
 
Soft Skills Presentation
Soft Skills PresentationSoft Skills Presentation
Soft Skills PresentationStephanie Rule
 

Destacado (8)

Communication at workplace
Communication  at workplaceCommunication  at workplace
Communication at workplace
 
Final workshop ppt
Final workshop pptFinal workshop ppt
Final workshop ppt
 
Better Software Classic Testing Mistakes
Better Software Classic Testing MistakesBetter Software Classic Testing Mistakes
Better Software Classic Testing Mistakes
 
Improve Communications in the Workplace
Improve Communications in the WorkplaceImprove Communications in the Workplace
Improve Communications in the Workplace
 
How To Improve Communication Skill
How To Improve  Communication  SkillHow To Improve  Communication  Skill
How To Improve Communication Skill
 
Importance of Software testing in SDLC and Agile
Importance of Software testing in SDLC and AgileImportance of Software testing in SDLC and Agile
Importance of Software testing in SDLC and Agile
 
Agile scrum roles
Agile scrum rolesAgile scrum roles
Agile scrum roles
 
Soft Skills Presentation
Soft Skills PresentationSoft Skills Presentation
Soft Skills Presentation
 

Similar a Common Testing Problems – Pitfalls to Prevent and Mitigate

Quality Assessment Handbook
Quality Assessment HandbookQuality Assessment Handbook
Quality Assessment HandbookMani Nutulapati
 
Risk based testing a new case study
Risk based testing   a new case studyRisk based testing   a new case study
Risk based testing a new case studyBassam Al-Khatib
 
ROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint PresentationROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint PresentationAadityaSharma884161
 
Ch cie gra - stress-test-diffusion-model-and-scoring-performance
Ch cie   gra - stress-test-diffusion-model-and-scoring-performanceCh cie   gra - stress-test-diffusion-model-and-scoring-performance
Ch cie gra - stress-test-diffusion-model-and-scoring-performanceC Louiza
 
Dynamic Stress Test diffusion model and scoring performance
Dynamic Stress Test diffusion model and scoring performanceDynamic Stress Test diffusion model and scoring performance
Dynamic Stress Test diffusion model and scoring performanceZiad Fares
 
Software testing
Software testingSoftware testing
Software testingthaneofife
 
Problem Solving Tools and Techniques by TQMI
Problem Solving Tools and Techniques by TQMIProblem Solving Tools and Techniques by TQMI
Problem Solving Tools and Techniques by TQMIAndrew Leong
 
Problem Solving Tools and Techniques by TQMI
Problem Solving Tools and Techniques by TQMIProblem Solving Tools and Techniques by TQMI
Problem Solving Tools and Techniques by TQMITQMI
 
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010TEST Huddle
 
Software Engineering (Testing Activities, Management, and Automation)
Software Engineering (Testing Activities, Management, and Automation)Software Engineering (Testing Activities, Management, and Automation)
Software Engineering (Testing Activities, Management, and Automation)ShudipPal
 

Similar a Common Testing Problems – Pitfalls to Prevent and Mitigate (20)

CTFL chapter 05
CTFL chapter 05CTFL chapter 05
CTFL chapter 05
 
Quality Assessment Handbook
Quality Assessment HandbookQuality Assessment Handbook
Quality Assessment Handbook
 
Pmt 05
Pmt 05Pmt 05
Pmt 05
 
Test Framework V0.1
Test Framework V0.1Test Framework V0.1
Test Framework V0.1
 
Test Management
Test ManagementTest Management
Test Management
 
Analytical Risk-based and Specification-based Testing - Bui Duy Tam
Analytical Risk-based and Specification-based Testing - Bui Duy TamAnalytical Risk-based and Specification-based Testing - Bui Duy Tam
Analytical Risk-based and Specification-based Testing - Bui Duy Tam
 
Risk based testing a new case study
Risk based testing   a new case studyRisk based testing   a new case study
Risk based testing a new case study
 
ROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint PresentationROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint Presentation
 
chapter 7.ppt
chapter 7.pptchapter 7.ppt
chapter 7.ppt
 
Ch cie gra - stress-test-diffusion-model-and-scoring-performance
Ch cie   gra - stress-test-diffusion-model-and-scoring-performanceCh cie   gra - stress-test-diffusion-model-and-scoring-performance
Ch cie gra - stress-test-diffusion-model-and-scoring-performance
 
Dynamic Stress Test diffusion model and scoring performance
Dynamic Stress Test diffusion model and scoring performanceDynamic Stress Test diffusion model and scoring performance
Dynamic Stress Test diffusion model and scoring performance
 
Software testing
Software testingSoftware testing
Software testing
 
Chapter 2 - Test Management
Chapter 2 - Test ManagementChapter 2 - Test Management
Chapter 2 - Test Management
 
Prvt file test
Prvt file testPrvt file test
Prvt file test
 
Problem Solving Tools and Techniques by TQMI
Problem Solving Tools and Techniques by TQMIProblem Solving Tools and Techniques by TQMI
Problem Solving Tools and Techniques by TQMI
 
Problem Solving Tools and Techniques by TQMI
Problem Solving Tools and Techniques by TQMIProblem Solving Tools and Techniques by TQMI
Problem Solving Tools and Techniques by TQMI
 
Ch14
Ch14Ch14
Ch14
 
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010
Paul Gerrard - Advancing Testing Using Axioms - EuroSTAR 2010
 
Prototyping
PrototypingPrototyping
Prototyping
 
Software Engineering (Testing Activities, Management, and Automation)
Software Engineering (Testing Activities, Management, and Automation)Software Engineering (Testing Activities, Management, and Automation)
Software Engineering (Testing Activities, Management, and Automation)
 

Más de Donald Firesmith

Testing Types and Paradigms - 2015-07-13 - V11
Testing Types and Paradigms - 2015-07-13 - V11Testing Types and Paradigms - 2015-07-13 - V11
Testing Types and Paradigms - 2015-07-13 - V11Donald Firesmith
 
2015-NextGenTesting-Testing-Types-Updated
2015-NextGenTesting-Testing-Types-Updated2015-NextGenTesting-Testing-Types-Updated
2015-NextGenTesting-Testing-Types-UpdatedDonald Firesmith
 
Common System and Software Testing Pitfalls Checklist - 2014
Common System and Software Testing Pitfalls Checklist - 2014Common System and Software Testing Pitfalls Checklist - 2014
Common System and Software Testing Pitfalls Checklist - 2014Donald Firesmith
 
Common testing pitfalls tsp-2014 - 2014-11-03 v10
Common testing pitfalls   tsp-2014 - 2014-11-03 v10Common testing pitfalls   tsp-2014 - 2014-11-03 v10
Common testing pitfalls tsp-2014 - 2014-11-03 v10Donald Firesmith
 
Method Framework for Engineering System Architectures - 2014
Method Framework for Engineering System Architectures - 2014Method Framework for Engineering System Architectures - 2014
Method Framework for Engineering System Architectures - 2014Donald Firesmith
 
Engineering Safety and Security-Related Requirements
Engineering Safety and Security-Related RequirementsEngineering Safety and Security-Related Requirements
Engineering Safety and Security-Related RequirementsDonald Firesmith
 
Common Test Problems Checklist
Common Test Problems ChecklistCommon Test Problems Checklist
Common Test Problems ChecklistDonald Firesmith
 

Más de Donald Firesmith (7)

Testing Types and Paradigms - 2015-07-13 - V11
Testing Types and Paradigms - 2015-07-13 - V11Testing Types and Paradigms - 2015-07-13 - V11
Testing Types and Paradigms - 2015-07-13 - V11
 
2015-NextGenTesting-Testing-Types-Updated
2015-NextGenTesting-Testing-Types-Updated2015-NextGenTesting-Testing-Types-Updated
2015-NextGenTesting-Testing-Types-Updated
 
Common System and Software Testing Pitfalls Checklist - 2014
Common System and Software Testing Pitfalls Checklist - 2014Common System and Software Testing Pitfalls Checklist - 2014
Common System and Software Testing Pitfalls Checklist - 2014
 
Common testing pitfalls tsp-2014 - 2014-11-03 v10
Common testing pitfalls   tsp-2014 - 2014-11-03 v10Common testing pitfalls   tsp-2014 - 2014-11-03 v10
Common testing pitfalls tsp-2014 - 2014-11-03 v10
 
Method Framework for Engineering System Architectures - 2014
Method Framework for Engineering System Architectures - 2014Method Framework for Engineering System Architectures - 2014
Method Framework for Engineering System Architectures - 2014
 
Engineering Safety and Security-Related Requirements
Engineering Safety and Security-Related RequirementsEngineering Safety and Security-Related Requirements
Engineering Safety and Security-Related Requirements
 
Common Test Problems Checklist
Common Test Problems ChecklistCommon Test Problems Checklist
Common Test Problems Checklist
 

Último

Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionDilum Bandara
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 

Último (20)

Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 

Common Testing Problems – Pitfalls to Prevent and Mitigate

  • 1. Common Testing Problems – Pitfalls to Prevent and Mitigate: Descriptions, Symptoms, Consequences, Causes, and Recommendations Donald G. Firesmith Page 1 of 111 © 2013 by Carnegie Mellon University
Common Testing Problems: Pitfalls to Prevent and Mitigate
25 January 2013
Descriptions, Symptoms, Consequences, Causes, and Recommendations

Table of Contents

1 Introduction .......... 5
  1.1 Usage .......... 5
  1.2 Problem Specifications .......... 6
  1.3 Problem Interpretation .......... 6
2 Testing Problems .......... 8
  2.1 General Testing Problems .......... 8
    2.1.1 Test Planning and Scheduling Problems .......... 8
    2.1.2 Stakeholder Involvement and Commitment Problems .......... 17
    2.1.3 Management-related Testing Problems .......... 21
    2.1.4 Test Organization and Professionalism Problems .......... 28
    2.1.5 Test Process Problems .......... 32
    2.1.6 Test Tools and Environments Problems .......... 45
    2.1.7 Test Communication Problems .......... 54
    2.1.8 Requirements-related Testing Problems .......... 60
  2.2 Test Type Specific Problems .......... 70
    2.2.1 Unit Testing Problems .......... 71
    2.2.2 Integration Testing Problems .......... 72
    2.2.3 Specialty Engineering Testing Problems .......... 74
    2.2.4 System Testing Problems .......... 82
    2.2.5 System of Systems (SoS) Testing Problems .......... 84
    2.2.6 Regression Testing Problems .......... 89
3 Conclusion .......... 97
  3.1 Testing Problems .......... 97
  3.2 Common Consequences .......... 97
  3.3 Common Solutions .......... 98
4 Potential Future Work .......... 100
5 Acknowledgements .......... 101
Abstract

This special report documents the different types of problems that commonly occur when testing software-reliant systems. These 77 problems are organized into 14 categories. Each problem is given a title, a description, a set of potential symptoms by which it can be recognized, a set of potential negative consequences that can result if the problem occurs, a set of potential causes for the problem, and recommendations for avoiding the problem or solving it should it occur.
1 Introduction

Many testing problems can occur during the development or maintenance of software-reliant systems and software applications. While no project is likely to be so poorly managed and executed as to experience the majority of these problems, most projects will suffer several of them. Similarly, while these testing problems do not guarantee failure, they definitely pose serious risks that need to be managed.

Based on over 30 years of experience developing systems and software as well as performing numerous independent technical assessments, this technical report documents 77 problems that have been observed to commonly occur during testing. These problems have been categorized as follows:

• General Testing Problems
  - Test Planning and Scheduling Problems
  - Stakeholder Involvement and Commitment Problems
  - Management-related Testing Problems
  - Test Organization and Professionalism Problems
  - Test Process Problems
  - Test Tools and Environments Problems
  - Test Communication Problems
  - Requirements-related Testing Problems
• Testing Type Specific Problems
  - Unit Testing Problems
  - Integration Testing Problems
  - Specialty Engineering Testing Problems
  - System Testing Problems
  - System of Systems (SoS) Problems
  - Regression Testing Problems

1.1 Usage

The information describing each of the commonly occurring testing problems can be used:
• To improve communication regarding commonly occurring testing problems
• As training materials for testers and the stakeholders of testing
• As checklists when:
  - Developing and reviewing an organizational or project testing process or strategy
  - Developing and reviewing test plans, the testing sections of system engineering management plans (SEMPs), and software development plans (SDPs)
  - Evaluating the testing-related parts of contractor proposals
  - Evaluating test plans and related documentation (quality control)
  - Evaluating the actual as-performed testing process during oversight [1] (quality assurance)
  - Identifying testing risks and appropriate risk mitigation approaches
• To categorize testing problems for metrics collection, analysis, and reporting
• As an aid to identify testing areas potentially needing improvement during project post mortems (post implementation reviews)

Although each of these testing problems has been observed on multiple projects, it is entirely possible that you may have testing problems not addressed by this document.

1.2 Problem Specifications

The following tables document each testing problem with the following information:
• Title – a short descriptive name of the problem
• Description – a brief definition of the problem
• Potential Symptoms (how you will know) – potential symptoms that indicate the possible existence of the problem
• Potential Consequences (why you should care) – potential negative consequences to expect if the problem is not avoided or solved [2]
• Potential Causes – potential root and proximate causes of the problem [3]
• Recommendations (what you should do) – recommended (prepare, enable, perform, and verify) actions to take to avoid or solve the problem [4]
• Related Problems – a list of links to other related testing problems

1.3 Problem Interpretation

The goal of testing is not to prove that something works, but rather to demonstrate that it does not. [5] A good tester assumes that there are always defects (an extremely safe assumption) and

[1] Not all testing problems have the same probability or harm severity. These problem specifications are not intended to be used as part of a quantitative scoring scheme based on the number of problems found. Instead, they are offered to support qualitative review and planning.
[2] Note that the occurrence of a potential consequence may be a symptom by which the problem is recognized.
[3] Causes are important because recommendations should be based on them. Also, recommendations that address root causes may be more important than those that address proximate causes, because recommendations addressing proximate causes may not combat the root cause and therefore may not prevent the problem under all circumstances.
[4] Some of the recommendations may no longer be practical after the problem rears its ugly head. It is usually much easier to avoid the problem or nip it in the bud than to fix it when the project is well along or near completion. For example, several possible ways exist to deal with inadequate time to complete testing, including (1) delay the test completion date and reschedule testing, or (2) keep the test completion date and (a) reduce the scope of delivered capabilities, (b) reduce the amount of testing, (c) add testers, and/or (d) perform more parallel testing (e.g., different types of testing simultaneously). Selection of the appropriate recommendations to follow therefore depends on the actual state of the project.
[5] Although tests that pass are often used as evidence that the system (or subsystem) under test meets its (derived and allocated) requirements, testing can never be exhaustive for even a simple system and therefore cannot "prove" that all requirements are met. However, system and operational testing can provide evidence that the system under test is "fit for purpose" and ready to be placed into operation. For example, certain types of testing
seeks to uncover them. Thus, a good test is one that causes the thing being tested to fail so that the underlying defect(s) can be found and fixed. [6]

Defects are not restricted to violations of specified (or unspecified) requirements. Some of the other important types of defects are:
• inconsistencies between the architecture, design, and implementation
• violations of coding standards
• lack of input checking (i.e., of unexpected data)
• the inclusion of safety or security vulnerabilities (e.g., the use of inherently unsafe language features or lack of verification of input data)

[5, continued] may provide evidence required for safety and security accreditation and certification. Nevertheless, a tester must take a "show it fails" rather than a "show it works" mindset to be effective.
[6] Note that testing cannot identify all defects because some defects (e.g., the failure to implement missing requirements) do not cause the system to fail in a manner detectable by testing.
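The "lack of input checking" defect type above lends itself to a small illustration. The function and its validity range below are hypothetical, assumed only for this sketch; the point is that a "show it fails" test probes the inputs that should be rejected rather than only confirming valid ones.

```python
def parse_age(text):
    """Parse a user-supplied age string, rejecting unexpected data.

    Without the checks below, this function would exhibit the
    'lack of input checking' defect: malformed or out-of-range
    input would propagate silently into later processing.
    """
    try:
        value = int(text)
    except (TypeError, ValueError):
        raise ValueError(f"age must be an integer, got {text!r}")
    if not 0 <= value <= 150:  # hypothetical validity range
        raise ValueError(f"age out of range: {value}")
    return value

# A defect-hunting test exercises the inputs that should be rejected.
for bad in ["-1", "abc", "999", None, ""]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: invalid input is rejected
    else:
        raise AssertionError(f"input checking missing for {bad!r}")

assert parse_age("42") == 42
```

A test suite built only from values like "42" would pass against a version of `parse_age` with no checks at all, which is exactly the false sense of security this report warns about.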
2 Testing Problems

The commonly occurring testing problems documented in this section are categorized as either general testing problems or testing type specific problems.

2.1 General Testing Problems

The following testing problems can occur regardless of the type of testing being performed:
• Test Planning and Scheduling Problems
• Stakeholder Involvement and Commitment Problems
• Management-related Testing Problems
• Test Organization and Professionalism Problems
• Test Process Problems
• Test Tools and Environments Problems
• Test Communication Problems
• Requirements-related Testing Problems

2.1.1 Test Planning and Scheduling Problems

The following testing problems are related to test planning and estimation:
• GEN-TPS-1 No Separate Test Plan
• GEN-TPS-2 Incomplete Test Planning
• GEN-TPS-3 Test Plans Ignored
• GEN-TPS-4 Test Case Documents rather than Test Plans
• GEN-TPS-5 Inadequate Test Schedule
• GEN-TPS-6 Testing is Postponed

2.1.1.1 GEN-TPS-1 No Separate Test Plan

Description: There are no separate testing-specific planning document(s).

Potential Symptoms:
• There is no separate Test and Evaluation Master Plan (TEMP) or System/Software Test Plan (STP).
• There are only incomplete high-level overviews of testing in the System Engineering Master Plan (SEMP) and System/Software Development Plan (SDP).

Potential Consequences:
• The test planning parts of these other documents are not written by testers.
• Testing is not adequately planned.
• The test plans are not adequately documented.
• It is difficult or impossible to evaluate the planned testing process.
• Testing is inefficiently and ineffectively performed.

Potential Causes:
• The customer has not specified the development and delivery of a separate test plan.
• The system engineering, software engineering, or testing process has not included the development of a separate test plan.
• There was no template for the content and format of a separate test plan.
• Management, the customer representative, or the testers did not understand the:
  - scope, complexity, and importance of testing
  - value of a separate test plan

Recommendations:
• Prepare:
  - Reuse or create a standard template and content/format standard for test plans.
  - Include one or more separate TEMPs and/or STPs as deliverable work products in the contract.
  - Include the development and delivery of test planning documents in the project master schedule (e.g., as part of major milestones).
• Enable:
  - Provide sufficient resources (staffing and schedule) for the development of one or more separate test plans.
• Perform:
  - Develop and deliver one or more separate TEMPs and/or STPs.
• Verify:
  - Verify the existence and delivery of one or more separate test planning documents.
  - Do not accept incomplete high-level overviews of testing in the SEMP and/or SDP as the only test planning documentation.

2.1.1.2 GEN-TPS-2 Incomplete Test Planning

Description: The test planning documents are incomplete.

Potential Symptoms:
• The test planning documents are incomplete, missing some or all [7] of the:
  - references – listing of all relevant documents influencing testing
  - test goals and objectives – listing the high-level goals and subordinate objectives of the testing program
  - scope of testing – listing the component(s), functionality, and/or capabilities to be

[7] This does not mean that every test plan must include all of this information; test plans should include only the information that is relevant for the current project. It is quite reasonable to reuse much or most of this information in multiple test plans; just because it is highly reusable does not mean that it is meaningless boilerplate that can be ignored. Test plans can be used to estimate the amount of test resources (e.g., time and tools) needed as well as the skills/expertise that the testers need.
tested (and any that are not to be tested)
  - test levels – listing and describing the relevant levels of testing (e.g., unit, subsystem integration, system integration, system, and system of systems testing)
  - test types – listing and describing the types of testing, such as:
    - blackbox, graybox, and whitebox testing
    - developmental vs. acceptance testing
    - initial vs. regression testing
    - manual vs. automated testing
    - mode-based testing (system start-up [8], operational mode, degraded mode, training mode, and system shutdown)
    - normal vs. abnormal behavior (i.e., nominal vs. off-nominal, sunny day vs. rainy day use case paths)
    - quality criteria based testing, such as availability, capacity (e.g., load and stress testing), interoperability, performance, reliability, robustness [9], safety, security (e.g., penetration testing), and usability testing
    - static vs. dynamic testing
    - time- or date-based testing
  - testing methods and techniques – listing and describing the planned testing methods and techniques (e.g., boundary value testing, penetration testing, fuzz testing, alpha and beta testing) to be used, including the associated:
    - test case selection criteria – listing and describing the criteria to be used to select test cases (e.g., interface-based, use-case path, boundary value testing, and error guessing)
    - test entrance criteria – listing the criteria that must hold before testing should begin
    - test exit/completion criteria – listing the test completion criteria (e.g., based on different levels of code coverage such as statement, branch, and condition coverage)
    - test suspension and resumption criteria
  - test completeness and rigor – describing how the rigor and completeness of the testing varies as a function of mission-, safety-, and security-criticality
  - resources:
    - staffing – listing the different testing roles and teams, their responsibilities, their associated qualifications (e.g., expertise, training, and experience), and their numbers
    - environments – listing and describing required computers (e.g., laptops and servers), test tools (e.g., debuggers and test management tools), test environments (software and hardware test beds), and test facilities
  - testing work products – listing and describing the testing work products to be

[8] This includes combinations such as the testing of system start-up when hardware/software components fail.
[9] This includes the testing of error, fault, and failure tolerance.
produced or obtained, such as test documents (e.g., plans and reports), test software (e.g., test drivers and stubs), test data (e.g., inputs and expected outputs), test hardware, and test environments
  - testing tasks – listing and describing the major testing tasks (e.g., name, objective, preconditions, inputs, steps, postconditions, and outputs)
  - testing schedule – listing and describing the major testing milestones and activities in the context of the project development cycle, schedule, and major project milestones
  - reviews, metrics, and status reporting – listing and describing the test-related reviews (e.g., Test Readiness Review), test metrics (e.g., number of tests developed and run), and status reports (e.g., content, frequency, and distribution)
  - dependencies of testing on other project activities – such as the need to incorporate certain hardware and software components into test beds before testing using those environments can begin
  - acronym list and glossary

Potential Consequences:
• Testers and stakeholders in testing may not understand the primary objective of testing (i.e., to find defects so that they can be fixed).
• Some levels and types of tests may not be performed, allowing certain types of residual defects to remain in the system.
• Some testing may be ad hoc and therefore inefficient and ineffectual.
• Mission-, safety-, and security-critical software may not be sufficiently tested to the appropriate level of rigor.
• Certain types of test cases may be ignored, resulting in related residual defects in the tested system.
• Test completion criteria may be based more on schedule deadlines than on the required degree of freedom from defects.
• Adequate amounts of test resources (e.g., testers, test tools, environments, and test facilities) may not be made available because they are not in the budget.
• Some testers may not have adequate expertise, experience, and skills to perform all of the types of testing that need to be performed.

Potential Causes:
• There were no templates or content and format standards for separate test plans.
• The associated templates or content and format standards were incomplete.
• The test planning documents were written by people (e.g., managers or developers) who did not understand the scope, complexity, and importance of testing.

Recommendations:
• Prepare:
  - Reuse or create a standard template and/or content/format standard for test plans.
• Enable:
  - Provide sufficient resources (staffing and schedule) to develop complete test plan(s).
• Perform:
  - Use a proper template and/or content/format standard to develop the test plans (i.e., ones that are derived from test plan standards and tailored if necessary for the specific project).
• Verify:
  - Verify during inspections/reviews that all test plans are sufficiently complete.
  - Do not accept incomplete test plans.

Related Problems: GEN-TOP-2 Unclear Testing Responsibilities, GEN-PRO-8 Inadequate Test Evaluations, GEN-TTE-7 Tests Not Delivered, TTS-SPC-1 Inadequate Capacity Requirements, TTS-SPC-2 Inadequate Concurrency Requirements, TTS-SPC-3 Inadequate Performance Requirements, TTS-SPC-4 Inadequate Reliability Requirements, TTS-SPC-5 Inadequate Robustness Requirements, TTS-SPC-6 Inadequate Safety Requirements, TTS-SPC-7 Inadequate Security Requirements, TTS-SPC-8 Inadequate Usability Requirements, TTS-SoS-1 Inadequate SoS Planning, TTS-REG-5 Disagreement over Maintenance Test Resources

2.1.1.3 GEN-TPS-3 Test Plans Ignored

Description: The test plans are ignored once developed and delivered.

Potential Symptoms:
• The way the testers perform testing is not consistent with the relevant test plan(s).
• The test plan(s) are never updated after initial delivery shortly after the start of the project.

Potential Consequences:
• Management may not have budgeted sufficient funds to pay for the necessary test resources (e.g., testers, test tools, environments, and test facilities).
• Management may not have made adequate amounts of test resources available because they are not in the budget.
• Testers will not have an approved document that justifies:
  - their request for additional needed resources when they need them
  - their insistence that certain types of testing are necessary and must not be dropped when the schedule becomes tight
• Some testers may not have adequate expertise, experience, and skills to perform all of the types of testing that need to be performed.
• The test plan may not be maintained.
• Some levels and types of tests may not be performed, allowing certain types of residual defects to remain in the system.
• Some important test cases may not be developed and executed.
• Mission-, safety-, and security-critical software may not be sufficiently tested to the appropriate level of rigor.
• Test completion criteria may be based more on schedule deadlines than on the required degree of freedom from defects.

Potential Causes:
• The testers may have forgotten some of the test plan contents.
• The testers may have thought that the only reason a test plan was developed was that it was a deliverable in the contract that needed to be checked off.
• The test plan(s) may be so incomplete and at such a generic, high level of abstraction as to be relatively useless.

Recommendations:
• Prepare:
  - Have project management (both administrative and technical), testers, and quality assurance personnel read and review the test plan.
  - Have management (acquisition and project) sign off on the completed test plan document.
  - Use the test plan as input to the project master schedule and work breakdown structure (WBS).
• Enable:
  - Develop a short checklist from the test plan(s) for use when assessing the performance of testing.
• Perform:
  - Have the test manager periodically review the test work products and as-performed test process against the test plan(s).
  - Have the test team update the test plan(s) as needed.
• Verify:
  - Have the testers present their work and status at project and test-team status meetings.
  - Have quality engineering periodically review the test work products (quality control) and the as-performed test process (quality assurance).
  - Have progress, productivity, and quality test metrics collected, analyzed, and reported to project and customer management.

Related Problems: GEN-TPS-2 Incomplete Test Planning

2.1.1.4 GEN-TPS-4 Test Case Documents rather than Test Plans

Description: Test case documents documenting specific test cases are labeled test plans.

Potential Symptoms:
• The "test plan(s)" contain specific test cases including inputs, test steps, expected outputs, and sources such as specific requirements (blackbox testing) or design decisions (whitebox testing).
• The test plans do not contain the type of general planning information listed in GEN-TPS-2 Incomplete Test Planning.

Potential Consequences:
• All of the negative consequences of GEN-TPS-2 Incomplete Test Planning may occur.
• The test case documents may not be maintained.
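A test case of the kind described above (inputs, test steps, expected outputs, and a source requirement) is naturally expressed as executable code rather than prose, which keeps it clearly distinct from a test plan. The function under test, the requirement, and all values below are hypothetical, assumed purely for illustration:

```python
# Hypothetical test case written as executable code, with comments
# standing in for the prose of a test case document.
#
# Test case: TC-042 "Shipping fee for standard orders"
# Source:    hypothetical requirement "orders of $100 or more ship free"

def shipping_fee(order_total):
    """Hypothetical function under test."""
    return 0.0 if order_total >= 100.0 else 7.95

def test_shipping_fee_threshold():
    # Step 1: an order just below the threshold pays the standard fee.
    assert shipping_fee(99.99) == 7.95
    # Step 2: an order exactly at the threshold ships free (boundary value).
    assert shipping_fee(100.0) == 0.0
    # Step 3: an order above the threshold also ships free.
    assert shipping_fee(250.0) == 0.0

test_shipping_fee_threshold()
```

Because the test is executable, it cannot silently drift out of date the way a prose test case document can; the general planning information, however, still belongs in a true test plan.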
Potential Causes:
• There may have been no template or content/format standard for the test case documents.
• The test plan authors may not have had adequate expertise, experience, and skills to develop test plans or know their proper content.

Recommendations:
• Prepare:
  - Provide the test manager and testers with at least minimal training in test planning.
• Enable:
  - Provide a proper test plan template.
  - Provide a proper content and format standard for test plans.
  - Add test plans and test case documents to the project technical glossary.
• Perform:
  - Develop the test plan in accordance with the test plan template or content and format standard.
  - Develop the test case documents in accordance with the test case document template and/or content and format standard.
  - Where practical, automate the test cases so that the resulting tests (extended with comments) replace the test case documents and the distinction is clear (i.e., the test plan is a document meant to be read, whereas the test case is meant to be executable).
• Verify:
  - Have the test plan(s) reviewed against the associated template or content and format standard prior to acceptance.

Related Problems: GEN-TPS-2 Incomplete Test Planning

2.1.1.5 GEN-TPS-5 Inadequate Test Schedule

Description: The testing schedule is inadequate to permit proper testing.

Potential Symptoms:
• Testing is significantly incomplete and behind schedule.
• Insufficient time is allocated in the project master schedule to perform all:
  - test activities (e.g., automating testing, configuring test environments, and developing test data, test scripts/drivers, and test stubs)
  - appropriate tests (e.g., abnormal behavior, quality requirements, regression testing) [10]
• Testers are working excessively and unsustainably long hours and days per week in an attempt to meet schedule deadlines.

[10] Note that an agile (i.e., iterative, incremental, and concurrent) development/life cycle greatly increases the amount of regression testing needed (although this increase in testing can be largely offset by highly automating regression tests). Although testing can never be exhaustive, more time is typically needed for adequate testing unless testing can be made more efficient. For example, fewer defects could be produced, and these defects could be found and fixed earlier and thereby be prevented from reaching the current iteration.
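The footnote above observes that the extra regression testing an iterative life cycle demands can be largely offset by automation. A minimal sketch of what such automation can look like, using a table of previously verified input/expected-output pairs for a hypothetical pricing function (all names and values are assumed for illustration only):

```python
def discount(order_total):
    """Hypothetical function under regression test: discount rules."""
    if order_total >= 100:
        return round(order_total * 0.10, 2)
    if order_total >= 50:
        return round(order_total * 0.05, 2)
    return 0.0

# Previously verified behavior, recorded once and then re-run
# automatically after every change instead of re-tested by hand.
REGRESSION_CASES = [
    (25.0, 0.0),    # below any discount threshold
    (50.0, 2.5),    # boundary of the 5% tier
    (80.0, 4.0),    # inside the 5% tier
    (100.0, 10.0),  # boundary of the 10% tier
]

def run_regression_suite():
    failures = [(total, expected, discount(total))
                for total, expected in REGRESSION_CASES
                if discount(total) != expected]
    assert not failures, f"regressions detected: {failures}"

run_regression_suite()
```

Once recorded, each additional run of the suite costs essentially nothing, which is why automated regression tests pay for themselves fastest under iterative and incremental development.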
Potential Consequences:
• Testers are exhausted and therefore make an unacceptably large number of mistakes.
• Tester productivity (e.g., importance of defects found and number of defects found per unit time) is decreasing.
• Customer representatives, managers, and developers have a false sense of security that the system functions properly.
• There is a significant probability that the system or software will be delivered late with an unacceptably large number of residual defects.

Potential Causes:
• The overall project schedule was insufficient.
• The size and complexity of the system were underestimated.
• The project master plan was written by people (e.g., managers, chief engineers, or technical leads) who do not understand the scope, complexity, and importance of testing.
• The project master plan was developed without input from the test team(s).

Recommendations:
• Prepare:
  - Provide evidence-based estimates of the amount of testing and associated test effort that will be needed.
  - Ensure that adequate time for testing is included in the program master schedule and test team schedules, including the testing of abnormal behavior and the specialty engineering testing of quality requirements (e.g., load testing for capacity requirements and penetration testing for security requirements). [11]
  - Provide adequate time for testing in change request estimates.
• Enable:
  - Deliver inputs to the testing process (e.g., requirements, architecture, design, and implementation) earlier and more often (e.g., as part of an incremental, iterative, parallel – agile – development cycle).
  - Provide sufficient test resources (e.g., number of testers, test environments, and test tools).
  - If at all possible, do not reduce the testing effort in order to meet a delivery deadline.
• Perform:
  - Automate as much of the regression testing as is practical, and allocate sufficient resources to maintain the automated tests. [12]
• Verify:
  - Verify that the amount of time scheduled for testing is consistent with the evidence-based

[11] Also integrate the testing process into the software development process.
[12] When there is insufficient time to perform manual testing, it may be difficult to justify the automation of these tests. However, automating regression testing is not just a maintenance issue. Even during initial development, there should typically be a large amount of regression testing, especially if an iterative and incremental development cycle is used. Thus, ignoring the automation of regression testing is often a case of being penny wise and pound foolish.
estimates of needed time.

Related Problems: TTS-SoS-5 SoS Testing Not Properly Scheduled

2.1.1.6 GEN-TPS-6 Testing is Postponed

Description: Testing is postponed until late in the development schedule.

Potential Symptoms:
• Testing is scheduled to be performed late in the development cycle on the project master schedule.
• Little or no unit or integration testing:
  - is planned
  - is being performed during the early and middle stages of the development cycle

Potential Consequences:
• There is insufficient time left in the schedule to correct any major defects found. [13]
• It is difficult to show the required degree of test coverage.
• Because so much of the system has been integrated before the beginning of testing, it is very difficult to find and localize defects that remain hidden within the internals of the system.

Potential Causes:
• The project is using a strictly interpreted, traditional sequential Waterfall development cycle.
• Management was not able to staff the testing team early during the development cycle.
• Management was primarily interested in system testing and did not recognize the need for lower-level (e.g., unit and integration) testing.

Recommendations:
• Prepare:
  - Plan and schedule testing to be performed iteratively, incrementally, and in a parallel manner (i.e., agile), starting early during development.
  - Provide training in incremental, iterative testing.
  - Incorporate iterative and incremental testing into the project's system/software engineering process.
• Enable:
  - Provide adequate testing resources (staffing, tools, budget, and schedule) early during development.
• Perform:
  - Perform testing in an iterative, incremental, and parallel manner starting early during the development cycle.

[13] An interesting example of this is the Hubble telescope. Testing of the mirror's focusing was postponed until after launch, resulting in an incredibly expensive repair mission.
  • 17. Common Testing Problems: Pitfalls to Prevent and Mitigate 25 January 2013 Descriptions, Symptoms, Consequences, Causes, and Recommendations • Verify: Verify in an ongoing manner (or at the very least during major project milestones) that testing is being performed iteratively, incrementally, and in parallel with design, implementation, and integration. Use testing metrics to verify status and ongoing progress. Related Problems:GEN-PRO-1 Testing and Engineering Process not Integrated 2.1.2 Stakeholder Involvement and Commitment Problems The following testing problems are related to stakeholder involvement in and commitment to the testing effort: • GEN-SIC-1 Wrong Testing Mindset • GEN-SIC-2 Unrealistic Testing Expectations / False Sense of Security • GEN-SIC-3 Lack of Stakeholder Commitment 2.1.2.1 GEN-SIC-1 Wrong Testing Mindset Description: Some of the testers and other testing stakeholders have the wrong testing mindset. Potential Symptoms: • Some testers and other testing stakeholdersbegin testing assumingthat the system/software works. • Testers believe that their job is to verify or “prove” that the system/software works. 14 12 F • Testing is used to demonstrate that the system/software works properly rather than to determinewhere and how it fails. • Only normal (“sunny day”, “happy path”, or “golden path”) behavior is being tested. • There is little or no testing of: exceptional or fault/failure tolerant(“rainy day”) behavior input data (e.g., range testing to identify incorrect handling of invalid input values) • Test inputsonly include middle of the road values rather than boundary values and corner cases. 
Potential Consequences:
• There is a high probability that:
  - the delivered system or software will contain a significant number of residual defects, especially related to abnormal behavior (e.g., exceptional use case paths)
  - these defects will unacceptably reduce its reliability and robustness (e.g., error, fault, and failure tolerance)
• Customer representatives, managers, and developers have a false sense of security that the system functions properly.

14. Using testing to "prove" that their software works is most likely to become a problem when developers test their own software (e.g., with unit testing and with small cross-functional or agile teams).
Potential Causes:
• Testers were taught or explicitly told that their job is to verify or "prove" that the system/software works.
• Developers are testing their own software, [15] so that there is a "conflict of interest" (i.e., build software that works versus show that the software does not work). This is especially a problem with small, cross-functional development organizations/teams that "cannot afford" separate testers (i.e., professional testers who specialize in testing).
• Insufficient schedule was allocated for testing, so that there is only enough time to test the normal behavior (e.g., use case paths).
• The organizational culture is very success oriented, so that looking "too hard" for problems is (implicitly) discouraged.
• Management gave the testers the strong impression that they do not want to hear any "bad" news (i.e., that significant defects are being found in the system).

Recommendations:
• Prepare: Explicitly state in the project test plan that the primary goal of testing is to:
  - find defects by causing system faults and failures rather than to demonstrate that there are no defects
  - break the system rather than to prove that it works
• Enable:
  - Provide test training that emphasizes uncovering defects by causing faults or failures.
  - Provide sufficient time in the schedule for testing beyond the basic success paths.
  - Hire new testers who exhibit a strong "destructive" mindset toward testing.
• Perform:
  - In addition to test cases that verify all normal behavior, emphasize looking for defects where they are most likely to hide (e.g., boundary values, corner cases, and input type/range verification). [16]
  - Incentivize testers based more on the number of significant defects they uncover than merely on the number of requirements "verified" or test cases run. [17]
  - Foster a healthy competition between developers (who seek to avoid inserting defects) and testers (who seek to find those defects).
• Verify: Verify that the testers exhibit a testing mindset.

15. Developers typically do their own unit-level (i.e., lowest level) testing. With small, cross-functional (e.g., agile) teams, it is becoming more common for developers to also do integration and subsystem testing.
16. Whereas tests that verify nominal behavior are essential, testers must keep in mind that there are typically many more ways for the system/software under test to fail than to work properly. Also, nominal tests must remain part of the regression test suite even after all known defects are fixed, because changes could introduce new defects that cause nominal behavior to fail.
17. Take care to avoid incentivizing developers to insert defects into their own software so that they can then find them during testing.
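The emphasis above on boundary values, corner cases, and invalid inputs can be made concrete with a short example. A minimal sketch, assuming a hypothetical `parse_age` validator with a valid range of 0 to 120 (both the function and the range are illustrative assumptions, not from this report):

```python
# "Rainy day" and boundary-value tests for a hypothetical validator.
# parse_age and its valid range (0..120) are assumptions for illustration.

def parse_age(text: str) -> int:
    value = int(text)  # raises ValueError for non-numeric input
    if not 0 <= value <= 120:
        raise ValueError(f"age out of range: {value}")
    return value

def run_tests() -> None:
    # Sunny-day, middle-of-the-road value (necessary but not sufficient):
    assert parse_age("35") == 35
    # Boundary values and corner cases:
    assert parse_age("0") == 0
    assert parse_age("120") == 120
    # Rainy-day inputs: invalid ranges and types must be rejected.
    for bad in ("-1", "121", "abc", ""):
        try:
            parse_age(bad)
        except ValueError:
            pass  # expected: the test "passes" by provoking the failure
        else:
            raise AssertionError(f"accepted invalid input: {bad!r}")

run_tests()
```

The single middle-of-the-road check would pass even if the boundary or error handling were wrong; it is the boundary and rainy-day cases that give the suite its defect-finding power.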
Related Problems: GEN-MGMT-2 Inappropriate External Pressures, GEN-COM-4 Inadequate Communication Concerning Testing, TTS-UNT-3 Unit Testing Considered Unimportant

2.1.2.2 GEN-SIC-2 Unrealistic Testing Expectations / False Sense of Security

Description: Testers and other testing stakeholders have unrealistic testing expectations that generate a false sense of security.

Potential Symptoms:
• Testing stakeholders (e.g., managers and customer representatives) and some testers falsely believe that:
  - Testing detects all (or even the majority of) defects. [18]
  - Testing proves that there are no remaining defects and that the system therefore works as intended.
  - Testing can be, for all practical purposes, exhaustive.
  - Testing can be relied on for all verification. (Note that some requirements are better verified via analysis, demonstration, certification, and inspection.)
  - Testing, if it is automated, will guarantee the quality of the tests and reduce the testing effort. [19]
• Managers and other testing stakeholders may not understand that:
  - Test automation requires specialized expertise, and the effort required to develop, verify, and maintain the automated tests needs to be budgeted.
  - A passed test could result from a weak or incorrect test rather than from a lack of defects.
  - A truly successful or useful test is one that finds one or more defects, whereas a passed test only shows that the system worked in that single, specific instance.

Potential Consequences:
• Testers and other testing stakeholders have a false sense of security that the system or software will work properly on delivery and deployment.
• Non-testing forms of verification (e.g., analysis, demonstration, inspection, and simulation) are not given adequate emphasis.
Potential Causes:
• Testing stakeholders and testers were not exposed to research results that document the relatively large percentage of residual defects that typically remain after testing.
• Testers and testing stakeholders have not been trained in verification approaches other than testing (e.g., analysis, demonstration, and inspection) or in their relative pros and cons.
• Project testing metrics do not include estimates of residual defects.

18. Testing typically finds less than half of all latent defects and is not the most efficient way of detecting many defects.
19. This depends on the development cycle and the volatility of the system's requirements, architecture, design, and implementation.
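The weak-test symptom called out above can be demonstrated with a hand-rolled mutation-testing sketch: seed a defect (a "mutant") into the code under test and see whether the suite notices. Everything below is an assumed toy example, not from this report; real tools such as mutmut (Python) or PIT (Java) automate the idea:

```python
# Toy illustration of why a passed test may reflect a weak test suite
# rather than an absence of defects. All names here are assumed examples.

def is_adult(age: int) -> bool:        # code under test
    return age >= 18

def is_adult_mutant(age: int) -> bool:  # seeded defect: >= became >
    return age > 18

def weak_suite(fn) -> bool:
    # Middle-of-the-road value only; passes for original AND mutant,
    # so a "pass" here proves very little.
    return fn(30) is True

def strong_suite(fn) -> bool:
    # Includes the boundary value 18, which distinguishes the two.
    return fn(30) is True and fn(18) is True and fn(17) is False

survived = weak_suite(is_adult_mutant)     # True: mutant survives weak suite
killed = not strong_suite(is_adult_mutant)  # True: strong suite kills mutant
```

A suite that cannot "kill" seeded mutants is exactly the kind of suite whose passes create the false sense of security this pitfall describes.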
Recommendations:
• Prepare:
  - Collect information on the limitations of testing.
  - Collect information on when and how to augment testing with other types of verification.
• Enable: Provide basic training in verification methods, including their associated strengths and limitations.
• Perform:
  - Train and mentor managers, customer representatives, testers, and other test stakeholders concerning the limits of testing:
    - Testing will not detect all (or even a majority of) defects.
    - No testing is truly exhaustive.
    - Testing cannot prove (or demonstrate) that the system works under all combinations of preconditions and trigger events.
    - A passed test could result from a weak test rather than from a lack of defects.
    - A truly successful test is one that finds one or more defects.
  - Do not rely on testing for the verification of all requirements, especially architecturally significant quality requirements.
  - Collect, analyze, and report testing metrics that estimate the number of defects remaining after testing.
• Verify:
  - Verify that testing stakeholders understand the limitations of testing.
  - Verify that testing is not the only type of verification being used.
  - Verify that the number of defects remaining is estimated and reported.
Related Problems: GEN-MGMT-2 Inappropriate External Pressures, GEN-COM-4 Inadequate Communication Concerning Testing, TTS-REG-2 Regression Testing not Performed

2.1.2.3 GEN-SIC-3 Lack of Stakeholder Commitment

Description: There is a lack of adequate stakeholder commitment to the testing effort.

Potential Symptoms:
• Stakeholders (especially customers and management) are not providing sufficient resources (e.g., people, schedule, tools, funding) for the testing effort.
• Stakeholders are unavailable for the review of test assets such as test plans and important test cases.
• Stakeholders (e.g., customer representatives) point out defects in test assets only after those assets have been reviewed.
• Stakeholders do not support testing when resources must be cut (e.g., due to schedule slippages and budget overruns).
Potential Consequences:
• Testing is less effective due to inadequate resources.
• Stakeholders (e.g., customer representatives) reject previously reviewed test assets.
• The testing effort loses needed resources when the schedule slips or the budget overruns.

Potential Causes:
• Stakeholders did not understand the scope, complexity, and importance of testing.
• Stakeholders were not provided adequate estimates of the resources needed to properly perform testing.
• Stakeholders were extremely busy with other duties.
• The overall project schedule and budget estimates were inadequate, thereby forcing cuts in testing.

Recommendations:
• Prepare: Convey the scope, complexity, and importance of testing to the testing stakeholders.
• Enable: Provide stakeholders with adequate estimates of the resources needed to properly perform testing.
• Perform:
  - Officially request sufficient testing resources from the testing stakeholders.
  - Obtain commitments of support from authoritative stakeholders at the beginning of the project.
• Verify: Verify that the testing stakeholders are providing sufficient resources (e.g., people, schedule, tools, funding) for the testing effort.
Related Problems: GEN-MGMT-1 Inadequate Test Resources, GEN-MGMT-5 Test Lessons Learned Ignored, GEN-MGMT-2 Inappropriate External Pressures, GEN-COM-4 Inadequate Communication Concerning Testing, TTS-SoS-4 Inadequate Funding for SoS Testing, TTS-SoS-6 Inadequate Test Support from Individual Systems

2.1.3 Management-related Testing Problems

The following testing problems are related to the management of the testing effort:
• GEN-MGMT-1 Inadequate Test Resources
• GEN-MGMT-2 Inappropriate External Pressures
• GEN-MGMT-3 Inadequate Test-related Risk Management
• GEN-MGMT-4 Inadequate Test Metrics
• GEN-MGMT-5 Test Lessons Learned Ignored
2.1.3.1 GEN-MGMT-1 Inadequate Test Resources

Description: Management allocates an inadequate amount of resources to testing.

Potential Symptoms:
• The test planning documents and schedules fail to provide for adequate test resources such as:
  - test time in the schedule, with inadequate schedule reserves
  - trained and experienced testers and reviewers
  - funding
  - test tools and environments (e.g., integration test beds and repositories of test data)

Potential Consequences:
• Adequate test resources will likely not be provided to perform sufficient testing within schedule and budget limitations.
• An unnecessarily large number of defects may make it through testing and into the deployed system.

Potential Causes:
• Testing stakeholders may not understand the scope, complexity, and importance of testing, and therefore its impact on the resources needed to properly perform testing.
• Estimates of needed testing resources may not be based on any evidence-based cost/effort models.
• Resource estimates may be made informally by management without input from the testing organization, especially from the testers who will actually be performing the testing tasks.
• Resource estimates may be based on available resources rather than on resource needs.
• Management may believe that the testers have padded their estimates and therefore cut them.
• Testers and testing stakeholders may be overly optimistic, so that their informal estimates of needed resources are based on best-case scenarios rather than on most-likely or worst-case scenarios.

Recommendations:
• Prepare: Ensure that testing stakeholders understand the scope, complexity, and importance of testing, and therefore its impact on the resources needed to properly perform testing.
• Enable:
  - Begin test planning at project inception (e.g., at contract award or during proposal development).
  - Train testers in the use of evidence-based cost/effort models to estimate the amount of testing resources needed.
• Perform:
  - Use evidence-based cost/effort models to estimate the needed testing resources.
  - Officially request sufficient testing resources from the testing stakeholders.
  - Ensure that the test planning documents, schedules, and project work breakdown structure (WBS) provide for adequate levels of these test resources.
  - Obtain commitments of support from authoritative stakeholders at the beginning of the project.
• Verify: Verify that the testing stakeholders are providing sufficient resources (e.g., people, schedule, tools, funding) for the testing effort.
Related Problems: GEN-SIC-3 Lack of Stakeholder Commitment, GEN-TOP-3 Inadequate Testing Expertise

2.1.3.2 GEN-MGMT-2 Inappropriate External Pressures

Description: Testers are subject to inappropriate external pressures, primarily from managers.

Potential Symptoms:
• Managers (or possibly customers or developers) are dictating to the testers what constitutes a bug or a defect worth reporting.
• Managerial pressure exists to:
  - inappropriately cut corners (e.g., only perform "sunny day" testing in order to meet schedule deadlines)
  - inappropriately lower the severity and priority of reported defects
  - not find defects (e.g., until after delivery, because the project is so far behind schedule that there is no time to fix any defects found)

Potential Consequences:
• If the testers yield to this pressure, then the test metrics accurately reflect neither the true state of the system/software nor the status of the testing process.
• The delivered system or software contains an unacceptably large number of residual defects.

Potential Causes:
• The project is significantly behind schedule and/or over budget.
• There is insufficient time before the delivery/release date to fix a significant number of the defects found via testing.
• The project is in danger of being cancelled due to lack of performance.
• Management is highly risk averse and therefore did not want to officially label any testing risk as a risk.
Recommendations:
• Prepare: Establish criteria for determining the priority and severity of reported defects.
• Enable:
  - Ensure that trained testers determine what constitutes a bug or a defect worth reporting.
  - Place the manager of the testing organization at the same or a higher level than the project manager in the organizational hierarchy (i.e., have the test manager report independently of the project manager). [20]
• Perform:
  - Support testers when they oppose any inappropriate managerial pressure that would have them violate their professional ethics.
  - Customer representatives must insist on proper testing.
• Verify:
  - Verify that the testers are the ones who decide what constitutes a reportable defect.
  - Verify that the testing manager reports independently of the project manager.
Related Problems: GEN-SIC-1 Wrong Testing Mindset, GEN-TOP-1 Lack of Independence

2.1.3.3 GEN-MGMT-3 Inadequate Test-related Risk Management

Description: There are too few test-related risks identified in the project's official risk repository. [21]

Potential Symptoms:
• Managers are highly risk averse, treating "risk" as if it were a four-letter word.
• Because adding risks to the risk repository is looked on as a symptom of management failure, risks (including testing risks) are mislabeled as issues or concerns so that they need not be reported as official risks.
• There are few if any test-related risks identified in the project's official risk repository.
• The number of test-related risks is unrealistically low.
• The identified test-related risks have inappropriately low probabilities, harm severities, and priorities.
• The identified test risks have:
  - no associated risk mitigation approaches
  - no one assigned as being responsible for the risk
• The test risks are never updated (e.g., via additions or modifications) over the course of the project.
• Testing risks are not addressed in either the test plan(s) or the risk management plan.

Potential Consequences:
• Testing risks are not reported.
• Management and acquirer representatives are unaware of their existence.
• Testing risks are not being managed.
• The management of testing risks is not given sufficiently high priority.

Potential Causes:
• Management is highly risk averse.

20. Note that this will only help if the test manager is not below the manager applying improper pressure.
21. These potential testing problems can be viewed as generic testing risks.
• Managers strongly communicate their preference that only a small number of the most critical risks be entered into the project risk repository.
• The people responsible for risk management and managing the risk repository have never been trained in or exposed to the many potential test-related risks (e.g., those associated with the commonly occurring testing problems addressed in this document).
• The risk management process strongly emphasizes system-specific or system-level (as opposed to software-level) risks and tends not to address development activity risks (such as those associated with testing).
• It is early in the development cycle, before sufficient testing has begun.
• There have been few if any evaluations of the testing process.
• There has been little if any oversight of the testing process.

Recommendations:
• Prepare: Determine management's degree of risk aversion and attitude regarding the inclusion of risks in the project risk repository.
• Enable: Ensure that the people responsible for risk management and managing the risk repository are aware of the many potential test-related risks.
• Perform:
  - Identify test-related risks and incorporate them into the official project risk repository.
  - Provide test-related risks with realistic probabilities, harm severities, and priorities.
• Verify:
  - Verify that the risk repository contains an appropriate number of testing risks.
  - Verify that there is sufficient management and quality assurance oversight and evaluation of the testing process.
Related Problems: GEN-SIC-2 Unrealistic Testing Expectations / False Sense of Security

2.1.3.4 GEN-MGMT-4 Inadequate Test Metrics

Description: Insufficient test metrics are being produced, analyzed, and reported.

Potential Symptoms:
• Insufficient or no test metrics are being produced, analyzed, and reported.
• The primary test metrics (e.g., number of tests, [22] number of tests needed to meet adequate or required test coverage levels, number of tests passed/failed, and number of defects found) show neither the productivity of the testers nor their effectiveness at finding defects (e.g., defects found per test or per day).
• The number of latent, undiscovered defects remaining is not being estimated (e.g., using COQUALMO [23]).
• Management measures tester productivity strictly in terms of defects found per unit time, ignoring the importance or severity of the defects found.

Potential Consequences:
• Managers, testers, and other stakeholders in testing do not accurately know the quality of testing, the importance of the defects being found, or the number of residual defects in the delivered system or software.
• Managers do not know the productivity of the testers or their effectiveness at finding important defects, thereby making it difficult to improve the testing process.
• Testers concentrate on finding lots of (unimportant) defects rather than on finding critical defects (e.g., those with mission-critical, safety-critical, or security-critical ramifications).
• Customer representatives, managers, and developers have a false sense of security that the system functions properly.

Potential Causes:
• Project management (including the managers/leaders of test organizations/teams) are not familiar with the different types of testing metrics (e.g., quality, status, and productivity) that could be useful.
• Metrics collection, analysis, and reporting is at such a high level that individual disciplines (such as testing) are rarely assigned more than one or two highly generic metrics (e.g., "Inadequate testing is a risk").
• Project management (and testers) are only aware of backward-looking metrics (e.g., defects found and fixed) as opposed to forward-looking metrics (e.g., residual defects remaining to be found).

Recommendations:
• Prepare: Provide testers and testing stakeholders with basic training in metrics, with an emphasis on test metrics.

22. Note that the number-of-tests metric does not indicate the effort or complexity of identifying, analyzing, and fixing defects.
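The effectiveness and productivity metrics discussed in this pitfall (defects found per test, defects found per tester-day, slip-through rates, and residual-defect estimates) are simple to compute once the underlying counts are collected. A minimal sketch; the function names are illustrative, and the capture-recapture residual estimate shown is one common approximation, not the COQUALMO model the report mentions:

```python
# Illustrative test-metric calculations; names and the capture-recapture
# residual estimate are assumptions, not from the report.

def defects_per_test(defects_found: int, tests_run: int) -> float:
    """Test-effectiveness metric: defects found per test."""
    return defects_found / tests_run if tests_run else 0.0

def defects_per_tester_day(defects_found: int, tester_days: float) -> float:
    """Tester-productivity metric: defects found per tester-day."""
    return defects_found / tester_days if tester_days else 0.0

def slip_through_rate(found_at_milestone: int, found_later: int) -> float:
    """Fraction of the defects present at a verification milestone that
    escaped it (i.e., were found only at a later milestone or in the field)."""
    total = found_at_milestone + found_later
    return found_later / total if total else 0.0

def estimated_residual_defects(found_by_a: int, found_by_b: int,
                               found_by_both: int) -> float:
    """Rough capture-recapture estimate of latent defects remaining after
    two independent detection activities (e.g., inspection and testing)."""
    if found_by_both == 0:
        raise ValueError("estimate undefined when the overlap is zero")
    total_estimate = found_by_a * found_by_b / found_by_both
    return total_estimate - (found_by_a + found_by_b - found_by_both)
```

For example, if inspection finds 50 defects, testing finds 40, and 20 are found by both, the capture-recapture estimate is 100 total defects, of which 70 distinct ones were found, leaving roughly 30 latent.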
• Enable:
  - Incorporate into the test plan a robust metrics program that covers leading indicators.
  - Emphasize the finding of important defects.
• Perform:
  - Consider using some of the following representative examples of useful testing metrics:
    - number of defects found per test (a test effectiveness metric)
    - number of defects found per tester-day (a tester productivity metric)
    - number of defects that slip through each verification milestone / inch-pebble (e.g., reviews, inspections, and tests) [24]
    - estimated number of latent, undiscovered defects remaining in the delivered system (e.g., estimated using COQUALMO)
  - Regularly collect, analyze, and report an appropriate set of testing metrics.
• Verify:
  - Important: Evaluate and maintain visibility into the as-performed testing process to ensure that it does not become metrics-driven. Watch for signs that testers worry more about looking good (e.g., by concentrating on the defects that are easy to find) than about finding the most important defects.
  - Verify that sufficient testing metrics are collected, analyzed, and reported.
Related Problems: None

23. COQUALMO (COnstructive QUALity MOdel) is an estimation model that can be used to predict the number of residual defects per KSLOC (thousand source lines of code) or per FP (function point) in a software product.

2.1.3.5 GEN-MGMT-5 Test Lessons Learned Ignored

Description: Lessons that are learned regarding testing are not put into practice.

Potential Symptoms:
• Management, the test teams, or customer representatives ignore lessons learned during previous projects or during the testing of previous increments of the system under test.

Potential Consequences:
• The test process is not being continually improved.
• The same problems continue to occur.
• Customer representatives, managers, and developers have a false sense of security that the system functions properly.

Potential Causes:
• Lessons learned were not documented.
• The capturing of lessons learned was postponed until after the project was over, when the people who had learned the lessons were no longer available, having scattered to new projects.
• The only use of lessons learned is informal and based solely on the experience that individual developers and testers bring to new projects.
• Lessons learned from previous projects are not reviewed before starting new projects.

Recommendations:
• Prepare:
  - Make the documentation of lessons learned an explicit part of the testing process.
  - Review previous lessons learned as an initial step in determining the testing process.
• Enable: Capture (and implement) lessons learned as they are learned.

24. For example, what percentages of defects manage to slip past architecture reviews, design reviews, implementation inspections, unit testing, integration testing, and system testing without being detected?
  Do not wait until a project postmortem, when project staff members' memories are fading and they have moved (or are moving) on to their next projects.
• Perform: Incorporate previously learned testing lessons into the current testing process and test plans.
• Verify:
  - Verify that previously learned testing lessons have been incorporated into the current testing process and test plans.
  - Verify that testing lessons learned are captured (and implemented) as they are learned.
Related Problems: GEN-SIC-3 Lack of Stakeholder Commitment

2.1.4 Test Organization and Professionalism Problems

The following testing problems are related to the test organization and the professionalism of the testers:
• GEN-TOP-1 Lack of Independence
• GEN-TOP-2 Unclear Testing Responsibilities
• GEN-TOP-3 Inadequate Testing Expertise

2.1.4.1 GEN-TOP-1 Lack of Independence

Description: The test organization or team lacks adequate independence to enable it to properly perform its testing tasks.

Potential Symptoms:
• The manager of the test organization reports to the development manager.
• The lead of the project test team reports to the project manager.
• The test organization manager or test team leader does not have sufficient authority to raise and manage testing-related risks.

Potential Consequences:
• A lack of sufficient independence forces the test organization or team to select an inappropriate test process or tool.
• Members of the test organization or team are intimidated into withholding objective and timely information from the testing stakeholders.
• The test organization or team has insufficient budget and schedule to be effective.
• The project manager inappropriately overrules or pressures the testers to violate their principles.

Potential Causes:
• Management does not see the value of or need for independent reporting.
• Management does not see the similarity between quality assurance and testing with regard to independence.
Recommendations:
• Prepare:
  - Determine reporting structures.
  - Identify potential independence problems.
• Enable: Clarify to testing stakeholders (especially project management) the value of independent reporting for the test organization manager and the project test team leader.
• Perform:
  - Ensure that the test organization or team has:
    - technical independence, so that it can select the most appropriate test process and tools for the job
    - managerial independence, so that it can provide objective and timely information about the test program and results without fear of intimidation due to business considerations or project-internal politics
    - financial independence, so that its budget (and schedule) is sufficient to enable it to be effective and efficient
  - Have the test organization manager report at the same or a higher level than the development organization manager.
  - Have the project test team leader report, independently of the project manager, to the test organization manager or equivalent (e.g., the quality assurance manager).
• Verify:
  - Verify that the test organization manager reports at the same or a higher level than the development organization manager.
  - Verify that the project test team leader reports, independently of the project manager, to the test organization manager or equivalent (e.g., the quality assurance manager).
Related Problems: GEN-MGMT-2 Inappropriate External Pressures

2.1.4.2 GEN-TOP-2 Unclear Testing Responsibilities

Description: The testing responsibilities are unclear.
Potential Symptoms:
• The test planning documents do not adequately address testing responsibilities in terms of which organizations, teams, and people:
  - will perform which types of testing on what [types of] components
  - are responsible for procuring, building, configuring, and maintaining the test environments
  - are the ultimate decision makers regarding testing risks, test completion criteria, test completion, and the status/priority of defects

Potential Consequences:
• Certain tests are not performed, while other tests are performed redundantly by multiple organizations or people.
• Incomplete testing enables some defects to make it through testing and into the deployed system.
• Redundant testing wastes test resources and causes testing deadlines to slip.

Potential Causes:
• The test plan template did not clearly address responsibilities.
• The project team is very small, with everyone wearing multiple hats and therefore performing testing on an as-available / as-needed basis.

Recommendations:
• Prepare:
  - Obtain documents describing current testing responsibilities.
  - Identify potential testing responsibility problems (e.g., missing or vague responsibilities).
• Enable: Obtain organizational agreement as to the testing responsibilities.
• Perform:
  - Clearly and completely document the responsibilities for testing in the test plans as well as in the charters of the teams who will be performing the tests.
  - Managers should clearly communicate these responsibilities to the relevant organizations and people.
• Verify: Verify that testing responsibilities are clearly and completely documented in the test plans as well as in the charters of the teams who will be performing the tests.
Related Problems: GEN-TPS-2 Incomplete Test Planning, GEN-PRO-7 Too Immature for Testing, GEN-COM-2 Inadequate Test Documentation, TTS-SoS-3 Unclear SoS Testing Responsibilities

2.1.4.3 GEN-TOP-3 Inadequate Testing Expertise

Description: Too many people have inadequate testing expertise, experience, and training.

Potential Symptoms:
• Testers and/or those who oversee them (e.g., managers and customer representatives) have inadequate testing expertise, experience, or training.
• Developers who are not professional testers have been tasked to perform testing.
• Little or no classroom or on-the-job training in testing has taken place.
• Testing is ad hoc, without any proper process.
• Industry best practices are not followed.
Potential Consequences:
• Testing is not effective in detecting defects, especially the less obvious ones.
• There are unusually large numbers of false-positive and false-negative test results.
• The productivity of the testers is needlessly low.
• There is a high probability that the system or software will be delivered late with an unacceptably large number of residual defects.
• During development, managers, developers, and customer representatives have a false sense of security that the system functions properly. [25]

Potential Causes:
• Management did not understand the scope and complexity of testing.
• Management did not understand the required qualifications of a professional tester.
• There was insufficient funding to hire fully qualified professional testers.
• The project team is very small, with everyone wearing multiple hats and therefore performing testing on an as-available / as-needed basis.
• An agile development method is being followed that emphasizes cross-functional development teams.

Recommendations:
• Prepare:
  - Provide proper test processes, including procedures, standards, guidelines, and templates, for on-the-job training.
  - Ensure that the required qualifications of a professional tester are documented in the tester job description.
• Enable:
  - Convey the required qualifications of the different types of testers to those technically evaluating prospective testers.
  - Provide appropriate amounts of test training (both classroom and on-the-job) for both testers and those overseeing testing.
  - Ensure that the testers who will be automating testing have the necessary specialized expertise and training. [26]
  - Obtain independent support for those overseeing testing.
• Perform:
  - Hire full-time (i.e., professional) testers who have sufficient expertise and experience in testing.
  - Use an independent test organization staffed with experienced, trained testers for system/acceptance testing, whereby the head of this organization is at the same (or a higher) level as the project manager.
• Verify: Verify that those technically evaluating prospective testers understand the required qualifications of the different types of testers. Verify that the testers have adequate testing expertise, experience, and training.
[25] This false sense of security is likely to be replaced by a sense of panic when the system begins to frequently fail operational testing or real-world usage after deployment.
[26] Note that these recommendations apply regardless of whether the project uses separate testing teams or cross-functional teams that include testers.
Related Problems: GEN-MGMT-1 Inadequate Test Resources

2.1.5 Test Process Problems
The following testing problems are related to the processes and techniques being used to perform testing:
• GEN-PRO-1 Testing and Engineering Process not Integrated
• GEN-PRO-2 One-Size-Fits-All Testing
• GEN-PRO-3 Inadequate Test Prioritization
• GEN-PRO-4 Functionality Testing Overemphasized
• GEN-PRO-5 Black-box System Testing Overemphasized
• GEN-PRO-6 White-box Unit and Integration Testing Overemphasized
• GEN-PRO-7 Too Immature for Testing
• GEN-PRO-8 Inadequate Test Evaluations
• GEN-PRO-9 Inadequate Test Maintenance

2.1.5.1 GEN-PRO-1 Testing and Engineering Process Not Integrated
Description: The testing process is not adequately integrated into the overall system/software engineering process.
Potential Symptoms:
• There is little or no discussion of testing in the system/software engineering documentation: System Engineering Master Plan (SEMP), Software Development Plan (SDP), Work Breakdown Structure (WBS), Project Master Schedule (PMS), and system/software development cycle (SDC).
• All or most of the testing is being done as a completely independent activity performed by staff members who are not part of the project engineering team.
• Testing is treated as a separate specialty-engineering activity with only limited interfaces with the primary engineering activities.
• Testers are not included in the requirements teams, architecture teams, and any cross-functional engineering teams.
Potential Consequences:
• There is inadequate communication between testers and other system/software engineers (e.g., requirements engineers, architects, designers, and implementers).
• Few testing outsiders understand the scope, complexity, and importance of testing.
• Testers do not understand the work being performed by other engineers.
• There are incompatibilities between outputs and associated inputs at the interfaces between testers and other engineers.
• Testing is less effective and takes longer than necessary.
Potential Causes:
• Testers are not involved in the determination and documentation of the overall engineering process.
• The people determining and documenting the overall engineering process do not have significant testing expertise, training, or experience.
Recommendations:
• Prepare: Obtain the SEMP, SDP, WBS, and project master schedule.
• Enable: Provide a top-level briefing/training in testing to the chief system engineer, system architect, and system/software process engineer.
• Perform: Have test subject matter experts and project testers collaborate closely with the project chief engineer/technical lead and process engineer when they develop the engineering process descriptions and associated process documents. In addition to being in test plans such as the Test and Evaluation Master Plan (TEMP) or Software Test Plan (STP) as well as in other process documents, provide high-level overviews of testing in the SEMP(s) and SDP(s). Document how testing is integrated into the system/software development/life cycle, regardless of whether it is traditional waterfall, agile (iterative, incremental, and parallel), or anything in between. For example, document handover points in the development cycle when testing input and output work products are delivered from one project organization or group to another. Incorporate testing into the Project Master Schedule. Incorporate testing into the project's work breakdown structure (WBS).
• Verify: Verify that testing is incorporated into the project's system/software engineering process, SEMP and SDP, WBS, PMS, and SDC.
Related Problems: GEN-COM-4 Inadequate Communication Concerning Testing

2.1.5.2 GEN-PRO-2 One-Size-Fits-All Testing
Description: All testing is to be performed to the same level of rigor, regardless of its criticality.
Potential Symptoms:
• The test planning documents may contain only generic boilerplate rather than appropriate system-specific information.
• Mission-, safety-, and security-critical software may not be required to be tested more completely and rigorously than other, less-critical software.
• Only general techniques suitable for testing functional requirements/behavior may be documented; for example, there is no description of the special types of testing needed for quality requirements (e.g., availability, capacity, performance, reliability, robustness, safety, security, and usability requirements).
Potential Consequences:
• Mission-, safety-, and security-critical software may not be adequately tested.
• When there are insufficient resources to adequately test all of the software, some of these limited resources may be misapplied to lower-priority software instead of being concentrated on the testing of more critical capabilities.
• Some defects may not be found, and an unnecessary number of these defects may make it through testing and into the deployed system.
• The system may not be sufficiently safe or secure.
Potential Causes:
• Test plan templates and content/format standards may be incomplete and may not address the impact of mission/safety/security criticality on testing.
• Test engineers may not be familiar with the impact of safety and security on testing (e.g., the higher level of testing rigor required to achieve accreditation and certification).
• Safety and security engineers may not have input into the test planning process.
Recommendations:
• Prepare: Provide training to those writing system/software development plans and system/software test plans concerning the need to include project-specific testing information, including potential content. Tailor the templates for test plans and development methods to address the need for project/system-specific information.
• Enable: Update (if needed) the templates for test plans and development methods to address the type, completeness, and rigor of testing.
• Perform: Address the following in the system/software test plans and system/software development plans: differences in testing types, degrees of completeness, rigor, etc., as a function of mission/safety/security criticality; and specialty engineering testing methods and techniques for testing the quality requirements (e.g., penetration testing for security requirements). Test mission-, safety-, and security-critical software more completely and rigorously than other, less-critical software.
• Verify: Verify that the completeness, type, and rigor of testing: are addressed in the system/software development plans and system/software test plans; are a function of the criticality of the system/subsystem/software being tested; and are sufficient based on the degree of criticality of the system/subsystem/software being tested.
Related Problems: GEN-PRO-3 Inadequate Test Prioritization

2.1.5.3 GEN-PRO-3 Inadequate Test Prioritization
Description: Testing is not being adequately prioritized.
Potential Symptoms:
• All types of testing may have the same priority.
• All test cases for the system or one of its subsystems may have the same priority.
• The most important tests of a given type may not be performed first.
• Testing may begin with the easy testing of "low-hanging fruit".
• Difficult testing or the testing of high-risk functionality/components may be postponed until late in the schedule.
• Testing ignores the order of integration and delivery; for example, unit testing before integration testing before system testing, and the testing of the functionality of the current increment before the testing of future increments. [27]
Potential Consequences:
• Limited testing resources may be wasted or ineffectively used.
• Some of the most critical defects (in terms of failure consequences) may not be discovered until after the system/software is delivered and placed into operation.
• Specifically, defects with mission, safety, and security ramifications may not be found.
Potential Causes:
• The system/software test plans and the testing parts of the system/software development plans do not address the prioritization of testing.
• Any prioritization of testing is not used to schedule testing.
• Evaluations of the individual testers and test teams are based [totally] on the number of tests performed per unit time and ignore the importance of the capabilities, the subsystems, or the defects found.
Recommendations:
• Prepare: Update the following documents to address the prioritization of testing: the system/software test plans and the testing parts of the system/software development plans. Define the different types and levels/categories of criticality.
• Enable: Perform a mission analysis to determine the mission-criticality of the different capabilities and subsystems. Perform a safety (hazard) analysis to determine the safety-criticality of the different capabilities and subsystems. Perform a security (threat) analysis to determine the security-criticality of the different capabilities and subsystems.
• Perform: Work with the developers, management, and stakeholders to prioritize testing according to the: criticality (e.g., mission, safety, and security) of the system/subsystem/software being tested; potential importance of the potential defects identified via test failure; probability that the test is likely to elicit important failures; potential level of risk incurred if the defects are not identified via test failure; delivery schedules; and integration/dependency order. Use the prioritization of testing to schedule testing so that the highest-priority tests are performed first. Collect test metrics based on the number and importance of the defects found. Base the performance evaluations of the individual testers and test teams on test effectiveness (e.g., the number and importance of defects found) rather than merely on the number of tests written and performed.
• Verify: Evaluate the system/software test plans and the testing parts of the system/software development plans to verify that they properly address test prioritization. Verify that mission, safety, and security analyses have been performed and that the results are used to prioritize testing. Verify that testing is properly prioritized. Verify that testing is in fact being performed in accordance with the prioritization. Verify that testing metrics address test prioritization.
[27] While the actual testing of future capabilities must wait until those capabilities are delivered to the testers, one can begin to develop black-box test cases based on requirements allocated to future builds (i.e., tests that are currently not needed and may never be needed if the associated requirements change or are deleted).
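The prioritization factors listed under Perform above (criticality, defect importance, failure probability, residual risk) can be combined into a simple weighted scoring scheme that yields a test execution order. The following sketch is purely illustrative: the factor names, 1-5 scales, and weights are assumptions for demonstration, not part of this document's guidance, and a real project would calibrate them with its stakeholders.

```python
# Illustrative risk-based test prioritization (hypothetical scales/weights).
# Each factor is scored 1 (low) to 5 (high) by the test team.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    criticality: int          # mission/safety/security criticality of item under test
    defect_importance: int    # importance of the defects this test could reveal
    failure_probability: int  # likelihood the test elicits an important failure
    risk_if_missed: int       # risk incurred if the defect escapes to operation

    def priority(self) -> int:
        # Simple weighted sum; the weights here are assumed, not prescribed.
        return (3 * self.criticality
                + 2 * self.defect_importance
                + 2 * self.failure_probability
                + 3 * self.risk_if_missed)

tests = [
    TestCase("login_lockout", criticality=5, defect_importance=4,
             failure_probability=3, risk_if_missed=5),
    TestCase("report_font_size", criticality=1, defect_importance=1,
             failure_probability=2, risk_if_missed=1),
]

# Schedule the highest-priority tests first, per GEN-PRO-3.
for tc in sorted(tests, key=TestCase.priority, reverse=True):
    print(tc.name, tc.priority())
```

Sorting by the score makes the prioritization directly usable for scheduling, which addresses the cause "any prioritization of testing is not used to schedule testing."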
Verify that performance evaluations are based on test effectiveness rather than merely on the number of tests written and performed.
Related Problems: GEN-PRO-2 One-Size-Fits-All Testing

2.1.5.4 GEN-PRO-4 Functionality Testing Overemphasized
Description: There is an overemphasis on testing functionality as opposed to quality characteristics, data, and interfaces.
Potential Symptoms:
• The vast majority of testing may be concerned with verifying functional behavior.
• Little testing may be performed to verify adequate levels of the quality characteristics (e.g., availability, reliability, robustness, safety, security, and usability).
• Inadequate levels of various quality characteristics and their attributes are only being recognized after the system has been delivered and placed into operation.
Potential Consequences:
• The system may not have adequate levels of important quality characteristics and thereby fail to meet all of its quality requirements.
• Failures to meet data and interface requirements (e.g., due to a lack of verification of input data and message contents) may not be recognized until late during integration or after delivery.
• Testers and developers may have a harder time localizing the defects that the system tests reveal.
• The system or software may be delivered late and fail to meet an unacceptably large number of non-functional requirements.
Potential Causes:
• The test plans and process documents do not adequately address the testing of non-functional requirements.
• There are no process requirements (e.g., in the development contract) mandating the specialized testing of non-functional requirements.
• Managers, developers, and/or testers believe that: testing other types of requirements (i.e., data, interface, quality, and architecture/design/implementation/configuration constraints) is too hard; testing the non-functional requirements will take too long [28]; the non-functional requirements are not as important as the functional requirements; and testing of the non-functional requirements will naturally occur as a byproduct of the testing of the functional requirements. [29]
• The other types of requirements (especially quality requirements) are poorly specified (e.g., "The system shall be secure." or "The system shall be easy to use.") or are not specified, and are therefore not testable.
• Functional testing may be the only testing mandated by the development contract, and therefore the testing of the non-functional requirements is out of scope or unimportant to the acquisition organization.
Recommendations:
[28] Note that adequately testing quality requirements requires significantly more time to prepare for and perform than testing typical functional requirements.
[29] Note that this can be largely true for some of the non-functional requirements (e.g., interface requirements and performance requirements).
• Prepare: Adequately address the testing of non-functional requirements in the test plans and process documents. Include process requirements mandating the specialized testing of non-functional requirements in the contract.
• Enable: Ensure that managers, developers, and/or testers understand the importance of testing non-functional requirements as well as conformance to the architecture and design (e.g., via white-box testing).
• Perform: Adequately perform the other types of testing.
• Verify: Verify that the managers, developers, and/or testers understand the importance of testing non-functional requirements and conformance to the architecture and design. Have quality engineers verify that the testers are testing the quality, data, and interface requirements as well as the architecture/design/implementation/configuration constraints. Review the test plans and process documents to ensure that they adequately address the testing of non-functional behavior. Measure, analyze, and report the types of non-functional defects and when they are being detected.
Related Problems: None

2.1.5.5 GEN-PRO-5 Black-box System Testing Overemphasized
Description: There is an overemphasis on black-box system testing for requirements conformance.
Potential Symptoms:
• The vast majority of testing is occurring at the system level for purposes of verifying conformance to requirements.
• There is very little white-box unit and integration testing.
• System testing is detecting many defects that could have been more easily identified during unit or integration testing.
• Similar residual defects may also be causing faults and failures after the system has been delivered and placed into operation.
Potential Consequences:
• Defects that could have been found during unit or integration testing are harder to detect, localize, analyze, and fix.
• System testing is unlikely to be completed on schedule.
• It is harder to develop sufficient system-level tests to meet code-coverage criteria.
• The system or software may be delivered late with an unacceptably large number of residual defects that will only rarely be executed and thereby cause faults or failures.
Potential Causes:
• The test plans and process documents do not adequately address unit and integration testing.
• There are no process requirements (e.g., in the development contract) mandating unit and integration testing.
• The developers believe that black-box system testing is all that is necessary to detect the defects.
• Developers believe that testing is totally the responsibility of the independent test team, which is only planning on performing system-level testing.
• The schedule does not contain adequate time for unit and integration testing. Note that this may really be an underemphasis on unit and integration testing rather than an overemphasis on system testing.
• Independent testers rather than developers are performing the testing.
Recommendations:
• Prepare: Adequately address white-box and gray-box testing as well as unit and integration testing in the test plans, test process documents, and contract.
• Enable: Ensure that managers, developers, and/or testers understand the importance of these lower-level types of testing. Use a test plan template or content and format standard that addresses these lower-level types of testing.
• Perform: Increase the amount and effectiveness of these lower-level types of testing.
• Verify: Review the test plans and process documents to ensure that they adequately address these lower-level types of tests. Verify that the managers, developers, and/or testers understand the importance of these lower-level types of testing. Have quality engineers verify that the testers are actually performing these lower-level types of testing and at an appropriate percentage of total tests.
Measure the number of defects slipping past unit and integration testing.
Related Problems: GEN-PRO-6 White-box Unit and Integration Testing Overemphasized
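As a small illustration of the lower-level testing that GEN-PRO-5 recommends, the sketch below shows a white-box unit test that pins a boundary defect directly to one function; in black-box system testing, the same defect would surface only as a wrong end-to-end result that is much harder to localize. The function, its tiering rules, and the test names are hypothetical examples, not drawn from this document.

```python
# Hypothetical unit under test: compute a shipping surcharge tier.
# A boundary defect here (e.g., ">" written instead of ">=") is trivial to
# localize with a unit test, but would appear in system testing only as a
# wrong invoice total several components downstream.
import unittest

def surcharge_tier(weight_kg: float) -> int:
    """Return surcharge tier: 0 below 10 kg, 1 for 10 to <20 kg, 2 at 20 kg and above."""
    if weight_kg >= 20:
        return 2
    if weight_kg >= 10:
        return 1
    return 0

class SurchargeTierBoundaries(unittest.TestCase):
    # White-box boundary-value tests chosen from the branch structure above.
    def test_boundaries(self):
        self.assertEqual(surcharge_tier(9.99), 0)
        self.assertEqual(surcharge_tier(10.0), 1)
        self.assertEqual(surcharge_tier(19.99), 1)
        self.assertEqual(surcharge_tier(20.0), 2)

# Run with: python -m unittest <module_name>
```

Because the test cases are derived from the code's own branch structure, they also contribute directly to the code-coverage criteria that, as noted above, are hard to satisfy from the system level alone.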