
Test Strategy Document Template

This SlideShare presentation is a partial preview of the full business document. To view and download the full document, please go here:
https://flevy.com/browse/business-document/test-strategy-document-template-3787

DOCUMENT DESCRIPTION

This document is a comprehensive test strategy template. Its 52 or so pages not only provide a structure for the required sections but also supply sample content for each section, extracted from a normalised client example.

Sections covered include requirements, test approach, test types, agile testing, procedures and management, activities, and resources. Each section has comprehensive subsections that detail the various aspects of establishing a test strategy for a project, a programme, or indeed an organisation.


Test Strategy Document Template

  1. R. Bradley Consulting Testing Strategy Document – Guidance / Template. Project XXX. Revision: 0.1 (Draft). Published: date
  2. Table of Contents (extract)
     3.6.5 Responsibility
     3.6.6 Entry Criteria
     3.6.7 Exit Criteria
     3.7 User Acceptance Testing
        3.7.1 Test Objectives
        3.7.2 Test Environment
        3.7.3 Subject of Test
        3.7.4 Test Input
        3.7.5 Responsibility
        3.7.6 Entry Criteria
        3.7.7 Exit Criteria
     3.8 Operational Acceptance Testing
        3.8.1 Test Objectives
        3.8.2 Test Environment
        3.8.3 Subject of Test
        3.8.4 Test Input
        3.8.5 Responsibility
     4 Test Types
        4.1 Regression Testing
           4.1.1 Test Objectives
           4.1.2 Test Environment
           4.1.3 Subject of Test
           4.1.4 Test Input
           4.1.5 Entry Criteria
           4.1.6 Exit Criteria
        4.2 Performance Testing
           4.2.1 Test Objectives
           4.2.2 Performance Testing Environment
        4.3 Security Testing
           4.3.1 Test Objectives
           4.3.2 Security Testing Environment
        4.4 Accessibility Testing
           4.4.1 Test Objectives
           4.4.2 Accessibility Environment
        4.5 Usability Testing
           4.5.1 Test Objectives
           4.5.2 Usability Environment
        4.6 Disaster Recovery Testing
           4.6.1 Test Objectives
           4.6.2 Disaster Recovery Environment
     5 Agile testing
        5.1 Approach to Testing
        5.2 Sprint testing
        5.3 Factory Acceptance testing
        5.4 Site Acceptance Testing
        5.5 User Acceptance Testing
     6 Testing Procedures & Test Management
        6.1 Testing Management
        6.2 Test Monitoring and Control
           6.2.1 Progress Reporting
  3. 1 Introduction
     1.1 Background
     <Describe the background to the project that the test strategy / approach refers to>
     1.2 Objectives
     <Describe the objectives for the test strategy. Sample content: This document provides a high-level view of the testing approach being taken for this project. Full details of the testing carried out within each Test Phase will be provided in individual Test Plans. The objectives of this Test Strategy document are to ensure that:
     • The Infrastructure and the Applications solution are each tested;
     • All phases (Elaboration, Construction and Transition) of the project are included appropriately in the testing;
     • Appropriate testing is performed throughout the project lifecycle, which in turn underpins the quality and reliability of the xxxxxxx solution;
     • Testing can be carried out consistently, with minimal duplication of effort, by defining what each test phase delivers;
     • Testing can be effectively managed and controlled;
     • Dependencies are recognised and addressed;
     • Responsibilities are set, understood and agreed by all parties who have a responsibility for testing;
     • The test team members understand their roles and the tasks to be undertaken during the testing;
     • The appropriate deliverables to be produced are defined, e.g. individual Test Plans for System Testing, System Integration Testing and User Acceptance Testing;
     • Testing artefacts to be produced in the Elaboration and Construction phases are defined;
     • Test activities are as comprehensive as is practical, with no gaps or omissions in the Test Plans.
     DIAG. 1: Testing phases
  4. Time and Delivery
     • Any delays in the delivery of hardware, software, documentation (including sign-off), code or data will have an impact on the Project's ability to meet Project milestones.
     Quality
     • All phases of application development are to be in accordance with the supplier's standard development methodology <state methodology>; hence the quality method is that associated with the development control approach.>
  5. 2 Requirements to Test
     <Before considering the test strategy (i.e. how we are to test the application), some consideration must first be given to what is to be tested (i.e. the requirements for testing). The following sections consider the functional requirements that will be input to the testing process. This section identifies the 'test inputs' on which the test cases etc. will be based. These are the functional requirements as defined and agreed within the xxxxxxxxxx Project Vision document, and further detailed in the Use Cases or user stories, depending on the requirements capture method being employed.>
     2.1 Functional Requirements
     2.1.1 In Scope
     Sample content: The following Use Cases will be in each release and Development Cycle:
     Release 0
     Use Case                              Cycle
     UC01  Create or Change Content Item   1,2,3
     UC03  Approve Content Item            2,3
     UC13  View and Manage Tasks           2,3
     UC14  Run a Report                    2,3
     UC16  Monitor Item Dates              3
     UC22  Search for Content Items        3
     UC23  Import Controlled Vocabulary    2
     UC24  Manage Content Architecture     1
     UC25  Manage Dictionary               2
     UC39  Manage Digital Asset Bank       3
     UC40  Manage Users                    3
     UC43  Manage Picklists                1
     UC68  Manage Login                    3
     UC71  Modify Profile Details          3
     2.2 Non-Functional Requirements
  6. For the Milestone Plan for Development and Infrastructure Implementation and Test, see Appendix A.
     3.2 Testing of the Releases
     There are xx development cycles planned. The software builds from all development cycles will be deployed into the System Test environment for system testing. The testing lifecycle will follow a traditional method, that is:
     1. The Development team will finish their testing and deploy into the System Test environment for the Test Team to run System Test.
     2. The release will then be deployed to Integration Test in xxxxxx, to allow the Test Team to perform Integration testing.
     3. It will then be deployed into the pre-production environment, where UAT, Performance testing and any other outstanding test phases (e.g. OAT) will be run. Once all testing has completed, it will be deployed into the Production environment.
     The movement between the Test Phases will be controlled by a standard Quality Gate process. Throughout development the users will be able to view the functionality being developed. This is intended to be 'ad hoc', as the needs of the users and the development team allow. At the end of each iteration there will be a formal hands-on business review where the functionality will be demonstrated to the wider stakeholder groups.
     As soon as a release becomes available, testing will commence, consisting of:
     • Execution of the Test Cases against the functionality specified in the Use Cases;
     • Retesting, including high- and medium-risk test cases from the base build and previous iterative releases.
     3.3 Unit Testing
     3.3.1 Test Objectives
     Unit testing has two main objectives:
     • Verify the implementation of the design for one software element;
     • Ensure that the program logic is complete and correct and that the unit works as designed in the program specification.
     Unit test planning and execution will be completed by the development team, who will focus on verifying that the components adhere to the design specifications. For the xxxxxxxxxx Programme the development team build unit tests into the solution. These tests cover functionality and build issues. These tests are intended for
  7. 3.5 System Testing
     3.5.1 Test Objectives
     The system will then be System Tested against all of the Use Cases and other artefacts referenced in a previous section, to prove that the functionality of the new system works as specified. The aim is to simulate business processes and cycles through the overall system by 'black box' testing, as opposed to exercising code paths through individual units, which is covered by unit testing. System testing is usually the most extensive testing process in terms of depth and coverage.
     System Test Cases will be prioritised using a risk rating of H, M or L (see Sections 5.5.5 and 5.5.6) in order to ensure that the most critical tests are run first and most frequently (e.g. in regression tests); a sketch of this ordering follows this slide. The depth of testing will be defined in the test plan. The test coverage will include:
     • Functional tests, in particular for those areas which cover customisations or new web parts and/or are perceived to be medium to high risk;
     • Low-risk tests, which will be included if time allows;
     • A test of process paths through the system, testing both valid and invalid data combinations, using business process scenarios where possible;
     • A representative sample of business data (where possible);
     • The validation checks of screens and processes where the validation spans more than one screen or process.
     3.5.2 Test Environment
     System Testing will be completed on a System Test environment. This environment will be hosted on the development network. The environment will be subject to Configuration Management as described in Section 5.6.
     3.5.3 Subject of Test
     The item being tested is a software release or a configuration change of xxxxxxxxxx under configuration control, which has been delivered.
     3.5.4 Test Input
     • Use Cases
     • The Use Case Model and end-to-end Business Process document
     • The Wireframes (for functional visibility)
     3.5.5 Responsibility
     The system test team will carry out System Testing.
     3.5.6 Entry Criteria
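To illustrate the risk-based ordering described above, the following is a minimal sketch of sorting test cases tagged H, M or L so that high-risk cases execute first. The TestCase structure and sample data are illustrative assumptions, not part of the original template.

    # Minimal sketch: ordering test cases by H/M/L risk rating so the most
    # critical tests run first. TestCase and the sample data are illustrative
    # assumptions, not part of the template.
    from typing import NamedTuple

    class TestCase(NamedTuple):
        case_id: str
        risk: str  # 'H', 'M' or 'L'

    RISK_ORDER = {"H": 0, "M": 1, "L": 2}

    def prioritise(cases: list[TestCase]) -> list[TestCase]:
        # sorted() is stable, so the original order is preserved within a band.
        return sorted(cases, key=lambda c: RISK_ORDER[c.risk])

    backlog = [TestCase("TC-03", "L"), TestCase("TC-01", "H"), TestCase("TC-02", "M")]
    print(prioritise(backlog))  # TC-01 (H) first, then TC-02 (M), then TC-03 (L)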
  8. • Plan the tests so that the input data, the processing and the outputs are fully understood and documented.
     It is not the objective of UAT to prove that a deliverable is totally free of failures or test incidents; completion is judged in accordance with the exit criteria.
     3.7.2 Test Environment
     UAT will take place in Pre-Production.
     3.7.3 Subject of Test
     The item being tested is a software release of the xxxxxxxxxx application under configuration control, which has been delivered, accompanied by a release note.
     3.7.4 Test Input
     UAT test scripts based on business scenarios built around the functional requirements.
     3.7.5 Responsibility
     The MSE test team will plan and carry out UAT.
     3.7.6 Entry Criteria
     • System Testing and Integration Testing completed, meeting their exit criteria;
     • Training and familiarisation of UAT testers with the xxxxxxxxxx application completed, in order to avoid UAT becoming a training exercise;
     • UAT test cases planned and documented by UAT testers, and reviewed and signed off by the <the developer> Test Manager.
     3.7.7 Exit Criteria
     • Testing against agreed requirements and acceptance criteria successfully completed;
     • Severities on outstanding test incidents reviewed and agreed between MSE and <the developer>;
     • A list of outstanding test incidents, agreed with user representatives, which can remain in the initial version of the implemented system (principally expected to be 'Low' or below);
     • 100% of high-risk tests completed, 50% of medium-risk tests completed;
     • Low-risk tests completed in the remaining time frame or deferred to a later phase;
     • Zero Severity 1 ('Critical') or Severity 2 ('High') test incidents outstanding;
     • Up to 5 Severity 3 ('Medium') test incidents outstanding;
     • Up to 10 Severity 4 ('Low') test incidents outstanding;
     • Up to 20 Severity 5 ('Cosmetic') test incidents outstanding;
     • No outstanding test incidents with 'High' business impact.
     A sketch of how the severity thresholds above might be checked follows this slide.
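To make the severity thresholds in the exit criteria measurable, here is a minimal sketch assuming a simple mapping from severity level to the count of incidents still open. The thresholds mirror the bullet list above; the function name and data shape are illustrative assumptions.

    # Minimal sketch: checking outstanding incidents against the UAT exit
    # criteria. The data shape (severity level -> open count) and function
    # name are illustrative assumptions; the limits come from the criteria above.
    MAX_OUTSTANDING = {1: 0, 2: 0, 3: 5, 4: 10, 5: 20}  # severity -> max open

    def exit_criteria_met(open_incidents: dict[int, int]) -> bool:
        # True only if every severity level is within its agreed threshold.
        return all(open_incidents.get(sev, 0) <= limit
                   for sev, limit in MAX_OUTSTANDING.items())

    print(exit_criteria_met({2: 1, 3: 4}))   # False: a Severity 2 is still open
    print(exit_criteria_met({3: 5, 4: 10}))  # True: all counts within limits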
  9. 4.2.1 Test Objectives
     Any performance testing required will be executed in line with the Performance Testing Strategy and can encompass a number of different types of performance test, as described below. Performance testing will take place on the Pre-Production environment as required by MSE. The toolset to be used is still the subject of discussion and agreement.
     Detailed management arrangements for performance testing need to be defined and agreed with MSE; these will be detailed in the Performance Test Plan. A Workload Analysis Document / performance test requirements specification is required to be defined and agreed to enable the tests to be developed.
     Load tests will subject the application to varying workloads to measure and evaluate the performance behaviours and the ability of the system to continue to function properly under different specified workloads. These tests will include using a mix of transactions, all running simultaneously, with a specified number of virtual users running the mix of transactions – the mix to be representative of the production target transaction mix.
     A specific load test may be developed to prove the RPS (Requests Per Second). This test will be calculated by taking a number of users and functions and calculating what transactions they need to perform to represent an RPS of 69; this then provides a benchmark for performance testing. A sketch of this calculation follows this slide.
     Where performance testing identifies a performance issue in an 'external' subsystem which is outside the scope of the project, this will be raised with <client name> / the support team, or whoever is appropriate. Performance testing will be based on utilising representative data volumes supplied by MSE.
     4.2.2 Performance Testing Environment
     Performance testing must be completed on Pre-Production, which is a replica of the proposed production environment and will therefore give realistic results (see Section 8).
     4.3 Security Testing
     4.3.1 Test Objectives
     The objective of Security Testing will be to verify that user profile roles and responsibilities are correctly implemented to control access to areas of the application. It will also test against allowing anonymous access to the Portal. This will be defined in the Security requirements currently being defined by the Security Team.
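As a worked illustration of the RPS benchmark arithmetic, here is a minimal sketch. Only the 69 RPS target comes from the text above; the virtual-user count and requests-per-transaction figure are hypothetical inputs for the sake of the example.

    # Minimal sketch of the RPS benchmark arithmetic: given a target of
    # 69 requests per second and an assumed number of virtual users, work
    # out the transaction rate (and pacing) each user must sustain.
    # All inputs except the 69 RPS target are hypothetical.
    TARGET_RPS = 69            # from the strategy text above
    VIRTUAL_USERS = 200        # hypothetical virtual-user count
    REQUESTS_PER_TXN = 3       # hypothetical: HTTP requests per business transaction

    # Requests each user must generate per second to hit the target.
    rps_per_user = TARGET_RPS / VIRTUAL_USERS             # 0.345 req/s per user
    txns_per_user_per_sec = rps_per_user / REQUESTS_PER_TXN

    # Equivalently, the pacing (seconds between transactions) per user.
    pacing_seconds = 1 / txns_per_user_per_sec            # ~8.7 s between transactions

    print(f"Each of {VIRTUAL_USERS} users: {rps_per_user:.3f} req/s, "
          f"i.e. one transaction every {pacing_seconds:.1f} s")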
  10. 5 Agile testing
     <As noted earlier, there are probably differences in the selection of testing stages and approaches for projects that are adopting a full Agile style of development. Agile development approaches imply some delta to the traditional approaches listed above, although in truth this is more about how and when to apply the test stages rather than the definition of the test stages themselves. Agile approaches tend to require the flexible agreement of the team members in the decision of the testing strategy to be adopted; thus it is difficult to be specific or prescriptive about what strategically must or should be used.
     The following quadrant framework is often used to guide development teams in agreeing what, how and when they will apply the differing test stages and approaches. The quadrant numbering does NOT imply any order and does not instruct teams to work through the quadrants from 1 to 4 in a waterfall style; the numbering is arbitrary and used as a reference only. It is expected that most projects would start with Q2 tests, because those are the most aligned to the user-story-driven development approach and where specific examples
  11. Sample content:
     5.1 Approach to Testing
     A number of different forms of testing will take place on the xxxxxxxx project. Some will take place within the blended sprint team, while others will focus on independent readiness and acceptance testing and will be owned and executed by xxxxxxxx. We have outlined the different test phases in the various sections below, but it is important to note that each test phase will flow into the next over the delivery lifecycle for each of the xx Iterations, once key acceptance criteria have been met as outlined in the Acceptance Measures.
     5.2 Sprint testing
     Sprint testing for the xxxxxxx project will take place within the feature team during a sprint. It will be made up of two different aspects:
     • Code-level Testing – Unit Testing and Unit Integration Testing;
     • System Testing – to demonstrate that the application meets the requirements specified in the user stories.
     In Inception (Iteration 0) a Test Plan will be produced to cover all aspects of the approach, responsibilities and plans for testing for this project.
     Unit Testing will be performed in order to test that the individual program units meet their technical design specifications. On this project, this would take place during the development of the user stories and be performed predominantly by the developers as part of Test-Driven Development (TDD).
     Unit Integration Testing will be performed after Unit Testing to test that the individual components of the xxxxxxxx application are integrated and tested together as a whole. The intention is to verify that the data content and application control can
  12. 6 Testing Procedures & Test Management
     <This section identifies the manner in which testing will be controlled and managed. It should identify the who, how and what of the test execution processes.>
     Sample content:
     6.1 Testing Management
     Testing will be managed using manual Test Specifications in Microsoft Excel that will incorporate Test Cases/Scripts for each Use Case (all Test Design will be included in these documents). Test Execution will take place using the Scripts created in the Test Specification documents; tracking of Test Execution will be as detailed below. All test incidents will be maintained using a tool such as SourceForge Defect Tracker and will reference the relevant Test Script. When Test Cases/Scripts have been executed, the results will be tracked/recorded on a spreadsheet maintained by the test team, showing whether Cases/Scripts have passed or failed, when the script was last run, and reference details of the build and Use Case.
     6.2 Test Monitoring and Control
     6.2.1 Progress Reporting
     The primary means of test progress reporting (illustrated in the sketch after this slide) will be:
     • A measure of executed test cases against those planned;
     • A measure of test incidents outstanding against the exit criteria;
     • A measure of test cases passed against test cases failed.
     Progress will be reviewed at least once a week, but at critical points this will be more frequent (e.g. daily). A Test Progress Report will be completed each Monday (for the previous week) and sent via the PMO to the xxxxxxxxxx Platform Delivery Project Manager, stream leads and relevant members of the xxxxxxxxxx Programme Team as appropriate. It consists of:
     • Achievements this week;
     • Planned activities next week;
     • Progress against milestones;
     • Risks & Issues;
     • Other testing stats as agreed with the xxxxxxxxxx Platform Delivery Project Manager.
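To illustrate the three progress measures named above, here is a minimal sketch assuming simple counters for planned, executed, passed and failed cases. The class name, fields and sample figures are illustrative assumptions, not from the template.

    # Minimal sketch of the three progress measures named above. The
    # dataclass fields and sample figures are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class TestProgress:
        planned: int         # test cases planned for the phase
        executed: int        # test cases executed so far
        passed: int          # executed cases that passed
        failed: int          # executed cases that failed
        open_incidents: int  # test incidents still outstanding

        def executed_pct(self) -> float:
            # Executed test cases as a percentage of those planned.
            return 100 * self.executed / self.planned if self.planned else 0.0

        def pass_rate(self) -> float:
            # Passed cases as a percentage of those executed.
            return 100 * self.passed / self.executed if self.executed else 0.0

    week = TestProgress(planned=120, executed=90, passed=80, failed=10, open_incidents=7)
    print(f"Executed: {week.executed_pct():.0f}% of plan, "
          f"pass rate {week.pass_rate():.0f}%, "
          f"{week.open_incidents} incidents outstanding")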
  13. This section outlines a common procedure that can be followed, or used as a starting point, when a defect is found during testing.
     6.5.1 Roles and Responsibilities
     Tester: A tester is anyone who detects a defect in the Application Under Test. The tester is responsible for describing the new defect as accurately and completely as possible, and for making an initial assessment of severity and priority.
     Defect Manager (Test Manager): The Defect Manager is responsible for managing the handover of defects to the fixing resource for resolution, and for managing the process of verifying the resolution once the fixing resource has completed the work. He/she will also chair the Defect Steering Group.
     Defect Steering Group: The Defect Steering Group will be responsible for deciding whether an incident is a defect and for assigning severities and priorities to defects. Depending on the number of defects raised, the Group should initially meet daily; this may become less frequent as the level of defects becomes clear.
     Business Analyst / Business Lead: Business Analysts / Business Leads will form part of the Defect Steering Group.
     Defect Fixer: Defect Fixers are responsible to the Defect Manager / Project Manager for making code / configuration corrections and performing appropriate Unit / Link tests to show the defect has been removed.
     Release Manager: The Release Manager is responsible for transferring corrected defects to the required test environment as part of a scheduled release.
     6.5.2 Test Incident Lifecycle
     The diagram below shows the various states a defect may pass through, from when it is initially detected until a correction is applied to the required test environment. Each state corresponds to a Defect Status in the defect management tool.
  14. Severity levels:
     • 1 – Critical: A severe error which prevents any testing or usage of the system. For example, the SQL Server is down, killing all application system services and thus preventing access to the system.
     • 2 – High (Major): A major error which prevents the testing or usage of a significant part of the system. For example, the search engine is down, preventing access to the search functionality.
     • 3 – Medium (Average): A significant error, but a reasonable workaround is possible and testing can continue.
     • 4 – Low (Minor): A minor error which does not seriously impact functionality; testing can continue.
     • 5 – Cosmetic (Enhancement): A cosmetic error or documentation problem, with little or no impact on testing or usage.
     6.5.6 Incident Priority
     Priority indicates the urgency with which the fix is required:
     • 1 – Resolve Immediately: All testing has stopped.
     • 2 – Give High Attention: Progress is being delayed substantially.
     • 3 – Normal Queue: Significant inconvenience to the project.
     • 4 – Low: Needs to be done, but may be deferred to a later release.
     6.5.7 Repositories
     By definition, the management of defects requires a repository to store the details of individual defects. The following fields must be available in the defect management tool (a sketch of such a record follows this slide):
     • Defect Number – System generated
     • Status – Defect status, i.e. Open, Assigned, Fixed etc.
     • Headline – Brief description
     • Project – xxxxxxxxxx + Workstream Upgrade
     • Severity – Defect severity, 1 to 5
     • Priority – Defect priority, 1 to 4
     • Raised By – Name of person raising the defect
     • Description – Description of the defect
     • Owner – Who is currently responsible for this defect
     • Date Raised – System generated
     • Date Closed – Date closed
     • Attachments – Screenshots of the problem
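As an illustration of the repository fields listed above, here is a minimal sketch of a defect record as it might be modelled in code. The class name, defaults and status values are illustrative assumptions, not the schema of any particular defect tracking tool.

    # Minimal sketch of a defect record carrying the mandatory fields
    # listed above. Class name, defaults and status values are illustrative
    # assumptions, not the schema of any particular defect tool.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class Defect:
        defect_number: int           # system generated
        headline: str                # brief description
        project: str                 # e.g. programme + workstream
        severity: int                # 1 (Critical) .. 5 (Cosmetic)
        priority: int                # 1 (Resolve Immediately) .. 4 (Low)
        raised_by: str               # name of person raising the defect
        description: str             # full description of the defect
        owner: str                   # who is currently responsible
        status: str = "Open"         # Open, Assigned, Fixed, Closed, ...
        date_raised: date = field(default_factory=date.today)
        date_closed: Optional[date] = None
        attachments: list = field(default_factory=list)  # e.g. screenshot paths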
  15. The environment owners reserve the right to refuse configuration changes to their environments if the necessary release notes are not available or do not contain sufficient information. A configuration management tool will be used. The Configuration Management and Release Manager will produce a separate Configuration Management Plan detailing the Configuration Management processes being used for this Project.
     6.6.1 Change Control Process
     A change control process will be defined to determine:
     • Whether the change is acceptable;
     • The risk associated with the change;
     • Which environments will be affected by the change (i.e. whether the change will go through all environments).
     Any change after Continuous Build will be subject to a <the developer> review and/or the MSE Change Control Board.
     6.6.2 Software Releases into Test Environments
     Software releases into the controlled environments for System Testing, System Integration Testing, UAT and Performance Testing will only be made with the agreement of the environment owner (see Section 9.3). The release will be maintained under configuration control and accompanied by a Release Note which identifies:
     • Software build name and number;
     • Functions / use cases in the release;
     • Test incidents corrected in the release;
     • Notes.
  16. …instances it may be appropriate to identify generic test data (e.g. 'enter a date within the next 7 working days', or 'select any valid ticket').
     7.4 Test Execution
     The purpose of the Execution stage is to perform the activities, as guided by the test schedule and test cases, to verify that the application and infrastructure perform as expected. Each day of testing will progress in the following manner:
     1. Each Tester will be assigned a number of test cases to execute.
     2. Testers will follow the test cases, recording in the test log the variances between the actual results and the expected results.
     3. Testers will document the execution of each step in the test case by describing variances between actual and expected results, storing the test logs in the test management tool. In this way the Testers will provide the Test Manager with a status on the following:
        • Number of test cases/scripts executed/re-executed;
        • Number of test cases/scripts passed/failed/still to be executed;
        • Number of new test incidents, with resolution detail;
        • Any other issues.
     4. The Test Manager will review the test incidents raised during testing and make an initial diagnosis to ensure data completeness, clear identification of the problem, and uniqueness of the issue.
     5. The Test Manager will assign the reported test incident to the appropriate team lead for investigation and potential resolution if necessary.
     7.5 Test Completion
     When each test execution phase has completed, the Test Manager will produce a Completion Report, which contains:
     • A summary of the execution phase;
     • Any deviations from the Test Plan;
     • Entry & Exit Criteria;
     • Lost time and lessons learnt;
     • Deliverables and test incidents.
     7.6 Testing Deliverables
     The following are the test deliverables / documentation that will be produced during testing.
     7.6.1 Testing Planning Deliverables
  17. Test Manager (Will Tully):
     • Overall responsibility for testing on the xxxxxxxxxx Programme;
     • Direct the testing on a day-to-day basis to ensure that it is focused on delivering its objectives;
     • Plan and manage the test team on a day-to-day basis;
     • Ensure that test plans are adequate for the test team to complete the testing activities within the necessary timescales and costs;
     • Determine the resource needs and manage them;
     • Assess, monitor and proactively manage issues and risks throughout the testing;
     • Escalate issues & risks affecting progress to the Project Manager;
     • Create the Test Strategy and Test Plan;
     • Review test design documentation;
     • Define Entry & Exit Criteria into and out of all phases of testing;
     • Review and track any incidents during the testing if necessary;
     • Organise daily test incident activities;
     • Manage the System Test Team (preparation & execution);
     • Create or review the Test Completion Reports;
     • Report progress to the Project Manager;
     • Create the Test Completion Report for the System Test;
     • Review test scripts and data for the System Test.
     Test Analysts (x2):
     • Hold meetings / liaise with Business Analysts;
     • Create functional and non-functional test scripts for each of the test stages;
     • Execute test scripts for each of the test stages;
     • Record actual test results against expected results;
     • Raise incidents to a high standard;
     • Review failed test scripts;
     • Review and track any test incidents during the testing;
     • Execute retests and regression testing if necessary;
     • Escalate issues affecting progress to the Test Manager;
     • Report progress to the Test Manager;
     • Maintain deliverables from System Test.
     Performance Test Analyst (name TBC):
     • Configure the Performance Test Tool;
     • Create Performance Test scripts;
     • Execute and analyse Performance Tests;
     • Communicate performance test results or issues to the Test Manager.
  18. Glossary
     Deliverable: The output of a task or activity.
     Test Strategy: The objectives and strategy to be used for testing, the procedures and tools to be used, and the reporting procedures.
     Test Plan: A description of the scope of testing for an iteration / phase, the resources required, the activities and the outputs.
     Performance Test Plan: A document identifying the performance and load testing to be performed.
     Test Procedure: A set of instructions for the set-up, execution and evaluation of test cases.
     (Manual) Test Script: A set of inputs, execution conditions and expected results.
     Test Results: A repository of data captured during execution of tests.
     Test Report: A presentation of the test results, an assessment of the tests, and recommendations for future tests.
     Requirements / Specification Test: Review-based testing of the functional and non-functional specifications to verify that they contain all of the information required in order to define the tests and expected results.
     Unit Test: Testing of an individual software component.
     System Test: Testing of the functional requirements of the system under test using black-box techniques.
     System Integration Test: Testing of the interfaces between the system under test and other pre-existing external systems, or testing of the system once it is functionally complete.
     Functional Test: Tests based on the business functionality of the application.
     Non-Functional Test: Tests of those requirements which do not relate to functionality (i.e. the following entries).
     Performance Test: Testing of the application against specified performance and load requirements.
     Volume Test: Testing that the application can handle large volumes of data.
     Stress Test: Testing of the ability of the application to perform beyond the limits of the specified requirements.
     Load Test: Testing of the performance of the application under varying operational conditions, such as number of users, transactions etc.
     Failover/Recovery Testing: Testing the ease with which the application handles and recovers from failure scenarios, such as power failures, comms failures etc.
     Usability Test: Testing focused on the ease with which users can learn and use an application.
     Security Test: Testing that the application meets the specified security requirements.
