Gain a deeper understanding of the strategy and design approaches behind automation frameworks. Warning: one size does not fit all! Call Utopia at (630) 566-4722 to learn more.
11. Separation of Test Definition and Test Execution

Diagram: Software testing and functional subject matter experts work through a Test Definition Interface to produce Test Scenario Input and review Test Scenario Results. Automation experts build and maintain the Test Engine, which is composed of reusable scripts/modules and utility functions.
17. Business Process Framework

Diagram: Test Scenario Files feed the Test Engine, which draws on Business Process Scripts and Utility Functions and produces Test Scenario Results.

Sample test scenario file:
  Login, test_user_01, password_01
  Create_Order, <ord1>, SKU10045, 100, …
  Create_Order, <ord2>, SKU10045, 100, …
  Ship_Order, <ord1>, …
  Ship_Order, <ord2>, …
  Verify_Inventory, SKU10045, …
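Concretely, the engine's core is little more than a dispatch loop over the scenario file. The sketch below is a minimal Python illustration, assuming a comma-delimited file like the one above; the function names, stub bodies, and registry layout are hypothetical, not part of the deck's actual framework.

```python
import csv

# Placeholder business process scripts; real ones would drive the application.
def login(user, password):
    print(f"logging in as {user}")

def create_order(order_handle, sku, qty, *rest):
    print(f"creating order {order_handle}: {qty} x {sku}")

# Registry mapping scenario-file keywords to business process scripts.
BP_REGISTRY = {
    "Login": login,
    "Create_Order": create_order,
}

def run_scenario(path):
    """Read a comma-delimited scenario file and execute each instruction."""
    with open(path, newline="") as f:
        for row in csv.reader(f, skipinitialspace=True):
            if not row or not row[0].strip():
                continue  # skip blank lines
            keyword, *args = row
            BP_REGISTRY[keyword](*args)  # dispatch to the matching BP script
```

Because the scenario file is external data, SMEs can add test cases without touching the engine code.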
36. Distilled Test Cases Mapped to Frameworks

Diagram: The automated test suite groups distilled test cases by type, and each type maps to its own framework: business process test cases to a Business Process Testing Framework, user interface test cases to a User Interface Testing Framework, and input validation test cases to an Input Validation Testing Framework.
40. Multi-Platform Framework Conceptual Design

Diagram: Functional SMEs supply Test Scenario Input through the Test Definition Interface and review Test Scenario Results. Automation engineers maintain the Test Engine, which dispatches execution to either a Desktop Execution Framework or a Hand-held Execution Framework.
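One plausible way to realize this conceptual design in code is a common execution interface with one implementation per platform, with the test engine binding to the right one at run time. The class and method names below are illustrative assumptions; the slide shows only the conceptual boxes.

```python
from abc import ABC, abstractmethod

class ExecutionFramework(ABC):
    """Common interface the test engine codes against."""
    @abstractmethod
    def set_text(self, field: str, value: str): ...
    @abstractmethod
    def click_button(self, name: str): ...

class DesktopFramework(ExecutionFramework):
    def set_text(self, field, value):
        print(f"[desktop] set {field} = {value}")
    def click_button(self, name):
        print(f"[desktop] click {name}")

class HandheldFramework(ExecutionFramework):
    def set_text(self, field, value):
        print(f"[handheld] key {field} = {value}")
    def click_button(self, name):
        print(f"[handheld] tap {name}")

def make_framework(platform: str) -> ExecutionFramework:
    """The test engine selects the execution framework at run time."""
    return {"desktop": DesktopFramework,
            "handheld": HandheldFramework}[platform]()
```

The same test definitions can then run unmodified on both platforms; only the execution layer differs.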
42. Integrated BP/KW Framework

Diagram: Test Scenario Files feed the Test Engine, which expands each business process through Business Process Templates into keyword steps executed by Object/Action Functions and Utility Functions, producing Test Scenario Results.

Sample test scenario file (business process level):
  Login, test_user_01, password_01
  Create_Order, <ord1>, SKU10045, 100, …
  Create_Order, <ord2>, SKU10045, 100, …
  …

Sample business process template (keyword level):
  VerifyState, login_page, EXISTS
  SetText, user_id, <user_id>
  SetTextSecure, password, <pw>
  ClickButton, submit
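The object/action layer can be sketched as a small keyword interpreter. The three keywords mirror the slide; the stubbed function bodies are assumptions, since the real calls depend entirely on the test tool in use.

```python
# Stubbed object/action functions; a real framework would call the test tool.
def verify_state(obj, expected):
    print(f"verify {obj} is {expected}")

def set_text(obj, value):
    print(f"set {obj} to {value}")

def click_button(obj):
    print(f"click {obj}")

# Keyword-to-function table used by the interpreter.
KEYWORDS = {
    "VerifyState": verify_state,
    "SetText": set_text,
    "SetTextSecure": set_text,  # same action; value would be masked in logs
    "ClickButton": click_button,
}

def execute_step(step: str):
    """Execute one keyword step such as 'SetText, user_id, test01'."""
    keyword, *args = [field.strip() for field in step.split(",")]
    KEYWORDS[keyword](*args)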
44. Translate BPs to Keyword Steps

Diagram: Each instruction in the Test Scenario Input File is translated into keyword steps via its Business Process Template.
45. Test Driver

The driver loop: Get BP Instruction, Load BP Template, Map BP Data to BP Steps, Execute BP Steps.

BP instruction from the scenario file:
  Login, test01, test01, warehouse3

Business process template:
  VerifyState, Login Page, EXISTS
  SetText, User ID, <user_id>
  SetText, Password, <pw>
  SetText, Warehouse, <warehouse>
  ClickButton, Submit

Resolved keyword steps after data mapping:
  VerifyState, Login Page, EXISTS
  SetText, User ID, test01
  SetText, Password, test01
  SetText, Warehouse, warehouse3
  ClickButton, Submit
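The "Map BP Data to BP Steps" stage amounts to substituting the BP instruction's data into the template's <placeholders>. A minimal sketch, assuming the BP's positional arguments map to named placeholders in a fixed order:

```python
def expand_template(template: list[str], params: dict[str, str]) -> list[str]:
    """Replace <name> placeholders in each template step with BP data."""
    steps = []
    for step in template:
        for name, value in params.items():
            step = step.replace(f"<{name}>", value)
        steps.append(step)
    return steps

LOGIN_TEMPLATE = [
    "VerifyState, Login Page, EXISTS",
    "SetText, User ID, <user_id>",
    "SetText, Password, <pw>",
    "SetText, Warehouse, <warehouse>",
    "ClickButton, Submit",
]

# "Login, test01, test01, warehouse3" from the scenario file resolves to the
# concrete keyword steps shown on the slide:
resolved = expand_template(
    LOGIN_TEMPLATE,
    {"user_id": "test01", "pw": "test01", "warehouse": "warehouse3"},
)
```

Each resolved step can then be handed to the keyword interpreter from slide 42.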
50. Sample Summary Result File

Accumulated execution metrics:
  ACCUMULATED TOTALS                 PERCENTAGE
  ------------------------------------------
  STEPS EXECUTED:            14
  TEST CASES EXECUTED:        5
  TEST CASES PASSED:          3      60.0
  TEST CASES FAILED:          1      20.0
  TEST CASES INCOMPLETE:      1      20.0
  SCRIPT/APPLICATION ERRORS:  3

Summarized test case results:
  TEST CASE                      RESULT
  ---------------------------------------------
  TC0001.........................PASS
  TC0002.........................PASS
  TC0003.........................PASS
  TC0004.........................INCOMPLETE
  TC0005.........................FAIL
51. Sample Detail Log File

Page 1:
  BEGIN STEP: BP0001  START TIME: 14:46:48
    S0001 Verify_State.............OK
    S0002 Set_Text.................OK
    S0003 Set_Text.................OK
    S0004 Set_Text.................OK
    S0005 Click_Button.............OK
  BP0001.........................OK
  END STEP: BP0001  END TIME: 14:48:04

Page 2:
  BEGIN STEP: BP0002  START TIME: 14:48:06
    S0001 TSL error handler invoked
    S0001 Err: -10011, function: set_window
    S0001 Verify_State............FAIL
    One or more script errors were detected
  BP0002........................FAIL
  END STEP: BP0002  END TIME: 14:50:21
52. Results Logging/Metrics Functions

Functions called by the test driver and the keyword functions:
  CreateLogFiles: creates the log files
  StepBegin: writes the step start time to the log files
  StepEnd: writes the step status and end time to the log files, input file, metric totals, and other mediums
  StepDetail: writes detailed step execution status to the log files
  LogMetrics: writes accumulated test execution metrics to the log files
  LogUsage: logs component usage
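A bare-bones Python rendition of this layer might look like the following. The function names follow the slide; the file format, timestamping, and metric keys are illustrative choices, not the deck's actual implementation.

```python
import time

# Accumulated totals backing the summary result file.
METRICS = {"steps": 0, "passed": 0, "failed": 0}

def step_begin(log, step_id):
    """Write the step start time to the detail log."""
    log.write(f"BEGIN STEP: {step_id}  START TIME: {time.strftime('%H:%M:%S')}\n")

def step_detail(log, step_id, action, status):
    """Write one keyword step's execution status and count it."""
    METRICS["steps"] += 1
    log.write(f"  {step_id} {action}.....{status}\n")

def step_end(log, step_id, status):
    """Write the step's overall status and end time; update pass/fail totals."""
    METRICS["passed" if status == "OK" else "failed"] += 1
    log.write(f"{step_id}.....{status}\n")
    log.write(f"END STEP: {step_id}  END TIME: {time.strftime('%H:%M:%S')}\n")

def log_metrics(log):
    """Write accumulated execution metrics, mirroring the summary file."""
    total = METRICS["passed"] + METRICS["failed"]
    for label in ("passed", "failed"):
        pct = 100.0 * METRICS[label] / total if total else 0.0
        log.write(f"{label.upper()}: {METRICS[label]}  {pct:.1f}\n")

# Usage sketch:
#   with open("detail.log", "w") as log:
#       step_begin(log, "BP0001")
#       step_detail(log, "S0001", "Verify_State", "OK")
#       step_end(log, "BP0001", "OK")
#       log_metrics(log)
```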
Editor's notes
Structured testing experience: has created test plans, strategies, and scripts; has performed manual testing, results analysis, etc. Exposure to a test tool: has worked with (or at least seen) one of the contemporary tools and is familiar with its high-level capabilities, such as object recognition and programming capabilities.
Also called an architecture. The word "code" is used intentionally. The test engine can be built using one of the commercially available tools, or purchased via one of the
Or, another way of saying "Can't I just hit the record button?" A successful suite must be:
  Reliable: executes to completion with accurate results
  Maintainable: test suite maintenance can be performed within available time windows while maintaining positive ROI
  Scalable: test coverage can be expanded efficiently, with existing test resources, while maintaining positive ROI
This is pretty well accepted: whatever your approach, it had better be modular and data-driven ("data-driven" being a general term for test suite execution being governed, at some level, by an external data source).
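For example, a minimal data-driven sketch in Python; the CSV layout, the login_cases.csv file name, and the stubbed application call are all made up for illustration:

```python
import csv

def attempt_login(user, password):
    """Reusable test routine; the real application call is stubbed out."""
    return (user, password) == ("test01", "test01")

# The external data source governs execution: one row per test case,
# laid out as  user, password, expected  (hypothetical format).
with open("login_cases.csv", newline="") as f:
    for user, password, expected in csv.reader(f, skipinitialspace=True):
        actual = "PASS" if attempt_login(user, password) else "FAIL"
        print(f"{user}: {actual} (expected {expected})")
```

Adding a test case means adding a row of data, not writing more script code.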
In my experience, the leading cause of automation failure is attempting to implement with resources that don't have the skills and background. What's more, you end up taking focus away from their strengths: software testing and domain expertise. Separating these two functions allows everyone involved to focus on their strengths. If test definition and execution are all rolled up into the same code, additional test cases must be added by the automation engineers. Your framework should have a simple test definition interface that testers can use to create their tests and scenarios.
The primary goal of building an automation architecture, rather than defaulting to record and playback, is to separate the process of test definition from test execution. This eliminates the need for a "super tester" who has both subject matter expertise AND automation expertise. If done correctly, the SMEs can define their test scenarios using a simplified instruction set (stored in an Excel spreadsheet or some other external data source). The test engine, built and maintained by a core group of automation experts, executes the scenarios and reports the results. In this example, the test engine consists of reusable scripts and functions that "know how" to perform a common set of test actions such as logging in, navigation, data input, and verification. The definition of the tests, which specifies which actions to perform and what specific data to use, is built in an external data source.
Consistent means that if I have a "Create Order" business process, each time I execute that process as part of my testing I go through pretty much the same steps. Finite means I know how many BPs there are, and what they are, before I start. End-to-end means I'm concerned with the result of the BP, not with incremental, lower-level test cases like screen navigation, user interface state, etc.
Many times we see test scripts that were created directly from requirements; any analysis of the test conditions (i.e., the things to be tested) is often treated as work product and not kept. However, because we have different types of requirements (e.g., functional, security, user interface), we have different types of test cases. These varying types of test cases are often jammed together in manual test scripts because we want to spend as little time as possible performing manual tests. <CLICK> Why go through a particular set of screens multiple times when you can jam all of your testing related to a particular process into just one pass? There's nothing wrong with that; it's just efficient manual testing. However, it's not structured very well for automation. Why? Because, as we'll discuss in more detail later, successful automation requires some type of reusable, data-driven architecture, and this manual test script is not reusable for other testing purposes. <CLICK> However, if we extract the test cases from the test scripts and look at them as a whole, we can start to visualize what our automation approach might be. Why? Because we start to see types of test cases that are similar and repetitive, a good indication of an automation candidate. Let's look at this a little deeper on the next slide.
To highlight the importance of capturing and tracking test cases separately, I want to take a look at a typical manual test script. As I've highlighted, manual test scripts often contain many types of test cases. <CLICK> If we look at the manual steps, we see that we have some business-process-level test cases, user interface, input validation, etc. This doesn't seem too bad from an automation perspective, but we're not looking at the big picture. <CLICK> If we take into consideration that we have dozens, hundreds, or even thousands of manual scripts, we quickly see that we're going to have a mess in terms of understanding how to approach automation. So how do we begin to untangle this mess? As we saw in the last slide, we need to extract, or "distill," our test cases into similar levels and types. <CLICK> Once we have grouped our test cases into similar levels and types, we can start to envision which automation approaches (i.e., architectures) we might want to use. <CLICK> As the slide indicates, you likely won't use the same automation approach for all types of testing; most successful automation functions that we have seen use specific architectures adapted to specific testing needs. We'll discuss automation architectures in a little more detail later in the presentation.
Actual implementation specifics will depend on the capabilities of your test tool.