MB
Full-day Tutorials
5/5/2014 8:30:00 AM
The Challenges of BIG
Testing: Automation,
Virtualization, Outsourcing,
and More
Presented by:
Hans Buwalda
LogiGear
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
Hans Buwalda
LogiGear
Hans Buwalda has been working with information technology since his high school years.
In his thirty-year career, Hans has gained experience as a developer, manager, and
principal consultant for companies and organizations worldwide. He was a pioneer of the
keyword approach to testing and automation, now widely used throughout the industry. His
approaches to testing, like Action Based Testing and Soap Opera Testing, have helped a
variety of customers achieve scalable and maintainable solutions for large and complex
testing challenges. Hans is a frequent speaker at STAR conferences and is lead author
of Integrated Test Design and Automation: Using the TestFrame Method.
Hans Buwalda
LogiGear
Automation,
Virtualization,
Outsourcing, and More
STAREAST 2014, Tutorial MB
Orlando, Monday May 5, 2014
The Challenges of
BIG Testing
Introduction
− industries
− roles in testing
About LogiGear
Software testing company, around since 1994
Testing and test automation expertise, services and
tooling
− consultancy, training
− test development and automation services
− "test integrated" development services
Aims to be a thought leader, in particular for large and
complex test projects
Products:
− TestArchitect™, TestArchitect for Visual Studio™
− integrating test development with test management and automation
− based on modularized keyword-driven testing
www.logigear.com
www.testarchitect.com
About Hans
Dutch guy, living and working in California since 2001, as
CTO of LogiGear
Background in math, computer science, management
Original career in management consultancy, since 1994
focusing on testing and test automation
− keywords, agile testing, big testing, . . .
www.happytester.com
hans @ logigear.com
Topics for today
Automation
Designing and organizing tests
Executing tests
Team, organization and process
Off-shoring, globalization
What is "BIG"
Big efforts in development, automation, execution and/or follow up
Tests can take a long time and/or a lot of capacity to run (lots of tests, versions, configurations, ...)
Scalability, short term and long term
Complexity, functional, technical, scale
Number and diversity of players and stakeholders
Various definitions of "big" possible... and relevant...
− "10 machines" or "10 acres"
− "1000 tests" or "1000 weeks of testing"
Big today means: big for you
− not trivial, you need to think about it
"Windows 8 has undergone more than
1,240,000,000 hours of testing"
Steven Sinofsky, Microsoft, 2012
Existential Questions
Why test?
Why not test?
Why automate tests?
Why not automate tests?
Why test?
People expect us to
Somebody wants us to
Increases certainty and control
− Showing absence of problems
Finds faults, saving time, money, damage
− Showing presence of problems
Why not test?
It costs time and money
You might find problems . . .
We forgot to plan for it
We need the resources for development
It is difficult
It's hard to manage
Why Automate Tests?
It is more fun
Can save time and money
− potentially improving time-to-market
Can capture key application knowledge in a re-usable way
Consolidates a structured way of working
− when established as integral part of system
development process
Can speed up development life cycles
Execution typically is more reliable
− a robot is not subjective
The Power of Robot Perception
FINISHED FILES ARE THE RE
SULT OF YEARS OF SCIENTI
FIC STUDY COMBINED WITH
THE EXPERIENCE OF YEARS...
Why not Automate?
Can rule out the human elements
− promotes "mechanical" testing
− might not find "unexpected" problems
More sensitive to good practices
− pitfalls are plentiful
Creates more software to manage
Needs/uses technical expertise in the test team
Tends to dominate the testing process
− at the cost of good test development
maintenance can crush automation...
Olny srmat poelpe can raed tihs.
I cdnuolt blveiee taht I cluod aulaclty uesdnatnrd
waht I was rdanieg. The phaonmneal pweor of the
hmuan mnid, aoccdrnig to a rscheearch at
Cmabrigde Uinervtisy, it deosn't mttaer in waht
oredr the ltteers in a wrod are, the olny iprmoatnt
tihng is taht the frist and lsat ltteer be in the rghit
pclae. The rset can be a taotl mses and you can
sitll raed it wouthit a porbelm. Tihs is bcuseae the
huamn mnid deos not raed ervey lteter by istlef,
but the wrod as a wlohe.
The Power of Human Perception
The Power of Human Perception
Notice at an event:
"Those who have children and don't
know it, there is a nursery
downstairs."
In a New York restaurant:
"Customers who consider our
waiters uncivil ought to see the
manager."
In a bulletin:
"The eighth-graders will be
presenting Shakespeare's Hamlet in
the basement Friday at 7 PM. You
are all invited to attend this drama."
In the offices of a loan company:
"Ask about our plans for owning your
home."
In the window of a store:
"Why go elsewhere and be cheated
when you can come here?"
About tests in big projects
Regular tests may be activities, complex tests are
products. In fact any test that you want to run
more than once is a product
Every test that is written down with sufficient
detail should be automated
Automation
− No longer an option in most situations
− Also a key prerequisite of most agile approaches
How tests are written and automated can make
or break large scale testing
Keywords (actions) to help scalability
The test developer creates tests using keywords
I call them "actions", with each action having a keyword
and arguments
The automation task focuses on automating the keywords
Each keyword is automated only once
3 actions, each with an action keyword and arguments, read from top to bottom:

              | prod id | name   | stock
new product   | hamm    | hammer | 5
              | prod id | quantity
add to stock  | hamm    | 20
              | prod id | expected
check stock   | hamm    | 25
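To make the division of labor concrete, here is a minimal sketch (in Python, not TestArchitect) of how an automation layer could dispatch such action lines to handler functions; the in-memory "stock" application and the handler names are illustrative assumptions, not part of the slide material.

```python
# Minimal sketch of a keyword (action) dispatcher; the tiny in-memory "stock"
# application below is hypothetical and only serves to make the example runnable.
stock = {}

def new_product(prod_id, name, stock_level):
    stock[prod_id] = {"name": name, "stock": int(stock_level)}

def add_to_stock(prod_id, quantity):
    stock[prod_id]["stock"] += int(quantity)

def check_stock(prod_id, expected):
    actual = stock[prod_id]["stock"]
    assert actual == int(expected), f"stock for {prod_id}: expected {expected}, got {actual}"

# each keyword is automated exactly once; the test lines just reference it
ACTIONS = {"new product": new_product, "add to stock": add_to_stock, "check stock": check_stock}

test_lines = [
    ("new product", "hamm", "hammer", "5"),
    ("add to stock", "hamm", "20"),
    ("check stock", "hamm", "25"),
]
for keyword, *arguments in test_lines:
    ACTIONS[keyword](*arguments)
```

The point of the pattern: the test lines stay readable spreadsheet rows, while each keyword is implemented exactly once.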
Potential benefits of keywords
More productive: more tests, better tests
− more breadth
− more depth
Easier to read and understand
− no program code, tests can be self-documenting
− facilitates involvement of non-technical people, like domain experts
Fast, results can be quickly available
− the design directly drives the automation
More targeted efforts for the automation engineers
− less repetition, test design helps in creating a structured solution (factoring)
− can focus on critical and complex automation challenges more easily
Automation can be made more stable and maintainable
− limited and manageable impact of changes in the system under test
A significant portion of the tests can typically be created early in a
system life cycle
− dealing with execution details later
. . .
Case: International Financial Project
One of the largest projects to date with action words
Over 10000 windows, meant for use in 85 countries
Long development cycle (400 pp, 4 years and counting)
Maintenance very hard
Testing major bottleneck
Much investment in automation techniques was needed to become successful
A lot of attention to the team and work environment also contributed to the success
Team of 35 test developers, 2 automation engineers
Risks of keywords
Keywords are often seen as silver bullet
− often treated as a technical "trick", complications are underestimated
The method needs understanding and experience to be
successful
− pitfalls are many, and can have a negative effect on the outcome
− some of the worst automation projects I've seen were with keywords
Testers might get pushed into half-baked automation role
− risk: you lose a good tester and gain a poor programmer
− focus may shift from good (lean and mean) testing to "getting
automation to work"
− the actual automation challenges are better left to experienced automation professionals
Lack of method and structure can risk manageability
− maintainability may not be as good as hoped
− tests may turn out shallow and redundant
Case: A Financial Project
Large project, with a large automation team
Early adopter of the keywords idea
However, the keywords were applied with little
method or direction
Management did not see a need for help/coaching
Result: many hard to maintain tests
− crushing complexity
− redundant tests and actions
− the method was abandoned
Keywords need a method
By themselves keywords don't provide much scalability
− they can even backfire and make automation more cumbersome
− a method can help tell you which keywords to use when, and how to
organize the process
Today we'll look at Action Based Testing (ABT)
− addresses test management, test development and automation
− large focus on test design as the main driver for automation success
Central deliverables in ABT are the "Test Modules"
− developed in spreadsheets
− each test module contains "test objectives" and "test cases"
− each test module is a separate (mini) project and can involve different stakeholders
Example of an ABT test module
Consists of an (1) initial part, (2) test cases and (3) a final part
Focus is on readability, and a clear scope
Navigation details are avoided, unless they're meant to be tested
TEST MODULE  Car Rental Payments
             | user
start system | jdoe

TEST CASE  TC 01 Rent a car
              | first name | last name | car         | weeks
enter rental  | Mary       | Renter    | Ford Escape | 2
              | last name  | amount
check payment | Renter     | 140.42

FINAL
close application
Example of a "low level" test module
In "low level" tests interaction details are not hidden, since they are
the target of the test
The right level of abstraction depends on the scope of the test, and is
an outcome of your test design process
TEST MODULE  Screen Flow
             | user
start system | john

TEST CASE  TC 01 "New Order" button
      | window | control
click | main   | new order
                    | window
check window exists | new order

FINAL
close application
Re-use actions to make new actions
In the example below we use another sheet, but if you code actions you could do something similar
Often low-level tests are re-used in these action definitions
ACTION DEFINITION  check payment
         | user      | default value
argument | last name | Jones
argument | amount    |
      | window | control      | value
enter | main   | last name    | # last name
      | window | control
click | main   | view balance
      | window | control | expected
check | main   | balance | # amount
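If actions are coded rather than defined in a sheet, the same composition can be sketched roughly as below; the low-level helpers are hypothetical stubs standing in for real UI actions.

```python
# Sketch of composing a higher-level action from re-used low-level actions.
# The three helpers are hypothetical stubs for the real UI automation.
def enter(window, control, value):
    print(f"enter {value!r} into {window}/{control}")      # stub for the real UI action

def click(window, control):
    print(f"click {window}/{control}")                     # stub for the real UI action

def check(window, control, expected):
    print(f"check {window}/{control} == {expected!r}")     # stub for the real UI action

def check_payment(amount, last_name="Jones"):
    """Higher-level action mirroring the ACTION DEFINITION sheet above
    ("Jones" is the default for the last name argument)."""
    enter("main", "last name", last_name)
    click("main", "view balance")
    check("main", "balance", amount)

check_payment("140.42", last_name="Renter")
```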
High Level Test Design - Test Development Plan / Overview of Action Based Testing

[Diagram: Test Module 1 .. Test Module N, each containing test objectives and test cases, built on shared actions, which in turn are automated.]

define the "chapters"  -  high-level test design / test development plan
create the "chapters"  -  test modules, with test objectives and test cases
create the "words"     -  actions (interaction tests and business tests)
make the words work    -  automation of the actions

Example of low-level (interaction test) actions:
               | window | control   | value
enter          | log in | user name | jdoe
enter          | log in | password  | car guy
               | window | control   | property | expected
check property | log in | ok button | enabled  | true

Example of high-level (business test) actions:
             | user   | password
log in       | jdoe   | car guy
             | first  | last   | brand | model
enter rental | Mary   | Renter | Ford  | Escape
             | last   | total
check bill   | Renter | 140.42
Case: Stock Exchange
Transition from floor-based to screen-based trading
Created on the basis of an existing standard package
− result: very few specifications
Consisting of four major, different, systems that need to work in real-
time
Failures and bugs are not an option:
− core of the financial system of the country, 100K revenue per second
− traders not necessarily following rules
In-depth knowledge limited to four people
− nicknamed "The Four Daltons", after characters in a French comic book series about the
wild west
− none of the four Daltons was involved in testing, testing was in a vacuum
Three months to go...
− test development (and scripted automation) had failed
− test department not cooperating well with developers and domain experts
− internal and external auditors had raised the alarm
− and... the Dutch Crown Prince was scheduled to put the system into use!!
The Four Daltons
(French comic book characters)
Case: Stock Exchange
Test set:
− make it comprehensive
− make it in-depth and aggressive
− make it easy to assess and approve
Organization:
− get the right people involved (testing, automation, etc)
− use scarce resources efficiently (in particular the four Daltons)
− work with stake holders to let the process be transparent
Technical:
− use of the keyword method ("action words")
− use "test objectives" so auditors can see quickly what you're testing
− use great test design, don't mix apples and oranges
"Sign off lubrication":
− auditors signed off on the tests, not the test results
− "the test is complete", not "the system works well"
Results:
− deadline was met one day before final date
− the automated tests were the only ones used for acceptance
− no functional errors found afterwards
Question
What is wrong with the
following pictures?
No Y2K Problems in Auckland Airport??
Anything wrong with this instruction?
You should change your battery or switch to outlet
power immediately to keep from losing your work.
Why Better Test Design?
Many tests are quite mechanical now
− blindly following specs or requirements
− which is often OK, but lacks aggression
− no combinations, no unexpected situations
− "methodical" does not have to mean "mechanical"
For a higher “ambition level” you need
− understanding of the system under test, and the business under test
− analytical understanding of what could go wrong
− creativity, and the commitment to use it
Poor test development results in
− cumbersome automation due to lack of focus
− tedious retest cycles, losing the agile advantage
Are you suffering
from lame tests
too?
Effects of Better Test Design
Quality and manageability of test
− many tests are often quite "mechanical" now
− one-to-one related to specifications, user stories or requirements, which is often OK, but lacks aggression
− no combinations, no unexpected situations, lame and boring
− such tests have a hard time finding (interesting) bugs
Better automation !
− when unneeded details are left out of tests, they don't have to be
maintained
− avoiding "over checking": creating checks that are not in the scope of
a test, but may fail after system changes
− limit the impact of system changes on tests, making such impact
more manageable
I have come to believe that successful automation is less a technical challenge than a test design challenge.
Case for organizing tests in BIG projects
Can help keep the volume down
Isolate the complexities
Efficient and re-usable automation
Deal with changing requirements
For example: much of the tested subject matter is not system specific, but business specific
− a mortgage is a mortgage
The Three “Holy Grails” of Test Design
Metaphor to depict three main steps in test design
Using "grail" to illustrate that there is no one perfect
solution, but that it matters to pay attention (to search)
About quality of tests, but most of all about scalability and
maintainability in BIG projects
Right approach for each test module
Proper level of detail in the test specification
Organization of tests into test modules
What's the trick...
What's the trick...
Have or acquire facilities to store and organize
your content
Select your stuff
Decide where to put what
− assign and label the shelves
Put it there
If the organization is not sufficient anymore, add
to it or change it
Properties of a good Breakdown
Test modules are well differentiated and clear in
scope
Reflects the level of tests
Balanced in size and amount
Modules are mutually independent
Fits the priorities and planning of the project
Breakdown Criteria
Straightforward Criteria
− Functionality (customers, finances, management information, UI, ...)
− Architecture of the system under test (client, server, protocol, sub
systems, components, modules, ...)
− Kind of test (navigation flow, negative tests, response time, ...)
Additional Criteria
− Stakeholders (like "Accounting", "Compliance", "HR", ...)
− Complexity of the test (put complex tests in separate modules)
− Technical aspects of execution (special hardware, multi-station, ...)
− Overall project planning (availability of information, timelines, sprints, ...)
− Risks involved (extra test modules for high risk areas)
− Ambition level (smoke test, regression, aggressive, ...)
What is probably not a good design
Navigational and functional tests are mixed
− for example "over checking": a test of a premium calculation also
checks the existence of a window
You have to change all of them for every new release of
the system under test
All test modules have a similar design
Test modules are dependent on each other
You can’t start developing any test modules early in the
life cycle
Symptoms
Tediousness in the test and test automation
process
No sense of control
Complaining people
Unnecessarily high test maintenance
− changes in the system under test impact many tests
− hard to understand which tests need to be modified
Difficulties in running any test
− teams start "debugging" tests
Breakdown examples
CRUD tests (Create, Read, Update, Delete) for all entity types in the app
− like "order", "customer", "well", etc
− for all: various types and situations
Forms, value entry
− does each form work (try to test form by form, not entity by entity)
− mandatory and optional fields, valid and invalid values, etc
− UI elements and their properties and contents
− function keys, tab keys, special keys, etc
Screen and transaction flows
− like cancel an order, menu navigation, use the browser back and forward buttons, etc
− is the data in the database correct after each flow
Business transactions, business rules
− identify situations that the tests need to try
Function tests, do individual functions work
− can I count orders, can I calculate a discount, etc
End-to-end tests
− like enter sale order, then check inventory and accounting
Tests with specific automation needs
− like multi station tests
Tests of non-UI functions
High ambition tests (aggressive tests)
− can I break the system under test
Identifying the modules
Step 1: top down: establish main structure:
analyze what the business is and what the system does
how is it technically organized?
do other “primary criteria” apply?
use the list in the "breakdown examples" slide as a starting point
− see it as "low hanging fruit": items that tend to apply well in many projects
also visit the “secondary criteria”
− not always applicable, but can help to refine the design further
Step 2: bottom up: refine, complete:
study individual functionalities and checks (like from existing test cases) and identify test modules for them if needed
identify and discuss any additional criteria and needed testing situations
review and discuss the resulting list(s) of test modules
create some early drafts of test modules and adjust the list if needed
Repeat steps 1 and 2 if needed.
Test Module | Scope | Prio | Status
Model Life Cycles | Create, store, delete Models (= formula + data), as part of SYS sessions | 1 | pass
Result Life Cycles | Create, store outputs. See them in the process store. | 1 | pass
Formula Life Cycles | Create, edit, manage, remove formulas | 2 | pass
Formula Editor | buttons, operations, undo | 3 | pass
Repository | display of the Modeler repository, presence of user formulas, drag and drop usage. Effect of changing repository folder (environment variable) | 1 | failed
Model Store in Repository | presence, re-run, delete | 1 | pass
Repository UI | example: selecting an item shows its description | 2 | errors
Formula Evaluation | Correctness of results, valid/invalid arguments, boundary analyses, special arguments | 1 | pass
Built-in Formulas | Presence, correctness, valid/invalid arguments, boundaries, special arguments, equivalence classes | 1 | pass
Data Table Association | Associate tables view, change and remove associations, data applicability, for existing and defined formulas | 2 | pass
Quick Access buttons | Life cycle of Quick Access buttons, correctness for the built-in ones | 3 | dev
Formula arguments | presence, argument types, argument entry, parameters, defaults | 2 | pass
Arguments for Built-in Formulas | arguments, argument types and defaults for each pre-defined formula | 2 | failed
Area Of Interest Relations | defaulting, tree visibility, select/deselect, ... | 1 | pass
Model Execution | Model times, start, stop (cancel), restart ("chunks", "timeboxes", ... needs more information) | 3 | pass
Graphics | graphical representation of various data types and data sets | 1 | pass
Graphics Viewing | zoom, select, drag and drop (no 3d now) | 1 | pass
Administration | users, projects, authorization | 1 | pass
Model results in central database | storing, removing, using, correctness, ... (there are some other applications, mostly legacy, that can do the same Models to compare) | 1 | pass
Modeler UI | various controls, panels, tabs | 2 | pass
(The same test modules and scopes as above, now with a status tracked per build:)

Test Module | Prio | Build 1 | Build 2 | Build 3
Model Life Cycles | 1 | pass | pass | pass
Result Life Cycles | 1 | pass | pass | pass
Formula Life Cycles | 2 | pass | pass | pass
Formula Editor | 3 | pass | pass | pass
Repository | 1 | failed | failed | failed
Model Store in Repository | 1 | pass | pass | pass
Repository UI | 2 | errors | errors | errors
Formula Evaluation | 1 | pass | pass | pass
Built-in Formulas | 1 | pass | pass | pass
Data Table Association | 2 | pass | pass | pass
Quick Access buttons | 3 | dev | dev | dev
Formula arguments | 2 | pass | pass | pass
Arguments for Built-in Formulas | 2 | failed | failed | failed
Area Of Interest Relations | 1 | pass | pass | pass
Model Execution | 3 | pass | pass | pass
Graphics | 1 | pass | pass | pass
Graphics Viewing | 1 | pass | pass | pass
Administration | 1 | pass | pass | pass
Model results in central database | 1 | pass | pass | pass
Modeler UI | 2 | pass | pass | pass
Questions for Test Design
Does your organization make
something like a high level test
design?
If yes, how do you document it?
Case Study
Large IT provider
A new version of one of their major websites
Test scope was user acceptance test (functional
acceptance)
− the users were the “business owners”
Development was off-shore
Case Study
Test development was done separate from
automation
− time-line for test development: May – Oct
− time-line for automation (roughly): Jan – Feb
All tests were reviewed and approved by the
business owners
− acceptance was finished by the end of the test
development cycle
Example of a Test Development Plan
Nr | Module | Business Owner | Date to BO
1 | Portal Navigation, Audience | Robyn Peterson | 05/23
2 | Portal Navigation, Search | Ted Jones | 05/27
3 | Membership, registration | Steve Shao | 06/03
4 | Portal Navigation, Category | Ted Jones | 06/08
5 | Portal Navigation, Topic and Expert | Ted Jones | 06/13
6 | Access Control | Mike Soderfeldt | 06/17
7 | Portal Navigation, Task | Ted Jones | 06/22
8 | Contact DSPP | Ted Jones | 06/27
9 | Portal search | Mike Soderfeldt | 07/01
10 | Membership, review and update | Steve Shao | 07/05
11 | Program contact assignment | Alan Lai | 07/11
12 | Company, registration | Steve Shao | 07/14
13 | Catalog, view and query | Robyn Peterson | 07/19
14 | Site map | Ted Jones | 07/25
15 | Membership, affiliation | Steve Shao | 07/28
16 | Learn about DSPP | Ted Jones | 08/01
17 | Products and services | Steve Shao, Robyn Peterson | 08/08
18 | What's new | Ted Jones | 08/11
19 | Company, life cycle | Steve Shao, Alan Lai | 08/17
20 | Specialized programs | Ted Jones, Steve Shao | 08/22
21 | Customer surveys | Ted Jones | 08/29
22 | Software downloads | Mike Soderfeldt | 09/01
23 | Newsletters | Ted Jones | 09/06
24 | Internationalization and localization | Ted Jones | 09/13
25 | Membership, life cycles | Steve Shao | 09/19
26 | Collaboration, forums | Ted Jones | 09/23
27 | Collaboration, blogs | Mike Soderfeldt | 09/28
28 | Collaboration, mailing lists | Ted Jones | 10/03
Review Process with Stakeholders
1. START: the test team sends a draft test module to the stakeholder.
2. The stakeholder reviews it for coverage and correctness.
3. The stakeholder returns notes: additions and corrections.
4. The test team receives and processes the notes.
5. Changes needed? If yes, an updated draft goes back to the stakeholder (step 1).
6. If no, the stakeholder returns a notice of approval.
7. The test team marks the module as "Final". END
Case Study, Results
All tests were developed and reviewed on schedule
− many notes and questions during test development phase
The automation covered 100% of the tests
− all actions were automated, thus automating all test modules
The test development took an estimated 18 person-months
− one on-shore resource, two off-shore resources
The automation took between one and two months
− focused on actions
− most time was spent in handling changes in the interface (layout of pages
etc)
Case: The French Director
Mid-size company
Struggling under high pressure
Testing of their main product, standard financial software
Control and priority were the main issues
Unfamiliar business culture
Main instrument: module breakdown
Test Modules versus Test Cases
The test module is a bigger unit in the test design
− easier to identify
− a chapter rather than a paragraph
− easier to plan and manage, as a product (can be treated as part of product
backlog in scrum projects)
Better flow of execution
− each test case can set up for the next one
− keep test modules independent, test cases can be dependent
Test cases become creative output, rather than stifling input
− avoids having to define all test cases at once early in the process
Clear scope helps to identify cases, actions and checks
− using "test objectives" to further detail scope
− had a significant effect on maintainability
"Thou Shall Not Debug Tests..."
Large and complex test projects can be hard to "get to
run"
If they are, however, start by taking another good look at your test design...
Rule of thumb: don't debug tests. If tests don't run
smoothly, make sure:
− lower level tests have been successfully executed first -> UI flow in the AUT
is stable
− actions and interface definitions have been tested sufficiently with their own
test modules -> automation can be trusted
− are your test modules not too long and complex?
What about existing tests?
Compare to moving house:
− some effort can't be avoided
− be selective, edit your stuff,
• look at the future, not the past
− first decide where to put what, then put it there
− moving is an opportunity, you may not get such a chance again soon
Follow the module approach
− define the modules and their scope as if from scratch
− use the existing test cases in two ways:
• verify completeness
• harvest and re-use them for tests and for actions
− avoid porting over "step by step", in particular avoid over-checking
Defining test runs using "test suites"
Build Acceptance Test
Smoke Test
System Test
Functional Acceptance Test
Integration Test
Grail 2: Approach per Test Module
Plan the test module:
− when to develop: is enough specification available
− when to execute: make sure the functionality at action level is well-
tested and working already
Process:
− do an intake: understand what is needed and devise an approach
− analysis of requirements
− formulate "test objectives"
− create "test cases"
Identify stakeholders and their involvement:
− users, subject matter experts
− developers
− auditors
Choose testing techniques if applicable:
− boundary analysis, decision tables, etc
Eye on the ball, Scope
Always know the scope of the test module
The scope should be unambiguous
The scope determines many things:
− what the test objectives are
− which test cases to expect
− what level of actions to use
− what the checks are about and which events should
generate a warning or error (if a “lower” functionality is
wrong)
What I have seen not work
Lots of detailed steps when those steps are not the focus
of the test
All actions high level, or all actions low level
"Over-Checking": having checks that don't fit the scope of
the test
Over-use of data externalization (data driven) with values
coming from files or data sets without a clear testing
reason
Combinatorial explosions: test all ... for all ... in all ...
Many tests for forms and dialogs, few tests for business processes
An example test for the Modeler
             | model name   | arguments | formula
create model | vegas winner | x         | 10*x
             | argument | value
set argument | x        | some money
          | model name   | expected
run model | vegas winner | a lot more money
Too detailed?
Step | Description | Expected
step 16 | Click the new formula button to start a new calculation. | The current formula is cleared. If it had not been saved, a message will show
step 17 | Enter "vegas winner" in the name field | The title will show "vegas winner"
step 18 | Open the formula editor by clicking the '+' button for the panel "formula editor" | The formula editor will show with an empty formula (only comment lines)
step 19 | Add some lines and enter "10*x;" | The status bar will show "valid formula". There is a "*" marker in the title
step 20 | Click the Save formula button | The formula is saved, the "*" will disappear from the title
step 21 | Open the panel with the arguments by clicking the '+' button | There are two lines, for 'x' and 'y'
step 22 | Click on the value type cell and select "currency" | A button to select a currency appears, with default USD
step 23 | Click on the specify argument values link | The argument specification dialog is shown
Detail out the scope with test objectives
...
TO-3.51 The exit date must be after the entry date
...
test objective | TO-3.51
                 | name            | entry date | exit date
enter employment | Bill Goodfellow | 2002-10-02 | 2002-10-01
check error message | The exit date must be after the entry date.
Examples of Testing Techniques
Equivalence class partitioning
− any age between 18 and 65
Boundary condition analysis
− try 17, 18, 19 and 64, 65, 66
Error guessing
− try Cécile Schäfer to test sorting of
a name list
Exploratory
− "Exploratory testing is
simultaneous learning, test design,
and test execution", James Bach,
www.satisfice.com
Error seeding
− deliberately injecting faults in a test
version of the system, to see if the
tests catch them
− handle with care, don't let the bugs
get into the production version
Decision tables
− define possible situations and the
expected responses of the system
under test
State transition diagrams
− identify "states" of the system, and
have your tests go through each
transition between states at least
once
Jungle Testing
− focus on unexpected situations,
like hacking attacks
Soap Opera Testing
− describe typical situations and
scenarios in the style of episodes
of a soap opera, with fixed
characters
− high density of events,
exaggerated
− make sure the system under test
can still handle these
"Jungle Testing"
Expect the unexpected
− unexpected requests
− unexpected situations (often data oriented)
− deliberate attacks
− how does a generic design respond to a specific unexpected event?
Difference in thinking
− coding bug: implementation is different from what was intended/specified
− jungle bug: system does not respond well to an unexpected situation
To address
− study the matter (common hack attacks, ...)
− make a risk analysis
− make time to discuss about it (analysis, brainstorm)
− involve people who can know
− use "exploratory testing" (see James Bach's work on this)
− use an agile approach for test development
− consider randomized testing, like "monkey" testing
New York. The city of a million stories. Half of them are
true, the other half just haven't happened yet -- Dr Who
Soap Opera Testing
Informal scenario technique to invite subject-matter
experiences into the tests, and efficiently address multiple
objectives
Using a recurring theme, with “episodes”
About “real life”
But condensed
And more extreme
Typically created with a high involvement of end-users
and/or subject-matter experts
It can help create a lot of tests quickly, and in an agile
way
Lisa Crispin: Disorder Depot . . .
There are 20 preorders for George W. Bush action
figures in "Enterprise", the ERP system, awaiting the
receipt of the items in the warehouse.
Finally, the great day arrives, and Jane at the warehouse
receives 100 of the action figures as available inventory
against the purchase order. She updates the item record
in Enterprise to show it is no longer a preorder.
Some time passes, during which the Enterprise
background workflow to release preorders runs. The 20
orders are pick-released and sent down to the warehouse.
Source: Hans Buwalda, Soap Opera Testing (article), Better Software Magazine, February 2005
Lisa Crispin: Disorder Depot . . .
Then Joe loses control of his forklift and accidentally
drives it into the shelf containing the Bush action figures.
All appear to be shredded to bits. Jane, horrified,
removes all 100 items from available inventory with a
miscellaneous issue. Meanwhile, more orders for this very
popular item have come in to Enterprise.
Sorting through the rubble, Jane and Joe find that 14 of
the action figures have survived intact in their boxes.
Jane adds them back into available inventory with a
miscellaneous receipt.
Lisa Crispin: Disorder Depot . . .
This scenario tests
• Preorder process
• PO receipt process
• Miscellaneous receipt and issue
• Backorder process
• Pick-release process
• Preorder release process
• Warehouse cancels
Vary your tests?
Automated tests have a tendency to be rigid, and
predictable
Real-world situations are not necessarily
predictable
Whenever possible, try to vary:
− by selecting other data cases that still fit the goal of the tests
− with randomized behavior of the test
Generation and randomization techniques
Model-based
− use models of the system under test to create tests
− see: Harry Robinson, www.model-based-testing.org, and Hans Buwalda, Better
Software, March 2003
Data driven testing
− apply one test scenario to multiple data elements
− either coming from a file or produced by the automation
"Monkey testing"
− use automation to generate random data or behavior
− "smart monkeys" will follow typical user behavior, most helpful in efficiency
− "dumb monkeys" are more purely random, may find more unexpected issues
− long simulations can expose bugs traditional tests won't find
Extended Random Regression
− have a large database of tests
− randomly select and run them, for a very long time
− this will expose bugs otherwise hidden
− see Cem Kaner et al.: "High Volume Test Automation", STAREAST 2004
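As an illustration of the extended random regression idea (not a quote from the cited paper), a driver loop could look roughly like this; `test_modules`, `module.run()` and `module.name` are assumed interfaces for whatever test repository is in use.

```python
import random
import time

def extended_random_regression(test_modules, hours=8.0):
    """Sketch of extended random regression: keep picking tests at random from a
    large pool and running them for a long, fixed period. Each module is assumed
    to expose run() -> bool and a name attribute (hypothetical interface)."""
    deadline = time.time() + hours * 3600
    failures = []
    while time.time() < deadline:
        module = random.choice(test_modules)
        if not module.run():                 # hypothetical: execute one test module
            failures.append(module.name)     # collect what failed for later analysis
    return failures
```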
Data Driven Testing
Separate test logic from the data
Possible origins for the data:
− earlier steps in the test
− data table
− randomizer, or other formula
− external sources, like a database query
Use "variables" as placeholders in the test case,
instead of hard values
Data driven testing is powerful, but use it in moderation:
− value cannot be known at test time, or changes over time
− having many data variations is meaningful for the test
Variables and expressions with keywords
This test does not need an absolute number for the available cars; it just wants to see whether the stock is updated
As a convention we denote an assignment with ">>"
The "#" indicates an expression
TEST CASE  TC 02 Rent some more cars
               | car         | available
get quantity   | Chevvy Volt | >> volts
         | first name | last name | car
rent car | John       | Doe       | Chevvy Volt
rent car | John       | Doe       | Chevvy Volt
               | car         | expected
check quantity | Chevvy Volt | # volts - 2
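A rough sketch of how the ">>" and "#" conventions could be interpreted by an automation layer (illustrative only, not how TestArchitect implements it):

```python
# ">> name" stores a value in a variable pool; "# expr" evaluates an expression
# against that pool before the action runs.
variables = {}

def store(name, value):
    variables[name.strip()] = value

def resolve(argument):
    if isinstance(argument, str) and argument.startswith("#"):
        # e.g. "# volts - 2" -> evaluate using the stored variables
        return eval(argument[1:], {}, variables)
    return argument

store("volts", 14)                    # get quantity Chevvy Volt  >> volts
expected = resolve("# volts - 2")     # check quantity ...        # volts - 2
assert expected == 12
```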
Data driven testing with keywords
The test lines will be repeated for each row in the data set
The values represented by "car", "first" and "last" come
from the selected row of the data set
TEST CASE  TC 03 Check stocks
             | data set
use data set | /cars
               | car   | available
get quantity   | # car | >> quantity
         | first name | last name | car
rent car | # first    | # last    | # car
               | car   | expected
check quantity | # car | # quantity - 1
repeat for data set

DATA SET  cars
car            | first  | last
Chevvy Volt    | John   | Doe
Ford Escape    | Mary   | Kane
Chrysler 300   | Jane   | Collins
Buick Verano   | Tom    | Anderson
BMW 750        | Henry  | Smyth
Toyota Corolla | Vivian | Major
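In code, the "repeat for data set" idea boils down to looping the same test lines over the rows; the three action stubs below are hypothetical placeholders for the real automation.

```python
# Sketch of data-driven repetition: the same test lines run once per data row.
data_set = [
    {"car": "Chevvy Volt",  "first": "John", "last": "Doe"},
    {"car": "Ford Escape",  "first": "Mary", "last": "Kane"},
    {"car": "Chrysler 300", "first": "Jane", "last": "Collins"},
]

def get_quantity(car):
    return 10                                        # stub: would query the application

def rent_car(first, last, car):
    pass                                             # stub: would drive the rental flow

def check_quantity(car, expected):
    print(f"check quantity of {car} == {expected}")  # stub: would verify in the UI

for row in data_set:                                 # "repeat for data set"
    quantity = get_quantity(row["car"])              # get quantity  # car  >> quantity
    rent_car(row["first"], row["last"], row["car"])
    check_quantity(row["car"], quantity - 1)         # check quantity  # car  # quantity - 1
```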
Combinations
Input values
− determine equivalence classes of values for a variable or field
− for each class pick a value (or randomize)
Options, settings
Configurations
− operating systems, operating system versions and flavors
• Windows service packs, Linux distributions
− browsers, browser versions
− protocol stacks (IPv4, IPv6, USB, ...)
− processors
− DBMS's
Combinations of all of the above
Trying all combinations will spin out of control quickly
Pairwise versus exhaustive testing
Group values of variables in pairs (or tuples with more than 2)
Each pair (tuple) should occur in the test at least once
− maybe not in every run, but at least once before you assume "done"
− consider going through combinations round-robin, for example pick a different combination every time you run a build acceptance test
− in a NASA study:
• 67 percent of failures triggered by a single value
• 93 percent by two-way combinations, and
• 98 percent by three-way combinations
Example, configurations
− operating system: Windows XP,
Apple OS X, Red Hat Enterprise Linux
− browser: Internet Explorer, Firefox, Chrome
− processor: Intel, AMD
− database: MySQL, Sybase, Oracle
− 54 combinations possible; about 10 tests cover every pair
Example of tools:
− ACTS from NIST, PICT from Microsoft, AllPairs from James Bach (Perl)
− for a longer list see: www.pairwise.org
These techniques and tools are supportive only. Often, priorities between platforms and values can drive a more informed selection
Source: PRACTICAL COMBINATORIAL TESTING, D. Richard Kuhn, Raghu N.
Kacker, Yu Lei, NIST Special Publication 800-142, October, 2010
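For illustration (and not as a replacement for the tools listed above), a small greedy all-pairs selector can be sketched in a few lines; the factor names and values simply mirror the configuration example on this slide.

```python
from itertools import combinations, product

def allpairs(parameters):
    """Greedy sketch of all-pairs selection: `parameters` maps each factor to its
    values; returns a small list of configurations covering every value pair."""
    names = sorted(parameters)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in parameters[a] for vb in parameters[b]}
    candidates = [dict(zip(names, values))
                  for values in product(*(parameters[n] for n in names))]

    def pairs_of(config):
        return set(combinations(sorted(config.items()), 2))

    chosen = []
    while uncovered:
        # pick the candidate configuration that covers the most still-uncovered pairs
        best = max(candidates, key=lambda cfg: len(pairs_of(cfg) & uncovered))
        chosen.append(best)
        uncovered -= pairs_of(best)
    return chosen

configs = allpairs({
    "os":      ["Windows XP", "Apple OS X", "Red Hat Enterprise Linux"],
    "browser": ["Internet Explorer", "Firefox", "Chrome"],
    "cpu":     ["Intel", "AMD"],
    "db":      ["MySQL", "Sybase", "Oracle"],
})
print(len(configs))   # roughly ten configurations instead of the exhaustive set
```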
Grail 3: Specification Level, choosing actions
Scope of the test determines the specification level
As high level as appropriate, with as few arguments as possible
− Use default values for non-relevant arguments
Clear names (verb + noun usually works well)
− to standardize action names: standardize both the verbs and the nouns, so "check customer" versus "verify client" (or vice versa)
− tests are not C++ code: avoid "technical habits", like mixed case and (worse) underscores
Manage the Actions
Document the Actions
By-product of the test design
Case: American Bank
Project for a new teller system
Large, state of the art
Many system releases, many adjustments
Need for very high level of automation
Over 1 million test lines, in over 650 test modules
Initially little attention paid to "holy grails"
− UI and functional tests in the same modules
− virtually un-maintainable, came close to killing the project
− test design forced upon the team by a powerful stakeholder
who did not care much for methods...
Emergency re-organization of the test modules
− after system changes the tests would run again within a day
Example of using actions
In this real world example the first "sequence number" for teller transactions for a given day is retrieved, using a search function
• the "#" means an expression, in this case a variable with todays business date
• the ">>" means: assign to a variable for use later on in the test
                 | key
type key         | {F7}
type key         | 3
                 | page tab
locate page tab  | Scan Criteria
                 | text
check breadcrumb | general functions > search
       | window | control        | value
select | search | scan direction | Backward
            | window | control             | value
enter value | search | business date match | # bus date
      | source | control
click | search | go
    | window         | control         | variable
get | search results | sequence number | >> seq num
. . .
Example of using actions (continued)
                    | variable
get sequence number | >> seq num
Low-level, high-level, mid-level actions
Low-level: detailed interaction with the UI (or API)
− generic, do not show any functional or business logic
− examples: "click", "expand tree node", "select menu"
High-level: represent a business function specific to the
scope of the test
− hide the interaction
− examples: "enter customer", "rent car", "check balance"
Mid-level: auxiliary actions that represent common
sequences of low level actions
− usually to wrap a form or dialog
− greatly enhance maintainability
− example: "enter address fields"
[Diagram: high-level action "enter customer" calls the mid-level action "enter address fields", which in turn uses low-level actions such as "enter", "select", "set", ...]
Identifying controls
Identify windows and controls, and assign names to them
These names encapsulate the properties that the tool can
use to identify the windows and controls when executing the
tests
Mapping the interface
An interface mapping (common in test tools) will map windows and
controls to names
When the interface of an application changes, you only have to update
this in one place
The interface mapping is a key step in your automation success; allocate time to design it well
INTERFACE ENTITY  library
interface entity setting | title | {.*Music Library}
                  | ta name      | ta class | label
interface element | title        | text     | Title:
interface element | artist       | text     | Artist:
interface element | file size    | text     | File size (Kb):
                  | ta name      | ta class | position
interface element | playing time | text     | textbox 4
interface element | file type    | text     | textbox 5
interface element | bitrate      | text     | textbox 6
                  | ta name      | ta class | position
interface element | music        | treeview | treeview 1
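Conceptually, an interface map is just a single lookup table from logical names to identifying properties; a minimal sketch, with property values loosely based on the example above:

```python
# Sketch of an interface map as one lookup table: logical names on the left,
# identifying properties on the right (the property values are illustrative).
INTERFACE_MAP = {
    "library": {
        "__window__":   {"title": r".*Music Library"},
        "title":        {"class": "text", "label": "Title:"},
        "artist":       {"class": "text", "label": "Artist:"},
        "file size":    {"class": "text", "label": "File size (Kb):"},
        "playing time": {"class": "textbox", "position": 4},
    },
}

def locator(entity, element):
    """Single place to resolve a logical name; when the UI changes,
    only INTERFACE_MAP needs to be updated."""
    return INTERFACE_MAP[entity][element]

print(locator("library", "artist"))   # {'class': 'text', 'label': 'Artist:'}
```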
Some Tips to Get Stable Automation
Make the system under test automation-friendly
− consider this a key requirement ("must have")
− screen elements have stable identifying properties
− white box access to data, statuses, conditions, etc
Use "active" timing, don't use hard coded waits
Test the automation, in particular complex actions
− before running tests with them
Keep an eye on the test design
Automation-friendly design: hidden properties
Look for properties a human user can't see, but a test tool can
This approach can lead to speedier and more stable automation
− interface mapping is often a bottleneck, and a source of maintenance problems
− with predefined identifying property values, the interface map can be created without "spy" tools
− not sensitive to changes in the system under test
− not sensitive to languages and localizations
Examples:
− "id" attribute for HTML elements
− "name" field for Java controls
− "AccessibleName" or "Automation ID" properties in .NET controls (see below)
© 2014 LogiGear Corporation. All Rights Reserved
Mapping the interface using hidden identifiers
Instead of positions or language dependent labels, an internal property
"automation id" has been used
The interface definition will be less dependent on modifications in the UI
of the application under test
If the information can be agreed upon with the developers, for example in
an agile team, it can be entered (or pasted) manually and early on
INTERFACE ENTITY library
interface entity setting automation id MusicLibraryWindow
ta name ta class automation id
interface element title text TitleTextBox
interface element artist text SongArtistTextBox
interface element file size text SizeTextBox
interface element playing time text TimeTextBox
interface element file type text TypeTextBox
interface element bitrate text BitrateTextBox
ta name ta class automation id
interface element music treeview MusicTreeView
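For web UI's, a hedged Selenium sketch of the same idea: locate controls by their stable, hidden "id" values rather than by labels or positions (the URL and ids below are assumed for illustration):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()                        # assumes a local Chrome + chromedriver
    driver.get("https://example.test/library")         # hypothetical application URL

    # Ids agreed on with the developers: no "spy" tool needed, and the locators
    # survive label changes and localization.
    title_box = driver.find_element(By.ID, "TitleTextBox")
    artist_box = driver.find_element(By.ID, "SongArtistTextBox")

    artist_box.send_keys("Miles Davis")
    driver.quit()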
4/10/2014
45
© 2014 LogiGear Corporation. All Rights Reserved
Active Timing
Passive timing
− wait a set amount of time
− in large scale testing, try to avoid passive timing altogether:
• if wait too short, test will be interrupted
• if wait too long, time is wasted
Active timing
− wait for a measurable event
− usually the wait is up to a (generous) maximum time
− common example: wait for a window or control to appear (usually the test tool will do
this for you)
Even if not obvious, find something to wait for...
Involve developers if needed
− relatively easy in an agile team, but also in traditional projects, give this priority
If using a waiting loop
− make sure to use a "sleep" function in each cycle that frees up the processor (giving the
AUT time to respond)
− wait for an end time, rather than a set number of cycles
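A minimal Python sketch of such an active waiting loop, waiting for an end time and sleeping each cycle; the condition passed in is a hypothetical check such as "the results window exists":

    import time

    def wait_until(condition, max_wait=30.0, interval=0.1):
        """Return True as soon as condition() holds, False if max_wait expires."""
        end_time = time.monotonic() + max_wait      # wait for an end time, not a cycle count
        while time.monotonic() < end_time:
            if condition():
                return True
            time.sleep(interval)                    # free up the processor for the AUT
        return False

    # usage: wait_until(lambda: results_window_exists(), max_wait=60)   # hypothetical check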
© 2014 LogiGear Corporation. All Rights Reserved
Things to wait for...
Wait for the last control or element to load
− developers can help identify which one that is
Non-UI criteria
− API function
− existence of a file
Criteria added in development specifically for this purpose, like:
− "disabling" big slow controls (like lists or trees) until they're done loading
− API functions or UI window or control properties
Use a "delta" approach:
− every wait cycle, test if there was a change; if no change, assume that loading has finished
− examples of changes:
• the controls on a window
• count of items in a list
• size of a file (like a log file)
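A possible Python sketch of the delta approach, polling a measurable quantity (for example an item count or a file size) until it stops changing between cycles; the measure function is a hypothetical callable:

    import time

    def wait_until_stable(measure, max_wait=30.0, interval=0.5):
        """Wait until measure() returns the same value in two consecutive cycles."""
        end_time = time.monotonic() + max_wait
        previous = measure()
        while time.monotonic() < end_time:
            time.sleep(interval)
            current = measure()
            if current == previous:        # no change this cycle: assume loading is done
                return True
            previous = current
        return False

    # example: wait until a log file stops growing (path is illustrative)
    # import os
    # wait_until_stable(lambda: os.path.getsize("run.log"))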
4/10/2014
46
© 2014 LogiGear Corporation. All Rights Reserved
Alternatives to UI automation ("non-GUI")
A GUI (Graphical User Interface) is only one example of an interface
for interaction with a system under test
Examples
− HTTP and XML based interfaces, like REST
− application programming interfaces (API’s)
− embedded software
− protocols
− files, batches
− databases
− command line interfaces (CLI’s)
− multi-media
− mobile devices
In many cases non-GUI automation is used since there simply is no
GUI, but it can also often speed things up:
− tends to be more straightforward technically, with little effort needed to build up or maintain
− once it works, it tends to work much faster and more stably than GUI automation
In BIG testing projects, routinely:
− identify which non-GUI alternatives are available
− as part of test planning: identify which tests qualify for non-GUI automation
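As an illustration, a hedged Python sketch of a non-GUI check against a REST interface using the requests library; the endpoint and expected values are hypothetical:

    import requests

    response = requests.get("https://example.test/api/customers/42", timeout=10)

    assert response.status_code == 200
    customer = response.json()
    assert customer["name"] == "John Doe"     # expected value from the test design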
© 2014 LogiGear Corporation. All Rights Reserved
Tools that can help manage BIG projects
Application Lifecycle Management (ALM)
− abundant now, mainly on the wings of agile
− very good for control, team cooperation, and traceability
− often relate to IDE's (like Microsoft TFS and Visual Studio)
− examples: Rally, Jira, TFS
Test Managers
− as separate tools, they are on their way out
− morphing into or replaced by ALM options
− examples: HP Quality Center, Microsoft Test Manager
Test development and automation tools
− develop and/or automate tests
• these are not the same, automation tools are not always so good for test development
− examples are HP Quick Test Pro, Borland Silk, Selenium, FitNesse, Microsoft Coded UI, and LogiGear's
TestArchitect and TestArchitect for Visual Studio (our own products)
Build tools
− succeed the traditional "make" tools
− in particular "continuous build" tools combine "make" functionality with source control systems to rebuild
components that have changed, either continuously or at set times, like nightly
− can very well also run related tests (unit and functional), and act on the results (stop build, report, etc)
− examples: Hudson, Jenkins, TFS
Bug trackers
− not only register issues, but also facilitate their follow up, with workflow features
− often also part of other tools, and tend to get absorbed now by the ALMs
− Examples: BugZilla, Mantis, Trac
4/10/2014
47
© 2014 LogiGear Corporation. All Rights Reserved
Tooling and Traceability
Diagram: the testing chain runs from a Reference item (ALM item, requirement, code module, ...) via Test Objective, Test Module, and Test Case to Execution and Result, with Bugs/issues raised from the results; "Testing" flows forward along this chain and "Trace back" runs in reverse.
Supporting tools along the chain: ALM, IDE, Project Manager, Requirements Manager; Test Development Tool; Automation Tool, Execution Manager, Continuous Build Tool, Lab Manager; Issue Tracker, ALM.
© 2014 LogiGear Corporation. All Rights Reserved
Test Execution
Have an explicit approach for when and how to execute
which tests
Having a good high level test design will help to organize
this
Execution can be selective or integral
− unit tests are typically executed selectively, possibly automatically
based on code changes in a system like SVN or TFS
− for functional tests, decisions are needed:
• selective execution will be quicker and more efficient
• integral execution may catch more issues ("bonus bugs")
• generally extensive functional test execution will be related to releases, rather
than code check ins
− the ability to run "big testing" efficiently may determine how much can
be done
4/10/2014
48
© 2014 LogiGear Corporation. All Rights Reserved
Environments, configurations
Many factors can influence details of automation
− language, localization
− hardware
− version of the system under test
− system components, like OS or browser
Test design can reflect these
− certain test modules are more general
− others are specific, for example for a language
But for tests that do not care about the differences, the
automation just needs to "deal" with them
− shield them from the tests
© 2014 LogiGear Corporation. All Rights Reserved
Capture variations of the system under test in the actions and interface
definitions, rather than in the tests (unless relevant there).
Can be a feature in a test playback tool, or something you do with a global
variable or setting.
Diagram: a "Master Switch" selects one of several "Variations"; the variations live in the Actions and Interface Definitions, not in the tests, . . .
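One possible way to realize such a master switch, sketched in Python with a global setting read from the environment; the variation names and the action are illustrative:

    import os

    VARIATION = os.environ.get("TEST_VARIATION", "web")   # e.g. set when starting a run

    def open_customer(customer_id):
        """Same action keyword for the tests; implementation differs per variation."""
        if VARIATION == "web":
            print(f"navigate to /customers/{customer_id}")
        elif VARIATION == "desktop":
            print(f"open the customer dialog for {customer_id}")
        else:
            raise ValueError(f"unknown variation: {VARIATION}")

    open_customer(42)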
4/10/2014
49
© 2014 LogiGear Corporation. All Rights Reserved
Possible set up of variations
Specify, for example in a dialog when you start an execution:
− linked variation
− keyworded variation
© 2014 LogiGear Corporation. All Rights Reserved
Test Environments
Physical: hardware, infrastructure, location, . . .
Software: programs, data models, protocols, . . .
Data: initial data, parameters / tables, . . .
Considerations: costs money, can be scarce, configurations, availability, manageability
4/10/2014
50
© 2014 LogiGear Corporation. All Rights Reserved
Dealing with data
Constructed data is easier to manage
− can use automation to generate it, and to enter it in the environment
− result of test analysis and design, reflecting "interesting" situations
− however, fewer "surprises": real-life situations that were not foreseen
Real-world data is challenging to organize
− make it a project, or task, in itself
− make absolutely sure to deal with privacy, security and legal aspects
appropriately
• study this, ask advice
• apply appropriate "scrubbing"
Consider using automation to select data for a test
− set criteria ("need a male older than 50, married, living in Denver"),
query for matching cases, and select one randomly (if possible a
different one each run)
− this approach will introduce variation and unexpectedness, making
automated tests stronger and more interesting
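A sketch of that idea in Python, querying a (scrubbed) data copy for matching cases and picking one at random; the database, table, and column names are assumptions for illustration:

    import random
    import sqlite3

    conn = sqlite3.connect("testdata.db")        # assumed copy of (scrubbed) production data
    rows = conn.execute(
        "SELECT customer_id FROM customers "
        "WHERE gender = ? AND age > ? AND marital_status = ? AND city = ?",
        ("M", 50, "married", "Denver"),
    ).fetchall()
    conn.close()

    if rows:
        customer_id = random.choice(rows)[0]     # a different matching case each run, if several exist
        print("testing with customer", customer_id)
    else:
        raise RuntimeError("no test data matches the criteria")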
© 2014 LogiGear Corporation. All Rights Reserved
Unattended testing...
When a test cannot pass, it can be:
− a difference between expected and recorded values or behavior, as a result
of a check designed by the tester: this is a fail
− the automation encounters a problem, like a window or control doesn't show,
that is not part of a check: this is an error
An error can disrupt the test flow, and you may want to catch and
handle it properly:
− skip smaller or larger parts of the ongoing test
− bring the system back in a known state (typically: close any open windows,
go to the main screen)
− make sure the report clearly indicates these kinds of problems, to avoid false
positives
− example: an "on error" action that executes a predefined action that will do
recovery
However, it is better to avoid these situations
− if unattended testing needs a lot of effort, that should raise questions about the test
design or the quality of the automation
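A minimal Python sketch of such an "on error" handler; the recovery helpers here are print stubs standing in for real "close windows" and "go to main screen" actions:

    def close_all_windows():        # recovery stubs standing in for real UI-driver calls
        print("closing any open windows")

    def go_to_main_screen():
        print("navigating back to the main screen")

    class Report:
        def error(self, message):                 # errors are reported separately from fails
            print("ERROR:", message)

    def run_test_module(test_module, report):
        try:
            test_module()                         # run the keyword test / script
        except Exception as exc:                  # automation error, not a designed check
            report.error(f"{test_module.__name__}: {exc}")
            close_all_windows()                   # bring the system back to a known state
            go_to_main_screen()

    def sample_module():
        raise RuntimeError("window 'search' did not appear")

    run_test_module(sample_module, Report())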
4/10/2014
51
© 2014 LogiGear Corporation. All Rights Reserved
"Known bug" problem
Not uncommon in large scale systems
− typically related to a version of the system under test
A known bug may:
− generate fails you want to ignore, also in statistics
− throw off automation
One possible approach is a "known bug" marker, so you can
filter out "new" issues
If many known-bug situations occur, take another look at
your high level test design
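One possible shape of such a marker, sketched in Python: fails linked to a registered bug id are classified separately so new issues stand out; the ids and messages are illustrative:

    KNOWN_BUGS = {"BUG-1234": "balance rounding is off by one cent in v6.1"}

    def classify_fail(check_name, bug_id=None):
        """Separate fails caused by registered bugs from genuinely new fails."""
        if bug_id in KNOWN_BUGS:
            return ("known bug", bug_id)          # kept out of the pass/fail bottom line
        return ("new fail", check_name)

    print(classify_fail("check balance", bug_id="BUG-1234"))   # ('known bug', 'BUG-1234')
    print(classify_fail("check breadcrumb"))                   # ('new fail', 'check breadcrumb')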
© 2014 LogiGear Corporation. All Rights Reserved
Virtualization
Virtual machines rather than physical machines
− allow "guest" systems to operate on a "host" system
− host can be Windows, Linux, etc, but also a specialized "hypervisor"
− the hypervisor can be "hosted" or "bare metal"
Main providers:
− VMWare: ESX and ESXi
− Microsoft: Hyper-V
− Oracle/Sun: Virtual Box
− Citrix: Xen (open source)
Hardware support is becoming common now
− processor, chipset, i/o
− Like Intel's i7/Xeon
For most testing purposes you need virtual clients, not virtual servers
− most offerings in the market currently target virtual servers, particularly data centers
Virtual clients will become more mainstream with the coming of VM's as part
of regular operating systems
− Windows 8: Hyper-V
− Linux: KVM
4/10/2014
52
© 2014 LogiGear Corporation. All Rights Reserved
Virtualization, a tester's dream...
In particular for functional testing
Much easier to define and create needed configurations
− you basically just need storage
− managing this is your next challenge
One stored configuration can be re-used over and over again
The VM can always start "fresh", in particular with
− fresh base data (either server or client)
− specified state, for example to repeat a particular problematic automation
situation
Can take "snap shots" of situations, for analysis of problems
Can use automation itself to select and start/stop suitable VM's
− for example using actions for this
− or letting an overnight or continuous build take care of this
© 2014 LogiGear Corporation. All Rights Reserved
Virtualization, bad dream?
Performance, response times, capacities
Virtual machine latency can add timing problems
− see next slide
− can be derailing in big test runs
Management of images
− images can be large, and difficult to store and move around
• there can be many, with numbers growing combinatorially
• configuration in the VM can have an impact, like fixed/growing virtual disks
− distinguish between managed configurations and sandboxes
− define ownership, organize it
− IT may be the one giving out (running) VM's, restricting your flexibility
Managing running tests in virtual machines can take additional efforts
on top of managing the VM's themselves
− with the luxury of having VM's the number of executing machines can
increase rapidly
− one approach: let longer running tests report their progress to a central
monitoring service (various tools have features for this)
4/10/2014
53
© 2014 LogiGear Corporation. All Rights Reserved
Virtualization: "time is relative"
Consider this waiting time loop, typical for a test script:
− endTime = currentTime + maxWait
− while currentTime < endTime, wait in 100 millisecond intervals
When the physical machine is overloaded, VM's can get slow or have
drop-outs, and endTime may pass because of VM latency rather than AUT latency
− GetLocalTime will suffer from the latency
− GetTickCount is probably better, but known for being unreliable on VM's
Therefore tests that run smoothly on physical machines may not
consistently do so on VM's. The timing problems are not easy to
predict
Possible approaches:
− in general: be generous with maximum wait times if you can
− don't put too many virtual machines on a physical box
− consider a compensation algorithm, for example using both tick count and clock time
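One possible compensation, sketched in Python (using time.monotonic as a stand-in for a tick count and time.time for the clock): only time out when both clocks agree the maximum wait has passed:

    import time

    def wait_until_vm_safe(condition, max_wait=60.0, interval=0.2):
        """Time out only when both the wall clock and the monotonic clock agree."""
        start_wall = time.time()
        start_mono = time.monotonic()
        while True:
            if condition():
                return True
            wall_elapsed = time.time() - start_wall
            mono_elapsed = time.monotonic() - start_mono
            if min(wall_elapsed, mono_elapsed) >= max_wait:   # both clocks past the limit
                return False
            time.sleep(interval)

    # usage: wait_until_vm_safe(lambda: results_window_exists())   # hypothetical check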
© 2014 LogiGear Corporation. All Rights Reserved
Virtual machines, capacity
Key to pricing is the number of VM's that can run in parallel
on a physical machine
An automated test execution will typically keep a VM
busier than a human user would
Factors in determining VM/PM ratio:
− memory, for guest OS, AUT, test tooling
− storage devices (physical devices, not disk images)
− processors, processor cores
− specific hardware support (becoming more common)
• processor, chipset, I/O
We started regression with 140 VMs.
Very slow performance of
Citrix VM clients.
4/10/2014
54
© 2014 LogiGear Corporation. All Rights Reserved
Building up virtualization
Pay attention to pricing:
− beefed-up hardware can increase the VM's/box ratio, but at a price
− software can be expensive depending on features that you may not
need
In a large organization, virtual machines are probably available
− make sure to allocate them in time (which can be long before you get there
with your sprints)
− keep in mind the capacity requirements
Logical and physical management
− which images: the wealth of possible images can quickly make it
hard to see the forest for the trees
− physical management of infrastructure is beyond this tutorial
Minimum requirement: snapshots/images
− freeware versions don't always carry this feature
− allow you to set up: OS, environment, AUT, tooling, but also: data, states
© 2014 LogiGear Corporation. All Rights Reserved
Infrastructure
For large scale test execution this needs attention
− physical infrastructure, but also how to use it
Also consider managing infrastructure and test
execution as a separate task
− in or out of the team
− avoid slowing down development (of system, test and/or
automation)
4/10/2014
55
© 2014 LogiGear Corporation. All Rights Reserved
Remote execution, servers
Allowing execution separately from the machines the testers and
automation engineers are working on increases scalability
Large scale test execution, in particular with VM's, likes to have:
− lots of processing power, lots of cores
− lots of memory
Test execution tends to care less about:
− storage
− networking
Test execution facilities tend to become a bottleneck very quickly in big
testing projects
− the teams can use whatever they can get
First step up: give team members a second machine
Second step up: use servers, users coordinate their use of them
Third step up: major infrastructures with organized allocation
© 2014 LogiGear Corporation. All Rights Reserved
Tower Servers
Smaller shops (smaller companies, departments)
Affordable, simple, first step up from client execution
Not very scalable when the projects get larger
4/10/2014
56
© 2014 LogiGear Corporation. All Rights Reserved
Rack Servers
Scale well
Pricing not unlike tower servers
Tend to need more mature IT expertise
© 2014 LogiGear Corporation. All Rights Reserved
Server Blades
Big league infrastructure, high density, very scalable
Tend to be pricey; use when space and energy matter
Usually out of sight for you and your team
4/10/2014
57
© 2014 LogiGear Corporation. All Rights Reserved
Cloud
Cloud can be target of testing
− normal tests, plus cloud specific tests
• functional, load, response times
− from multiple locations
− moving production through data centers
Cloud can be host of test execution
− considerations can be economical or organizational
− providers offer imaging facilities, similar to virtual machines
− make sure machines are rented and returned efficiently
Public cloud providers like EC2 offer API's, so your automation can
automatically allocate and release them
− be careful, software bugs can have cost consequences
− for example, consider having a second automation process to double-check cloud
machines have been released after a set time
Note: public cloud is not taking off as fast as expected; cloud services
and private clouds are taking off much faster
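As an illustration, a hedged Python sketch using boto3 (Amazon's Python SDK) to rent a machine, run tests, and always return it; the AMI id and instance type are placeholders and configured credentials are assumed:

    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")   # region is an example

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder image with AUT and test tooling
        InstanceType="t2.micro",              # placeholder size
        MinCount=1,
        MaxCount=1,
    )
    machine = instances[0]
    machine.wait_until_running()

    try:
        pass   # ... dispatch tests to the machine here ...
    finally:
        machine.terminate()                   # always return the machine: forgetting this costs money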
© 2014 LogiGear Corporation. All Rights Reserved
Cloud Providers
Source: Jack of All Clouds, January 2011
http://www.jackofallclouds.com/2011/01/state-of-the-cloud-january-201/
4/10/2014
58
© 2014 LogiGear Corporation. All Rights Reserved
Cloud growth
Growth of public clouds not as big as expected
Cost benefits not necessarily convincing
− low startup cost, but long ongoing cost
See also: news.cnet.com/8301-13556_3-20063361-61.html
source: IDC forecast, 2010
© 2014 LogiGear Corporation. All Rights Reserved
Cloud, example pricing, hourly rates
Source: Amazon EC2 (my interpretation, actual prices may vary)
Instance type ($/hour)       Linux    Windows   Capacity
Small                        0.085    0.12      1.7 GB, 1 core (32 bits)
Large                        0.34     0.48      7.5 GB, 4 cores
Extra Large                  0.68     0.96      15 GB, 8 cores
High memory
  Extra Large                0.50     0.62      17.1 GB, 6.5 core
  Double Extra Large         1.00     1.24      34.2 GB, 13 cores
  Quadruple Extra Large      2.00     2.48      68.4 GB, 26 cores
High CPU
  Medium                     0.17     0.29      1.7 GB, 5 core (32 bits)
  Extra Large                0.68     1.16      7 GB, 20 cores
4/10/2014
59
© 2014 LogiGear Corporation. All Rights Reserved
Cloud, example economy
Not counting possible use of VM's within the buy option
Also not counting: additional cost-of-ownership elements for either owning or using the
cloud (like IT management, contract and usage management)
Impressions:
− cloud could fit well for bursty testing needs, which is often the case
− for full continuous, or very frequent, testing: consider buying
− hybrid models may fit many big-testing situations: own a base capacity, rent
more during peak use periods
                        small    large    extra
Windows ($/hour)        $0.12    $0.48    $0.96
buy (estimate)          $300     $650     $900
hours to break even     2,500    1,354    938
months (24 / 7)         3.4      1.8      1.3
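The break-even figures follow from dividing the purchase estimate by the hourly rate; a small Python sketch of the arithmetic (using roughly 730 hours per month, so rounding may differ slightly from the table):

    hourly_rate = {"small": 0.12, "large": 0.48, "extra": 0.96}   # Windows, $/hour (from the table)
    buy_price   = {"small": 300,  "large": 650,  "extra": 900}    # rough purchase estimates, $

    hours_per_month = 24 * 365 / 12    # ~730 hours when running 24/7

    for size in hourly_rate:
        hours = buy_price[size] / hourly_rate[size]
        print(f"{size}: break even after {hours:,.0f} hours (~{hours / hours_per_month:.1f} months 24/7)")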
© 2014 LogiGear Corporation. All Rights Reserved
Data centers can go down
However, disruption could have been minimized by using multiple data centers
4/10/2014
60
© 2014 LogiGear Corporation. All Rights Reserved
Data centers can go down
This time, it did involve multiple data centers . . .
© 2014 LogiGear Corporation. All Rights Reserved
Data centers can go down
Service providers can occasionally go down too
4/10/2014
61
© 2014 LogiGear Corporation. All Rights Reserved
Cloud, usage for special testing needs
Multi-region testing
− Amazon for example has several regions
• US East, Northern Virginia
• US West, Oregon, Northern California
• EU, Ireland
• Asia Pacific, Singapore, Tokyo
• South America, Sao Paulo
− be careful: data transfers between regions cost money
($0.01/GB)
Load generation
− example: "JMeter In The Cloud"
• based on the JMeter load test tool
• uses Amazon AMI's for the slave machines
• allows to distribute the AMI's in the different regions of Amazon
• see more here:
aws.amazon.com/amis/jmeter-in-the-cloud-a-cloud-based-load-testing-environment
© 2014 LogiGear Corporation. All Rights Reserved
Questions for Infrastructure
What kind of infrastructure does
your organization use for
testing?
What is the role of
virtualization, now or in the
future?
Are you using a private or a
public cloud for testing?
4/10/2014
62
© 2014 LogiGear Corporation. All Rights Reserved
Example of a cloud system under test
source: Windows Azure reference platform
© 2014 LogiGear Corporation. All Rights Reserved
Approaches
Automation does not have to be black box
− for very big systems, a separate black box automation effort may not
be efficient
− and building and keeping lab situations might be cumbersome
− some simple hooks can greatly help already
− remember... this is about automation, not test design.
Make testability part of requirements and architecture
− a key question should not just be "how do I design this", but "how do I
test this" (test design, automation)
− some cloud/web systems are changed frequently, and tested "live"
• "Testing in Production (TiP)"
− allow redirection of some or all traffic through another version of a
component or layer
Example: reverse proxies enabling A/B testing
see also: Ken Johnston's chapter in the book by Dorothy Graham and Mark Fewster,
and his keynote at StarWest 2012
4/10/2014
63
© 2014 LogiGear Corporation. All Rights Reserved
A/B testing with a reverse proxy
A/B testing means part of the traffic is routed through a different
server or component (to see if it works, and/or how users react)
A similar strategy could be applied at any component level
B could be a real-life user, or a keyword-driven test machine
Watch your test design; it is easy to drown in technical solutions only
Diagram: users' requests pass through a reverse proxy; most traffic is routed to the current servers (A), a portion to the new version (B).
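A minimal Python sketch of the routing decision such a proxy makes: a fixed share of users is sent to the new "B" version, and hashing the user id keeps each user on the same side across requests (the 10% share is illustrative):

    import hashlib

    B_SHARE = 0.10     # route 10% of the traffic through the new version

    def backend_for(user_id: str) -> str:
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0xFFFFFFFF        # stable value in [0, 1] per user
        return "B" if bucket < B_SHARE else "A"

    for user in ["alice", "bob", "carol"]:
        print(user, "->", backend_for(user))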
© 2014 LogiGear Corporation. All Rights Reserved
Organization
Much of the success is gained or lost in how you organize the
process
− part of the teams
− who does test design
− who does automation
− what to outsource, what to keep in-house
Write a plan of approach for the test development and automation
− scope, assumptions, risks, planning
− methods, best practices
− tools, technologies, architecture
− stakeholders, including roles and processes for input and approvals
− team
− . . .
Assemble the right resources
− testers, lead testers
− automation engineer(s)
− managers, ambassadors, ...
Test design is a skill . . .
Automation is a skill . . .
Management is a skill . . .
. . . and those skills are
different . . .
4/10/2014
64
© 2014 LogiGear Corporation. All Rights Reserved
Industrial Organization
Large scale testing can move from a "design" to a
"production" focus
− mostly applies to test execution, but also seen for test development
− this is not black and white, both paradigms can occur in the same projects
− this is often easier to outsource than development
A production organization is different from a development
organization
− this is not unique for software
− different professional culture
− emphasis more on delivery and scale, "thinking big"
− discipline rather than creativity, "get stuff done"
− activities are like planning, control, logistics, information
© 2014 LogiGear Corporation. All Rights Reserved
Task in "production" (test execution)
Keeping the tests running
Allocating resources
Respond to hiccups
Analyze and address automation issues
Address fails or other testing outcomes
− including dealing with "known bugs"
− part of a bigger team
4/10/2014
65
© 2014 LogiGear Corporation. All Rights Reserved
Stakeholders
Internal: Test Development, Test Automation, Technology/Infrastructure, Production, Marketing/Sales, System Development, End User Departments, Quality Assurance, Management, After Sales/Help Desk
External: Customers, Vendors, Government Agencies, Publicity
© 2014 LogiGear Corporation. All Rights Reserved
Team roles, examples
Test development
Automation
Planning and managing the test runs
Managing environments
Managing infrastructure
Dealing with stakeholders
Analysis of results, and follow up
Reporting
4/10/2014
66
© 2014 LogiGear Corporation. All Rights Reserved
Test Development and Automation in sprints
Diagram of test development within the agile life cycle: the Product Backlog (product owner) feeds Test Module Definition (optional), which the team takes through Test Module Development, Interface Definition, and Action Automation into Test Execution, delivering the Sprint Products; tests and automation are re-used across sprints.
Inputs along the way: user stories, documentation, domain understanding, acceptance criteria, PO questions, situations, relations.
Test modules produced: Main Level Test Modules, Interaction Test Modules, Cross over Test Modules.
© 2014 LogiGear Corporation. All Rights Reserved
Test automation in sprints
Try to keep the main test modules at a similar level as the user stories
and acceptance criteria
Aim for "sprint + zero", meaning: try to get test development and
automation "done" in the same sprint, not the next one
− pushing it to the next one means work piles up, part of the team is not working on the same sprint, work is
done twice (manually and automated), ...
Make sure you can do the interface mapping by hand (using
developer provided identifications)
− this can be done earlier, before the UI is finalized, and
− recording of actions (not tests) will go better
Also plan for additional test modules:
− low-level testing of the interaction with the system under test (like UI's)
− crossing over to other parts of the system under test
There should be agreement on the method(s) for testing and automation
The team should include the skills and experience needed for automated
testing and the approach(es) taken for it
4/10/2014
67
© 2014 LogiGear Corporation. All Rights Reserved
Fitting in sprints
Agree on the approach:
− questions like does "done" include tests developed and automated?
− do we see testing and automation as distinguishable tasks and
skillsets
− is testability a requirement for the software
Create good starting conditions for a sprint:
− automation technology available (like hooks, calling functions, etc)
− how to deal with data and environments
− understanding of subject matter, testing, automation, etc
Make testing and automation part of the evaluations
Address tests and automation also in hardening sprints
Just like for development, use discussions with the team
and product owners to deepen understanding:
− also to help identify negative, alternate and unexpected situations
© 2014 LogiGear Corporation. All Rights Reserved
Testing as a profession
"Do thorough acceptance testing, but not only by the
customer"
− source: "Agile Software Testing in a Large-Scale Project", Israeli Air
Force
Focus on tests, not development:
− what the consequences of situations and events can be
− relieve developers
Knowledge and experience with testing techniques and
principles
The challenge for the tester in the new era is to become a
more credible professional tester,
− not a pseudo programmer
− part of the team
Forcing a nontechnical tester to become a programmer
may lose a good tester and gain a poor programmer
4/10/2014
68
© 2014 LogiGear Corporation. All Rights Reserved
Automation is a profession too
Overlaps with regular system development, but is not the same
Less concerned with complex code structures or
algorithms
More concerned with navigating through other software
efficiently, dealing with control classes, obtaining
information, timing, etc
− if you would compare developers to "creators", automation engineers might
be likened to "adventurers"...
The automation engineering role can also be a consultant:
− for test developers: help express tests efficiently
− for system developers: how to make a system more automation friendly
− important player in innovation in the automated testing
© 2014 LogiGear Corporation. All Rights Reserved
Questions for Organization
How is your testing currently
organized (who is doing what)?
− test design
− test development
− automation
− execution
− assessment of release readiness
Do you use agile? If yes, is
there a role for a test
professional? And for an
automation professional?
4/10/2014
69
© 2014 LogiGear Corporation. All Rights Reserved
Reporting
Aim at needs:
− avoid lengthy automated reports, have bottom line numbers
− reports for stake holders
− reporting for the team
Reporting for a big testing project is about:
− test and automation progress
− production (running the tests)
− results (aimed at system under test)
Teams need (relevant) details
− what happened, reproducibility, ...
− either the tests, the automation, or the system under test
− overall situations, with an ability to "drill down" to problem areas
Management needs:
− status, expectations, issues (be realistic! bad news matters; you get punished for not telling)
− bottom lines, plan versus reality confrontation
− dates, efforts, used resources, costs, run times, ...
− never allow planned numbers or dates to be "updated"
Also for reporting, test organization is a key driver
© 2014 LogiGear Corporation. All Rights Reserved
War rooms
Helpful if response times are critical and there is a need for cooperation
towards the same goal
− similar grounds as for agile scrum rooms
Set up at critical times, like before important deadlines, or during
critical releases
Can temporarily bring together multiple parties that normally are not
co-workers
− like competitor vendors
Pay attention to physical conditions
− machines, monitors, white boards, meeting places, headsets, ...
− food, drinks, ...
The test execution cycle should match the needs of the war room
approach
− fast turnarounds
− effortless
− completeness
− selective or integral
See also: "Your Game is Live, Now What?", Jane Fraser, Electronic Arts
4/10/2014
70
© 2014 LogiGear Corporation. All Rights Reserved
Globalization....
© 2014 LogiGear Corporation. All Rights Reserved
Main Challenges
Other countries
Distances
Time differences
4/10/2014
71
© 2014 LogiGear Corporation. All Rights Reserved
Globalization
Three Challenges:
− other countries, other cultures
− geographic distances
− time differences
Seven "Patterns":
− "Solution"
− "Push Back"
− "Time Pressure"
− "Surprises"
− "Ownership"
− "Mythical Man Month"
− "Cooperation"
© 2014 LogiGear Corporation. All Rights Reserved
Challenge: Other Country
4/10/2014
72
© 2014 LogiGear Corporation. All Rights Reserved
Other Country
Differences in culture
− more on the next slide...
Different languages, and accents
Differences in education
− style, orientation and contents
− position of critical thinking, factual knowledge, practice, theory,...
− US, British, French, Asian, ...
Differences in circumstances
− demographics
− economy, infrastructure
− politics
Apprehension on-shore and off-shore about job security doesn't help in
projects
− management responsibility: understand your strategic intentions, and their consequences, and clarify
them
− be realistic in cost and benefit expectations
© 2014 LogiGear Corporation. All Rights Reserved
More on Culture...
Regional culture. There are numerous factors:
− very difficult to make general statements
• many anecdotes, stories and perceptions, some are very helpful, some have limited general
value
• not sure on impact of regional culture (see also [Al-Ani])
− numerous factors, like history, religion, political system
• e.g. valuing of: critical thinking, theory, bottom-line, relations, status, work-ethic, bad news,
saying 'no'
• entertaining guests, eating habits, alcohol, meat, humor, etc
• position of leaders, position of women managers
• mistakes can be benign and funny, but also damaging, visibly or hidden, in particular perceived
disrespect hurts
Organizational culture
− can be different from country to country, sector to sector, company to company, group to group
− I feel this to be at least as strong as regional culture (see for example [Al-Ani])
− you can have at least some control over this
Professional cultures
− for example engineers, QA, managers, ...
Some ideas to help:
− get to know each other (it helps, see for example [Gotel])
− study the matter, and make adaptations
4/10/2014
73
© 2014 LogiGear Corporation. All Rights Reserved
© 2014 LogiGear Corporation. All Rights Reserved
4/10/2014
74
© 2014 LogiGear Corporation. All Rights Reserved
© 2014 LogiGear Corporation. All Rights Reserved
4/10/2014
75
© 2014 LogiGear Corporation. All Rights Reserved
Challenge: Distance
© 2014 LogiGear Corporation. All Rights Reserved
Distance
Continuous logistical challenges
Lots of costs, and disruptions, for traveling
Distance creates distrust and conflict
− could be "normal" behavior, inherent to humans
Complex coordination can create misunderstandings
− on technical topics
− on actions, priorities, and intentions
4/10/2014
76
© 2014 LogiGear Corporation. All Rights Reserved
Challenge: Time difference
© 2014 LogiGear Corporation. All Rights Reserved
Challenge: Time difference
Additional complication for communication and
coordination
Places a major burden on both on-shore and off-shore
staff
− having to work evenings and/or early mornings
− potential for exhaustion, lack of relaxation, mistakes, irritation
Can easily lead to loss of time at critical moments
Some solutions:
− manage this actively
− constantly seek to optimize task and responsibility allocation
− build the on-shore and off-shore organizations to match
− seek ways to save meeting time, like optimal information handling
4/10/2014
77
© 2014 LogiGear Corporation. All Rights Reserved
Effect of time difference
Report from the team to the US management . . .
Performance comparison TestArchitect 5 and 6
Test Module: "Segment Y, Default Settings"
                     Windows       Linux
TestArchitect 5      ~ 4:16 m      ~ 4:28 m
TestArchitect 6      ~ 11:00 m     ~ 8:00 m
© 2014 LogiGear Corporation. All Rights Reserved
Patterns
Experiences seem to follow patterns
− at least our own experiences do
− variations are numerous, but seem to follow similar lines
− the following are examples, not exhaustive
It can help to recognize patterns quickly, and act upon
them
Resolutions have side-effects, can introduce new issues
− for example strengthening local management means less direct
contact with the project members doing the work
Just about every pattern occurs in every direction
− from your perspective regarding "them"
− their perspective on you, or each other
− sometimes the same, sometimes mirrored
4/10/2014
78
© 2014 LogiGear Corporation. All Rights Reserved
Pattern: "The Solution"
Typical sequence of events:
− the team finds a problem in running a test
− the team discusses it and comes up with a "solution"
− the solution: (1) creates issues, and (2) hides the real
problem
Better way:
− define as an issue
− discuss with project manager and customer
© 2014 LogiGear Corporation. All Rights Reserved
Pattern: "Push Back"
US side, or customer, gives bad direction
Team doesn't like it, but feels obliged to follow orders
The result is disappointing
Team is blamed
− and will speak up even less next time
Better way:
− discuss with the principal/customer at multiple levels
• strategic about direction, operational day-to-day
− empower and encourage the team to speak up
− write plans of approach, and reports
4/10/2014
79
© 2014 LogiGear Corporation. All Rights Reserved
Pattern: "Time Pressure"
Deadline must be met
− no matter what
− use over-time
− "failure is not an option"
Deadlines are sometimes real, sometimes not
− becomes a routine on the US side
− easy to pressure over the email
− very difficult for a non-empowered team to push back
− risk: inflation of urgency
Better way:
− good planning
− proper weighing of deadlines and priorities
− frequent reporting
− local management
© 2014 LogiGear Corporation. All Rights Reserved
Pattern: "Surprises"
Good news travels better than bad news...
− should be the other way around
− the "cover up": "let's fix, no need to tell...."
− over time: needing bigger cover ups to conceal
smaller ones
− not unique for off-shoring, but more difficult to
detect and deal with
Once a surprise happens:
− you will feel frustrated, and betrayed
− fix the problems, point out the consequences of
hiding, avoid screaming and flaming
Better ways:
− agree: NO SURPRISES!!
− emphasize again and again
− train against this
− continuously manage, point out
− the magic word: transparency
4/10/2014
80
© 2014 LogiGear Corporation. All Rights Reserved
Pattern: "Ownership"
Shared responsibility is no responsibility
Effort-based versus result-based
On-shore players feel the off-shore team has a result responsibility
Off-shore team members feel an effort-based responsibility ("work
hard")
Better way:
− clear responsibilities and expectations
− on-shore ownership for quality control of system under test
• and therefore the tests
− off-shore ownership of producing good tests and good automation
− empower according to ownership
© 2014 LogiGear Corporation. All Rights Reserved
Pattern: "Mythical Man Month"
Fred Brooks' classic book, "The Mythical Man-Month":
− "Assigning more programmers to a project running behind schedule
will make it even later"
− "The bearing of a child takes nine months, no matter how many
women are assigned"
In test automation, there must be clear ownership of:
− test design (not just cranking out test cases)
− automation, this is different skill and interest
Assign at least the following roles:
− project lead, owns quality and schedule
− test lead: owns test design, coaches and coordinates the other testers
− automation: make the actions work (assuming ABT, not the test
cases)
Define distinct career paths in: testing, automation,
management
4/10/2014
81
© 2014 LogiGear Corporation. All Rights Reserved
Pattern: "Cooperation"
Communication is tedious, takes a long time
Questions, questions, questions, ...
− reverse: questions don't get answered
For at least one side this falls in private time, which is extra annoying
Misunderstandings, confusion, actions not followed up
− double check apparent "crazy things" with the team before jumping to conclusions, and
actions (assume the other side is not "nuts" or "dumb"...)
Please understand: distance fosters conflicts
− we're born that way, be ready for it
Better ways:
− prioritize training, coaching, preparation and planning. Saves a lot of questions...
− write stuff down, use briefs, minutes
− define workflows and information flows
• buckets, reporting, select and use good tools
− specialize meetings
• table things for in-depth meetings
• ask to meet internally first
− be quick, no more than 30 mins
© 2014 LogiGear Corporation. All Rights Reserved
Cooperation
[Webinar] Visa's Journey to a Culture of Experimentation
 
The Leaders Guide to Getting Started with Automated Testing
The Leaders Guide to Getting Started with Automated TestingThe Leaders Guide to Getting Started with Automated Testing
The Leaders Guide to Getting Started with Automated Testing
 
Use Automation to Assist—Not Replace—Manual Testing
Use Automation to Assist—Not Replace—Manual TestingUse Automation to Assist—Not Replace—Manual Testing
Use Automation to Assist—Not Replace—Manual Testing
 
jerry.metcalf.102516.pptx
jerry.metcalf.102516.pptxjerry.metcalf.102516.pptx
jerry.metcalf.102516.pptx
 
Accelerate Testing in Agile through a Shared Business Domain Language
Accelerate Testing in Agile through a Shared Business Domain LanguageAccelerate Testing in Agile through a Shared Business Domain Language
Accelerate Testing in Agile through a Shared Business Domain Language
 

Más de TechWell

Failing and Recovering
Failing and RecoveringFailing and Recovering
Failing and RecoveringTechWell
 
Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization TechWell
 
Test Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTest Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTechWell
 
System-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartSystem-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartTechWell
 
Build Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyBuild Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyTechWell
 
Implement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowImplement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowTechWell
 
Develop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityDevelop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityTechWell
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyTechWell
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTechWell
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipTechWell
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsTechWell
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GameTechWell
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsTechWell
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationTechWell
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessTechWell
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateTechWell
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessTechWell
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTechWell
 
Scale: The Most Hyped Term in Agile Development Today
Scale: The Most Hyped Term in Agile Development TodayScale: The Most Hyped Term in Agile Development Today
Scale: The Most Hyped Term in Agile Development TodayTechWell
 

Más de TechWell (20)

Failing and Recovering
Failing and RecoveringFailing and Recovering
Failing and Recovering
 
Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization Instill a DevOps Testing Culture in Your Team and Organization
Instill a DevOps Testing Culture in Your Team and Organization
 
Test Design for Fully Automated Build Architecture
Test Design for Fully Automated Build ArchitectureTest Design for Fully Automated Build Architecture
Test Design for Fully Automated Build Architecture
 
System-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good StartSystem-Level Test Automation: Ensuring a Good Start
System-Level Test Automation: Ensuring a Good Start
 
Build Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test StrategyBuild Your Mobile App Quality and Test Strategy
Build Your Mobile App Quality and Test Strategy
 
Implement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlowImplement BDD with Cucumber and SpecFlow
Implement BDD with Cucumber and SpecFlow
 
Develop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your SanityDevelop WebDriver Automated Tests—and Keep Your Sanity
Develop WebDriver Automated Tests—and Keep Your Sanity
 
Ma 15
Ma 15Ma 15
Ma 15
 
Eliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps StrategyEliminate Cloud Waste with a Holistic DevOps Strategy
Eliminate Cloud Waste with a Holistic DevOps Strategy
 
Transform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOpsTransform Test Organizations for the New World of DevOps
Transform Test Organizations for the New World of DevOps
 
The Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—LeadershipThe Fourth Constraint in Project Delivery—Leadership
The Fourth Constraint in Project Delivery—Leadership
 
Resolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile TeamsResolve the Contradiction of Specialists within Agile Teams
Resolve the Contradiction of Specialists within Agile Teams
 
Pin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile GamePin the Tail on the Metric: A Field-Tested Agile Game
Pin the Tail on the Metric: A Field-Tested Agile Game
 
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile TeamsAgile Performance Holarchy (APH)—A Model for Scaling Agile Teams
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
 
A Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps ImplementationA Business-First Approach to DevOps Implementation
A Business-First Approach to DevOps Implementation
 
Databases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery ProcessDatabases in a Continuous Integration/Delivery Process
Databases in a Continuous Integration/Delivery Process
 
Mobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to AutomateMobile Testing: What—and What Not—to Automate
Mobile Testing: What—and What Not—to Automate
 
Cultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for SuccessCultural Intelligence: A Key Skill for Success
Cultural Intelligence: A Key Skill for Success
 
Turn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile TransformationTurn the Lights On: A Power Utility Company's Agile Transformation
Turn the Lights On: A Power Utility Company's Agile Transformation
 
Scale: The Most Hyped Term in Agile Development Today
Scale: The Most Hyped Term in Agile Development TodayScale: The Most Hyped Term in Agile Development Today
Scale: The Most Hyped Term in Agile Development Today
 

Último

Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostZilliz
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embeddingZilliz
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 

Último (20)

Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embedding
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 

The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More

  • 7. Why not test? It costs time and money; you might find problems...; we forgot to plan for it; we need the resources for development; it is difficult; it is hard to manage. Why Automate Tests? It is more fun; it can save time and money, potentially improving time-to-market; it can capture key application knowledge in a re-usable way; it consolidates a structured way of working when established as an integral part of the system development process; it can speed up development life cycles; and execution typically is more reliable, since a robot is not subjective.
  • 8. The Power of Robot Perception (the slide shows the text: "FINISHED FILES ARE THE RE SULT OF YEARS OF SCIENTI FIC STUDY COMBINED WITH THE EXPERIENCE OF YEARS..."). Why not Automate? It can rule out the human element (promotes "mechanical" testing, might not find "unexpected" problems); it is more sensitive to good practices (pitfalls are plentiful); it creates more software to manage; it needs technical expertise in the test team; and it tends to dominate the testing process at the cost of good test development. Maintenance can crush automation...
  • 9. The Power of Human Perception (deliberately scrambled text): "Olny srmat poelpe can raed tihs. I cdnuolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg. The phaonmneal pweor of the hmuan mnid, aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe." More examples: Notice at an event: "Those who have children and don't know it, there is a nursery downstairs." In a New York restaurant: "Customers who consider our waiters uncivil ought to see the manager." In a bulletin: "The eighth-graders will be presenting Shakespeare's Hamlet in the basement Friday at 7 PM. You are all invited to attend this drama." In the offices of a loan company: "Ask about our plans for owning your home." In the window of a store: "Why go elsewhere and be cheated when you can come here?"
  • 10. About tests in big projects: Regular tests may be activities; complex tests are products. In fact, any test that you want to run more than once is a product. Every test that is written down with sufficient detail should be automated. Automation is no longer optional in most situations and is a key prerequisite of most agile approaches. How tests are written and automated can make or break large-scale testing. Keywords (actions) to help scalability: The test developer creates tests using keywords; I call them "actions", with each action having a keyword and arguments. The automation task focuses on automating the keywords, and each keyword is automated only once. The slide shows 3 actions, each with an action keyword and arguments, read from top to bottom:

                    prod id   name      stock
      new product   hamm      hammer    5

                    prod id   quantity
      add to stock  hamm      20

                    prod id   expected
      check stock   hamm      25
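To make the keyword idea concrete, here is a minimal sketch (in Python, not the deck's actual tooling) of how the three actions above could be driven: each keyword is implemented exactly once as a function, and the test lines are simply dispatched to those functions. The stock dictionary and the function names are invented stand-ins for a real system under test and a real automation layer.

    # Minimal sketch of keyword-driven execution (illustrative only).
    # Each keyword is automated once as a Python function; test lines
    # are (keyword, arguments) pairs read from top to bottom.

    stock = {}  # stand-in for the system under test

    def new_product(prod_id, name, qty):
        stock[prod_id] = {"name": name, "stock": int(qty)}

    def add_to_stock(prod_id, quantity):
        stock[prod_id]["stock"] += int(quantity)

    def check_stock(prod_id, expected):
        actual = stock[prod_id]["stock"]
        result = "pass" if actual == int(expected) else "fail"
        print(f"check stock {prod_id}: expected {expected}, actual {actual} -> {result}")

    ACTIONS = {
        "new product": new_product,
        "add to stock": add_to_stock,
        "check stock": check_stock,
    }

    test_lines = [
        ("new product", ["hamm", "hammer", "5"]),
        ("add to stock", ["hamm", "20"]),
        ("check stock", ["hamm", "25"]),
    ]

    for keyword, args in test_lines:
        ACTIONS[keyword](*args)  # the automation task: one implementation per keyword

The point of the dispatch table is the one stressed on the slide: the test developer only writes keyword lines, and the automation engineer implements each keyword a single time.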
  • 11. Potential benefits of keywords: More productive: more tests, better tests (more breadth, more depth). Easier to read and understand: no program code, tests can be self-documenting, which facilitates involvement of non-technical people such as domain experts. Fast: results can be quickly available, since the design directly drives the automation. More targeted efforts for the automation engineers: less repetition, the test design helps in creating a structured solution (factoring), and they can focus on critical and complex automation challenges more easily. Automation can be made more stable and maintainable, with limited and manageable impact of changes in the system under test. A significant portion of the tests can typically be created early in a system life cycle, dealing with execution details later. Case: International Financial Project: One of the largest projects to date with action words. Over 10,000 windows, meant for use in 85 countries. Long development cycle (400 pp, 4 years and counting). Maintenance very hard, and testing a major bottleneck. Much investment in automation techniques was needed to become successful; a lot of attention to the team and the work environment also helped the success. Team of 35 test developers, 2 automation engineers.
  • 12. Risks of keywords: Keywords are often seen as a silver bullet, treated as a technical "trick", and the complications are underestimated. The method needs understanding and experience to be successful; pitfalls are many and can have a negative effect on the outcome (some of the worst automation projects I've seen were with keywords). Testers might get pushed into a half-baked automation role: the risk is that you lose a good tester and gain a poor programmer, focus may shift from good (lean and mean) testing to "getting automation to work", and the actual automation challenges are better left to experienced automation professionals. Lack of method and structure can risk manageability: maintainability may not be as good as hoped, and tests may turn out shallow and redundant. Case: A Financial Project: Large project, with a large automation team; an early adopter of the keywords idea. However, the keywords were applied with little method or direction, and management did not see a need for help or coaching. Result: many hard-to-maintain tests, crushing complexity, redundant tests and actions; the method was abandoned.
  • 13. Keywords need a method: By themselves keywords don't provide much scalability; they can even backfire and make automation more cumbersome. A method can help tell you which keywords to use when, and how to organize the process. Today we'll look at Action Based Testing (ABT), which addresses test management, test development and automation, with a large focus on test design as the main driver for automation success. The central deliverables in ABT are the "test modules": developed in spreadsheets, each test module contains "test objectives" and "test cases", and each test module is a separate (mini) project that can involve different stakeholders. Example of an ABT test module: it consists of (1) an initial part, (2) test cases and (3) a final part. The focus is on readability and a clear scope; navigation details are avoided unless they are meant to be tested:

      TEST MODULE     Car Rental Payments
                      user
      start system    jdoe

      TEST CASE       TC 01 Rent a car
                      first name   last name   car           weeks
      enter rental    Mary         Renter      Ford Escape   2
                      last name    amount
      check payment   Renter       140.42

      FINAL
      close application
  • 14. Example of a "low level" test module: in "low level" tests, interaction details are not hidden, since they are the target of the test. The right level of abstraction depends on the scope of the test, and is an outcome of your test design process:

      TEST MODULE     Screen Flow
                      user
      start system    john

      TEST CASE       TC 01 "New Order" button
                      window    control
      click           main      new order
                      window
      check window exists       new order

      FINAL
      close application

Re-use actions to make new actions: in the example below another sheet is used, but if you code actions you could do something similar. Often low-level tests are re-used in these action definitions:

      ACTION DEFINITION   check payment
                          user          default value
      argument            last name     Jones
      argument            amount

                          window    control        value
      enter               main      last name      # last name
                          window    control
      click               main      view balance
                          window    control        expected
      check               main      balance        # amount
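As an illustration of the action re-use idea, the sketch below (assumed Python pseudocode of an automation layer, not TestArchitect's actual API) shows a business-level action, check payment, composed from interaction-level actions; the enter, click and check helpers merely print what a real UI driver would do.

    # Hypothetical interaction-level actions; a real implementation would
    # call a UI driver here instead of printing.
    def enter(window, control, value):
        print(f"enter   {window}.{control} = {value}")

    def click(window, control):
        print(f"click   {window}.{control}")

    def check(window, control, expected):
        print(f"check   {window}.{control} == {expected}")

    # Business-level action built from the interaction-level ones,
    # mirroring the "check payment" ACTION DEFINITION sheet above.
    def check_payment(last_name="Jones", amount=None):
        enter("main", "last name", last_name)
        click("main", "view balance")
        check("main", "balance", amount)

    # Used the way the Car Rental Payments test module uses it:
    check_payment(last_name="Renter", amount="140.42")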
  • 15. Overview of Action Based Testing (diagram): a High Level Test Design / Test Development Plan defines the "chapters"; the test modules, each with test objectives and test cases, create the "chapters"; the actions (interaction-level ones like "enter" and "check property", and business-level ones like "enter rental" and "check bill") create the "words"; and the automation makes the words work. Case: Stock Exchange: Transition from floor-based to screen-based trade. Created on the basis of an existing standard package, with the result that there were very few specifications. Consisting of four major, different systems that need to work in real time. Failures and bugs are not an option: it is the core of the financial system of the country, with 100K revenue per second, and traders not necessarily following the rules. In-depth knowledge was limited to four people, nicknamed "The Four Daltons" after characters in a French comic book series about the wild west; none of the four Daltons was involved in testing, so testing was in a vacuum. Three months to go: test development (and scripted automation) had failed, the test department was not cooperating well with developers and domain experts, internal and external auditors had raised the alarm, and... the Dutch Crown Prince was scheduled to put the system into use!
  • 16. Case: Stock Exchange (approach and results). Test set: make it comprehensive, make it in-depth and aggressive, make it easy to assess and approve. Organization: get the right people involved (testing, automation, etc.), use scarce resources efficiently (in particular the four Daltons), and work with stakeholders to make the process transparent. Technical: use the keyword method ("action words"), use "test objectives" so auditors can quickly see what you're testing, and use good test design; don't mix apples and oranges. "Sign-off lubrication": the auditors signed off on the tests, not the test results ("the test is complete", not "the system works well"). Results: the deadline was met one day before the final date, the automated tests were the only ones used for acceptance, and no functional errors were found afterwards. Question: What is wrong with the following pictures?
  • 17. No Y2K problems in Auckland Airport?? Anything wrong with this instruction? "You should change your battery or switch to outlet power immediately to keep from losing your work."
  • 18. Why Better Test Design? Many tests are quite mechanical now: they blindly follow specs or requirements, which often is okay but lacks aggression; no combinations, no unexpected situations ("methodical" does not have to mean "mechanical"). For a higher "ambition level" you need understanding of the system under test and the business under test, an analytical understanding of what could go wrong, and creativity, plus the commitment to use it. Poor test development results in cumbersome automation due to lack of focus and in tedious retest cycles, losing the agile advantage. Are you suffering from lame tests too? Effects of Better Test Design: Quality and manageability of the tests: many tests are one-to-one related to specifications, user stories or requirements, which often is okay but lacks aggression; no combinations, no unexpected situations, lame and boring; such tests have a hard time finding (interesting) bugs. Better automation: when unneeded details are left out of tests they don't have to be maintained; avoid "over-checking", i.e. creating checks that are not in the scope of a test but may fail after system changes; and limit the impact of system changes on tests, making that impact more manageable. I have come to believe that successful automation is less a technical challenge than a test design challenge.
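A small invented example of the "over-checking" point: both tests below pass against the current (faked) screen contents, but only the focused one is scoped to the test objective, so only it survives cosmetic UI changes. All names and the screen model are assumptions for the sketch, not from the deck.

    # Hypothetical illustration of over-checking. The module's scope is the
    # premium amount; the over-checked version also verifies UI details that
    # belong in a separate low-level test module, so any cosmetic UI change
    # breaks it needlessly.

    screen = {  # stand-in for what the UI currently shows
        "window title": "Policy Editor",
        "premium label": "Premium (monthly)",
        "premium value": "140.42",
    }

    def test_premium_overchecked():
        assert screen["window title"] == "Policy Editor"       # out of scope here
        assert screen["premium label"] == "Premium (monthly)"  # out of scope here
        assert screen["premium value"] == "140.42"             # the real objective

    def test_premium_focused():
        assert screen["premium value"] == "140.42"             # only the objective

    test_premium_overchecked()
    test_premium_focused()
    print("both versions pass today; only the focused one survives a UI facelift")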
  • 19. The case for organizing tests in BIG projects: it can help keep the volume down, isolate the complexities, make automation efficient and re-usable, and deal with changing requirements. For example, much of the tested subject matter is not system specific but business specific: a mortgage is a mortgage. The Three "Holy Grails" of Test Design: a metaphor to depict three main steps in test design, using "grail" to illustrate that there is no one perfect solution, but that it matters to pay attention (to search). It is about the quality of tests, but most of all about scalability and maintainability in BIG projects: (1) organization of tests into test modules, (2) the right approach for each test module, and (3) the proper level of detail in the test specification.
  • 20. What's the trick... Have or acquire facilities to store and organize your content. Select your stuff. Decide where to put what: assign and label the shelves. Put it there. If the organization is no longer sufficient, add to it or change it.
  • 21. Properties of a good breakdown: test modules are well differentiated and clear in scope, reflect the level of the tests, are balanced in size and amount, are mutually independent, and fit the priorities and planning of the project. Breakdown criteria. Straightforward criteria: functionality (customers, finances, management information, UI, ...); architecture of the system under test (client, server, protocol, subsystems, components, modules, ...); kind of test (navigation flow, negative tests, response time, ...). Additional criteria: stakeholders (like "Accounting", "Compliance", "HR", ...); complexity of the test (put complex tests in separate modules); technical aspects of execution (special hardware, multi-station, ...); overall project planning (availability of information, timelines, sprints, ...); risks involved (extra test modules for high-risk areas); ambition level (smoke test, regression, aggressive, ...).
  • 22. What is probably not a good design: navigational and functional tests are mixed (for example "over-checking": a test of a premium calculation also checks the existence of a window); you have to change all of the test modules for every new release of the system under test; all test modules have a similar design; test modules are dependent on each other; you can't start developing any test modules early in the life cycle. Symptoms: tediousness in the test and test automation process, no sense of control, complaining people, unnecessarily high test maintenance (changes in the system under test impact many tests, and it is hard to understand which tests need to be modified), and difficulties in running any test (teams start "debugging" tests).
  • 23. Breakdown examples: CRUD tests (Create, Read, Update, Delete) for all entity types in the app (like "order", "customer", "well", etc.), in various types and situations. Forms and value entry: does each form work (try to test form by form, not entity by entity); mandatory and optional fields, valid and invalid values; UI elements and their properties and contents; function keys, tab keys, special keys. Screen and transaction flows: like cancel an order, menu navigation, use the browser back and forward buttons; is the data in the database correct after each flow. Business transactions and business rules: identify situations that the tests need to try. Function tests: do individual functions work (can I count orders, can I calculate a discount, etc.). End-to-end tests: like enter a sales order, then check inventory and accounting. Tests with specific automation needs (like multi-station tests). Tests of non-UI functions. High-ambition tests (aggressive tests): can I break the system under test. Identifying the modules. Step 1, top down, establish the main structure: analyze what the business is and what the system does, how it is technically organized, and whether other "primary criteria" apply; use the list in the "breakdown examples" slide as a starting point (see it as low-hanging fruit: items that tend to apply well in many projects); also visit the "secondary criteria", not always applicable, but they can help refine the design further. Step 2, bottom up, refine and complete: study individual functionalities and checks (for example from existing test cases) and identify test modules for them if needed; identify and discuss any additional criteria and needed testing situations; review and discuss the resulting list(s) of test modules; create some early drafts of test modules and adjust the list if needed. Repeat steps 1 and 2 if needed.
  • 24-26. Example of a high-level test design: a list of test modules, each with its scope, a priority and an execution status. The same spreadsheet is shown on several slides; the final version tracks the status per build (Build 1 / Build 2 / Build 3) instead of a single Status column:

      Test Module                         Scope                                                                                              Prio  Status
      Model Life Cycles                   Create, store, delete Models (= formula + data), as part of SYS sessions                           1     pass
      Result Life Cycles                  Create, store outputs. See them in the process store.                                              1     pass
      Formula Life Cycles                 Create, edit, manage, remove formulas                                                              2     pass
      Formula Editor                      Buttons, operations, undo                                                                          3     pass
      Repository                          Display of the Modeler repository, presence of user formulas, drag-and-drop usage,
                                          effect of changing the repository folder (environment variable)                                   1     failed
      Model Store in Repository           Presence, re-run, delete                                                                           1     pass
      Repository UI                       Example: selecting an item shows its description                                                   2     errors
      Formula Evaluation                  Correctness of results, valid/invalid arguments, boundary analyses, special arguments              1     pass
      Built-in Formulas                   Presence, correctness, valid/invalid arguments, boundaries, special arguments, equivalence classes 1     pass
      Data Table Association              Associate tables view, change and remove associations, data applicability,
                                          for existing and defined formulas                                                                  2     pass
      Quick Access Buttons                Life cycle of Quick Access buttons, correctness for the built-in ones                              3     dev
      Formula Arguments                   Presence, argument types, argument entry, parameters, defaults                                     2     pass
      Arguments for Built-in Formulas     Arguments, argument types and defaults for each pre-defined formula                                2     failed
      Area of Interest Relations          Defaulting, tree visibility, select/deselect, ...                                                  1     pass
      Model Execution                     Model times, start, stop (cancel), restart ("chunks", "timeboxes", ... needs more information)     3     pass
      Graphics                            Graphical representation of various data types and data sets                                       1     pass
      Graphics Viewing                    Zoom, select, drag and drop (no 3D now)                                                            1     pass
      Administration                      Users, projects, authorization                                                                     1     pass
      Model Results in Central Database   Storing, removing, using, correctness, ... (there are some other applications,
                                          mostly legacy, that can do the same; Models to compare)                                            1     pass
      Modeler UI                          Various controls, panels, tabs                                                                     2     pass
  • 27. Questions for test design: Does your organization make something like a high-level test design? If yes, how do you document it? Case study: Large IT provider, new version of one of their major websites. The test scope was a user acceptance test (functional acceptance); the users were the "business owners". Development was off-shore.
  • 28. Case study (continued): Test development was done separately from automation (time-line for test development: May to October; time-line for automation, roughly: January to February). All tests were reviewed and approved by the business owners; acceptance was finished by the end of the test development cycle. Example of a test development plan:

      Nr  Module                                 Business Owner               Date to BO
      1   Portal Navigation, Audience            Robyn Peterson               05/23
      2   Portal Navigation, Search              Ted Jones                    05/27
      3   Membership, registration               Steve Shao                   06/03
      4   Portal Navigation, Category            Ted Jones                    06/08
      5   Portal Navigation, Topic and Expert    Ted Jones                    06/13
      6   Access Control                         Mike Soderfeldt              06/17
      7   Portal Navigation, Task                Ted Jones                    06/22
      8   Contact DSPP                           Ted Jones                    06/27
      9   Portal search                          Mike Soderfeldt              07/01
      10  Membership, review and update          Steve Shao                   07/05
      11  Program contact assignment             Alan Lai                     07/11
      12  Company, registration                  Steve Shao                   07/14
      13  Catalog, view and query                Robyn Peterson               07/19
      14  Site map                               Ted Jones                    07/25
      15  Membership, affiliation                Steve Shao                   07/28
      16  Learn about DSPP                       Ted Jones                    08/01
      17  Products and services                  Steve Shao, Robyn Peterson   08/08
      18  What's new                             Ted Jones                    08/11
      19  Company, life cycle                    Steve Shao, Alan Lai         08/17
      20  Specialized programs                   Ted Jones, Steve Shao        08/22
      21  Customer surveys                       Ted Jones                    08/29
      22  Software downloads                     Mike Soderfeldt              09/01
      23  Newsletters                            Ted Jones                    09/06
      24  Internationalization and localization  Ted Jones                    09/13
      25  Membership, life cycles                Steve Shao                   09/19
      26  Collaboration, forums                  Ted Jones                    09/23
      27  Collaboration, blogs                   Mike Soderfeldt              09/28
      28  Collaboration, mailing lists           Ted Jones                    10/03
  • 29. 4/10/2014 27 © 2014 LogiGear Corporation. All Rights Reserved Review Process with Stake Holders Test Team sends draft Module to Stake Holder Stake Holder reviews: - coverage - correctness Stake Holder returns notes: - additions - corrections Test Team receives and processes notes changes needed? Stake Holder returns notice of approval Test Team marks the Module as "Final" END no yes START © 2014 LogiGear Corporation. All Rights Reserved Case Study, Results All tests were developed and reviewed on schedule − many notes and questions during test development phase The automation was 100% of the tests − all actions were automated, thus automating all test modules The test development took an estimated 18 person months − one on-shore resource, two off-shore resources The automation took between one and two months − focused on actions − most time was spent in handling changes in the interface (layout of pages etc)
  • 30. 4/10/2014 28 © 2014 LogiGear Corporation. All Rights Reserved Case: The French Director Mid size company Struggling under high pressure Testing of their main product, standard financial software Control and priority main issue Unfamiliar business culture Main instrument: module break down © 2014 LogiGear Corporation. All Rights Reserved Test Modules versus Test Cases The test module is a bigger unit in the test design − easier to identify − a chapter rather than a paragraph − easier to plan and manage, as a product (can be treated as part of product backlog in scrum projects) Better flow of execution − each test case can set up for the next one − keep test modules independent, test cases can be dependent Test cases become creative output, rather than stifling input − avoids having to define all test cases at once early in the process Clear scope helps to identify cases, actions and checks − using "test objectives" to further detail scope − had a significant effect on maintainability
  • 31. 4/10/2014 29 © 2014 LogiGear Corporation. All Rights Reserved "Thou Shalt Not Debug Tests..." Large and complex test projects can be hard to "get to run" If they are, however, start by taking another good look at your test design... Rule of thumb: don't debug tests. If tests don't run smoothly, make sure: − lower level tests have been successfully executed first -> UI flow in the AUT is stable − actions and interface definitions have been tested sufficiently with their own test modules -> automation can be trusted − are your test modules not too long or complex? © 2014 LogiGear Corporation. All Rights Reserved What about existing tests? Compare to moving house: − some effort can't be avoided − be selective, edit your stuff, • look at the future, not the past − first decide where to put what, then put it there − moving is an opportunity, you may not get such a chance again soon Follow the module approach − define the modules and their scope as if from scratch − use the existing test cases in two ways: • verify completeness • harvest and re-use them for tests and for actions − avoid porting over "step by step", in particular avoid over-checking
  • 32. 4/10/2014 30 © 2014 LogiGear Corporation. All Rights Reserved Defining test runs using "test suites" Build Acceptance Test Smoke Test System Test Functional Acceptance Test Integration Test © 2014 LogiGear Corporation. All Rights Reserved Grail 2: Approach per Test Module Plan the test module: − when to develop: is enough specification available − when to execute: make sure the functionality at action level is well-tested and working already Process: − do an intake: understand what is needed and devise an approach − analysis of requirements − formulate "test objectives" − create "test cases" Identify stakeholders and their involvement: − users, subject matter experts − developers − auditors Choose testing techniques if applicable: − boundary analysis, decision tables, etc
  • 33. 4/10/2014 31 © 2014 LogiGear Corporation. All Rights Reserved Eye on the ball, Scope Always know the scope of the test module The scope should be unambiguous The scope determines many things: − what the test objectives are − which test cases to expect − what level of actions to use − what the checks are about and which events should generate a warning or error (if a “lower” functionality is wrong) © 2014 LogiGear Corporation. All Rights Reserved What I have seen not work Lots of detailed steps when those steps are not the focus of the test All actions high level, or all actions low level "Over-Checking": having checks that don't fit the scope of the test Over-use of data externalization (data driven) with values coming from files or data sets without a clear testing reason Combinatorial explosions: test all ... for all ... in all ... Many tests for forms and dialogs, few tests for business processes
  • 34. 4/10/2014 32 © 2014 LogiGear Corporation. All Rights Reserved An example test for the Modeler model name arguments formula create model vegas winner x 10*x argument value set argument x some money model name expected run model vegas winner a lot more money © 2014 LogiGear Corporation. All Rights Reserved Too detailed?
Step | Description | Expected
step 16 | Click the new formula button to start a new calculation. | The current formula is cleared. If it had not been saved, a message will show
step 17 | Enter "vegas winner" in the name field | The title will show "vegas winner"
step 18 | Open the formula editor by clicking the '+' button for the panel "formula editor" | The formula editor will show with an empty formula (only comment lines)
step 19 | Add some lines and enter "10*x;" | The status bar will show "valid formula". There is a "*" marker in the title
step 20 | Click the Save formula button | The formula is saved, the "*" will disappear from the title
step 21 | Open the panel with the arguments by clicking the '+' button | There are two lines, for 'x' and 'y'
step 22 | Click on the value type cell and select "currency" | A button to select a currency appears, with default USD
step 23 | Click on the specify argument values link | The argument specification dialog is shown
  • 35. 4/10/2014 33 © 2014 LogiGear Corporation. All Rights Reserved Detail out the scope with test objectives ... TO-3.51 The exit date must be after the entry date ... test objective TO-3.51 name entry date exit date enter employment Bill Goodfellow 2002-10-02 2002-10-01 check error message The exit date must be after the entry date. © 2014 LogiGear Corporation. All Rights Reserved Examples of Testing Techniques Equivalence class partitioning − any age between 18 and 65 Boundary condition analysis − try 17, 18, 19 and 64, 65, 66 Error guessing − try Cécile Schäfer to test sorting of a name list Exploratory − "Exploratory testing is simultaneous learning, test design, and test execution", James Bach, www.satisfice.com Error seeding − deliberately injecting faults in a test version of the system, to see if the tests catch them − handle with care, don't let the bugs get into the production version Decision tables − define possible situations and the expected responses of the system under test State transition diagrams − identify "states" of the system, and have your tests go through each transition between states at least once Jungle Testing − focus on unexpected situations, like hacking attacks Soap Opera Testing − describe typical situations and scenarios in the style of episodes of a soap opera, with fixed characters − high density of events, exaggerated − make sure the system under test can still handle these
  • 36. 4/10/2014 34 © 2014 LogiGear Corporation. All Rights Reserved "Jungle Testing" Expect the unexpected − unexpected requests − unexpected situations (often data oriented) − deliberate attacks − how does a generic design respond to a specific unexpected event? Difference in thinking − coding bug: implementation is different from what was intended/specified − jungle bug: system does not respond well to an unexpected situation To address − study the matter (common hack attacks, ...) − make a risk analysis − make time to discuss about it (analysis, brainstorm) − involve people who can know − use "exploratory testing" (see James Bach's work on this) − use an agile approach for test development − consider randomized testing, like "monkey" testing New York. The city of a million stories. Half of them are true, the other half just haven't happened yet -- Dr Who © 2014 LogiGear Corporation. All Rights Reserved Soap Opera Testing Informal scenario technique to invite subject-matter experiences into the tests, and efficiently address multiple objectives Using a recurring theme, with “episodes” About “real life” But condensed And more extreme Typically created with a high involvement of end-users and/or subject-matter experts It can help create a lot of tests quickly, and in an agile way
  • 37. 4/10/2014 35 © 2014 LogiGear Corporation. All Rights Reserved Lisa Crispin: Disorder Depot . . . There are 20 preorders for George W. Bush action figures in "Enterprise", the ERP system, awaiting the receipt of the items in the warehouse. Finally, the great day arrives, and Jane at the warehouse receives 100 of the action figures as available inventory against the purchase order. She updates the item record in Enterprise to show it is no longer a preorder. Some time passes, during which the Enterprise background workflow to release preorders runs. The 20 orders are pick-released and sent down to the warehouse. Source: Hans Buwalda, Soap Opera Testing (article), Better Software Magazine, February 2005 © 2014 LogiGear Corporation. All Rights Reserved Lisa Crispin: Disorder Depot . . . Then Joe loses control of his forklift and accidentally drives it into the shelf containing the Bush action figures. All appear to be shredded to bits. Jane, horrified, removes all 100 items from available inventory with a miscellaneous issue. Meanwhile, more orders for this very popular item have come in to Enterprise. Sorting through the rubble, Jane and Joe find that 14 of the action figures have survived intact in their boxes. Jane adds them back into available inventory with a miscellaneous receipt.
  • 38. 4/10/2014 36 © 2014 LogiGear Corporation. All Rights Reserved Lisa Crispin: Disorder Depot . . . This scenario tests • Preorder process • PO receipt process • Miscellaneous receipt and issue • Backorder process • Pick-release process • Preorder release process • Warehouse cancels © 2014 LogiGear Corporation. All Rights Reserved Vary your tests? Automated tests have a tendency to be rigid, and predictable Real-world situations are not necessarily predictable Whenever possible try to vary: − by selecting other data cases that still fit the goal of the test − by randomizing the behavior of the test
  • 39. 4/10/2014 37 © 2014 LogiGear Corporation. All Rights Reserved Generation and randomization techniques Model-based − use models of the system under test to create tests − see: Harry Robinson, www.model-based-testing.org, and Hans Buwalda, Better Software, March 2003 Data driven testing − apply one test scenario to multiple data elements − either coming from a file or produced by the automation "Monkey testing" − use automation to generate random data or behavior − "smart monkeys" will follow typical user behavior, most helpful in efficiency − "dumb monkeys" are more purely random, may find more unexpected issues − long simulations can expose bugs traditional tests won't find Extended Random Regression − have a large database of tests − randomly select and run them, for a very long time − this will expose bugs otherwise hidden − see Cem Kaner et al.: "High Volume Test Automation", StarEast 2004 © 2014 LogiGear Corporation. All Rights Reserved Data Driven Testing Separate test logic from the data Possible origins for the data: − earlier steps in the test − data table − randomizer, or other formula − external sources, like a database query Use "variables" as placeholders in the test case, instead of hard values Data driven is powerful, but use modestly: − value cannot be known at test time, or changes over time − having many data variations is meaningful for the test
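To make the data-driven pattern above concrete outside any particular tool, here is a minimal Python sketch; the in-memory inventory, the action functions and the data values are hypothetical stand-ins for the keyword actions shown on the slides.

```python
# In-memory stand-in for the system under test, so the sketch is self-contained.
_inventory = {"Chevvy Volt": 10, "Ford Escape": 8}

def get_quantity(car):
    # Hypothetical action: read the available stock for a car model.
    return _inventory[car]

def rent_car(first, last, car):
    # Hypothetical action: rent a car, which should reduce the stock by one.
    _inventory[car] -= 1

def check_quantity(car, expected):
    # Check designed by the tester: a mismatch here is a fail, not an error.
    assert get_quantity(car) == expected, f"{car}: {get_quantity(car)} != {expected}"

def run_data_driven(rows):
    # The test logic is written once; the data rows drive the repetitions,
    # mirroring a "repeat for data set" keyword. Rows could equally come
    # from a CSV file, a database query, or a randomizer.
    for row in rows:
        before = get_quantity(row["car"])
        rent_car(row["first"], row["last"], row["car"])
        check_quantity(row["car"], before - 1)

if __name__ == "__main__":
    run_data_driven([
        {"first": "John", "last": "Doe", "car": "Chevvy Volt"},
        {"first": "Mary", "last": "Kane", "car": "Ford Escape"},
    ])
```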
  • 40. 4/10/2014 38 © 2014 LogiGear Corporation. All Rights Reserved Variables and expressions with keywords This test does not need an absolute number for the available cars, just wants to see if a stock is updated As a convention we denote an assignment with ">>" The "#" indicates an expression TEST CASE TC 02 Rent some more cars car available get quantity Chevvy Volt >> volts first name last name car rent car John Doe Chevvy Volt rent car John Doe Chevvy Volt car expected check quantity Chevvy Volt # volts - 2 © 2014 LogiGear Corporation. All Rights Reserved Data driven testing with keywords The test lines will be repeated for each row in the data set The values represented by "car", "first" and "last" come from the selected row of the data set TEST CASE TC 03 Check stocks data set use data set /cars car available get quantity # car >> quantity first name last name car rent car # first # last # car car expected check quantity # car # quantity - 1 repeat for data set DATA SET cars car first last Chevvy Volt John Doe Ford Escape Mary Kane Chrysler 300 Jane Collins Buick Verano Tom Anderson BMW 750 Henry Smyth Toyota Corolla Vivian Major
  • 41. 4/10/2014 39 © 2014 LogiGear Corporation. All Rights Reserved Combinations Input values − determine equivalence classes of values for a variable or field − for each class pick a value (or randomize) Options, settings Configurations − operating systems, operating system versions and flavors • Windows service packs, Linux distributions − browsers, browser versions − protocol stacks (IPv4, IPv6, USB, ...) − processors − DBMS's Combinations of all of the above Trying all combinations will spin out of control quickly © 2014 LogiGear Corporation. All Rights Reserved Pairwise versus exhaustive testing Group values of variables in pairs (or tuples with more than 2) Each pair (tuple) should occur in the test at least once − maybe not in every run, but at least once before you assume "done" − consider going through combinations round-robin, for example pick a different combination every time you run a build acceptance test − in a NASA study: • 67 percent of failures triggered by a single value • 93 percent by two-way combinations, and • 98 percent by three-way combinations Example, configurations − operating system: Windows XP, Apple OS X, Red Hat Enterprise Linux − browser: Internet Explorer, Firefox, Chrome − processor: Intel, AMD − database: MySQL, Sybase, Oracle − 72 combinations possible, to test each pair: 10 tests Example of tools: − ACTS from NIST, PICT from Microsoft, AllPairs from James Bach (Perl) − for a longer list see: www.pairwise.org These techniques and tools are supportive only. Often priorities between platforms and values can drive more informed selection Source: PRACTICAL COMBINATORIAL TESTING, D. Richard Kuhn, Raghu N. Kacker, Yu Lei, NIST Special Publication 800-142, October, 2010
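As a toy illustration of the pairwise idea (this is not one of the tools named above, just a greedy sketch), the following Python script builds a small pairwise set for a parameter space; the parameter values are illustrative placeholders, and real projects would use their own platform lists and priorities.

```python
from itertools import combinations, product

# Illustrative parameter values; the exhaustive count is the product of the
# sizes of these lists, the pairwise set is usually an order of magnitude smaller.
parameters = {
    "os":        ["Windows", "OS X", "Linux"],
    "browser":   ["IE", "Firefox", "Chrome"],
    "processor": ["Intel", "AMD"],
    "database":  ["MySQL", "Sybase", "Oracle"],
}
names = list(parameters)

def pairs_of(config):
    # All two-way (parameter, value) combinations covered by one configuration.
    return {((a, config[a]), (b, config[b])) for a, b in combinations(names, 2)}

def greedy_pairwise(parameters):
    # Every value pair that must occur together at least once.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va in parameters[a]:
            for vb in parameters[b]:
                uncovered.add(((a, va), (b, vb)))
    candidates = [dict(zip(names, values)) for values in product(*parameters.values())]
    chosen = []
    while uncovered:
        # Greedily pick the configuration that covers the most uncovered pairs.
        best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
        gained = pairs_of(best) & uncovered
        if not gained:
            break
        chosen.append(best)
        uncovered -= gained
    return candidates, chosen

if __name__ == "__main__":
    exhaustive, pairwise = greedy_pairwise(parameters)
    print(f"exhaustive: {len(exhaustive)} configurations")
    print(f"pairwise:   {len(pairwise)} configurations")
    for config in pairwise:
        print(config)
```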
  • 42. 4/10/2014 40 © 2014 LogiGear Corporation. All Rights Reserved Grail 3: Specification Level, choosing actions Scope of the test determines the specification level As high level as appropriate, as few arguments as possible − Use default values for non-relevant arguments Clear names (verb + noun usually works well) − to standardize action names: standardize both the verbs and the nouns, so "check customer" versus "verify client" (or vice versa) − tests are not C++ code: avoid "technical habits", like mixed case and (worse) underscores Manage the Actions Document the Actions By-product of the test design © 2014 LogiGear Corporation. All Rights Reserved Case: American Bank Project for a new teller system Large, state of the art Many system releases, many adjustments Need for very high level of automation Over 1 million test lines, in over 650 test modules Initially little attention paid to "holy grails" − UI and functional tests in the same modules − virtually unmaintainable, came close to killing the project − test design forced upon the team by a powerful stakeholder who did not care much for methods... Emergency re-organization of the test modules − after system changes the tests would run again within a day
  • 43. 4/10/2014 41 © 2014 LogiGear Corporation. All Rights Reserved Example of using actions In this real world example the first "sequence number" for teller transactions for a given day is retrieved, using a search function • the "#" means an expression, in this case a variable with today's business date • the ">>" means: assign to a variable for use later on in the test key type key {F7} type key 3 page tab locate page tab Scan Criteria text check breadcrumb general functions > search window control value select search scan direction Backward window control value enter value search business date match # bus date source control click search go window control variable get search results sequence number >> seq num . . . © 2014 LogiGear Corporation. All Rights Reserved variable get sequence number >> seq num Example of using actions In this real world example the first "sequence number" for teller transactions for a given day is retrieved, using a search function • the "#" means an expression, in this case a variable with today's business date • the ">>" means: assign to a variable for use later on in the test
  • 44. 4/10/2014 42 © 2014 LogiGear Corporation. All Rights Reserved Low-level, high-level, mid-level actions Low-level: detailed interaction with the UI (or API) − generic, do not show any functional or business logic − examples: "click", "expand tree node", "select menu" High-level: represent a business function specific to the scope of the test − hide the interaction − examples: "enter customer", "rent car", "check balance" Mid-level: auxiliary actions that represent common sequences of low level actions − usually to wrap a form or dialog − greatly enhance maintainability − example: "enter address fields" enter customer enter address fields enter select set . . .. . . © 2014 LogiGear Corporation. All Rights Reserved Identifying controls Identify windows and controls, and assign names to them These names encapsulate the properties that the tool can use to identify the windows and controls when executing the tests
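A minimal sketch of the three action levels described above, in Python and outside any specific tool; all action and field names are hypothetical, and the low-level stubs are deliberately left empty.

```python
# High-level action: a business function, specific to the scope of the test.
def enter_customer(name, street, city, zip_code):
    enter("customer name", name)
    enter_address_fields(street, city, zip_code)
    click("save")

# Mid-level action: wraps one form, so a layout change only hits this function.
def enter_address_fields(street, city, zip_code):
    enter("street", street)
    enter("city", city)
    enter("zip", zip_code)

# Low-level actions: generic UI interaction, no business logic at all.
def enter(field, value):
    ...  # e.g. resolve the field through the interface map and type the value

def click(control):
    ...  # e.g. resolve the control through the interface map and click it
```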
  • 45. 4/10/2014 43 © 2014 LogiGear Corporation. All Rights Reserved Mapping the interface An interface mapping (common in test tools) will map windows and controls to names When the interface of an application changes, you only have to update this in one place The interface mapping is a key step in your automation success, allocate time to design it well INTERFACE ENTITY library interface entity setting title {.*Music Library} ta name ta class label interface element title text Title: interface element artist text Artist: interface element file size text File size (Kb): ta name ta class position interface element playing time text textbox 4 interface element file type text textbox 5 interface element bitrate text textbox 6 ta name ta class position interface element music treeview treeview 1 © 2014 LogiGear Corporation. All Rights Reserved Some Tips to Get Stable Automation Make the system under test automation-friendly − consider this a key requirement ("must have") − screen elements have stable identifying properties − white box access to data, statuses, conditions, etc Use "active" timing, don't use hard coded waits Test the automation, in particular complex actions − before running tests with them Keep an eye on the test design
  • 46. 4/10/2014 44 © 2014 LogiGear Corporation. All Rights Reserved Look for properties a human user can't see, but a test tool can This approach can lead to speedier and more stable automation − interface mapping is often a bottleneck, and source of maintenance problems − with predefined identifying property values, the interface map can be created without "spy" tools − not sensitive to changes in the system under test − not sensitive to languages and localizations Examples: − "id" attribute for HTML elements − "name" field for Java controls − "AccessibleName" or "Automation ID" properties in .Net controls (see below) Automation-friendly design: hidden properties © 2014 LogiGear Corporation. All Rights Reserved Mapping the interface using hidden identifiers Instead of positions or language dependent labels, an internal property "automation id" has been used The interface definition will be less dependent on modifications in the UI of the application under test If the information can be agreed upon with the developers, for example in an agile team, it can be entered (or pasted) manually and early on INTERFACE ENTITY library interface entity setting automation id MusicLibraryWindow ta name ta class automation id interface element title text TitleTextBox interface element artist text SongArtistTextBox interface element file size text SizeTextBox interface element playing time text TimeTextBox interface element file type text TypeTextBox interface element bitrate text BitrateTextBox ta name ta class automation id interface element music treeview MusicTreeView
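A minimal sketch of the same mapping idea, assuming a web application under test and Selenium WebDriver; the logical names and identifiers echo the example above, while the URL and everything else are hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Interface map: logical names on the left, stable hidden identifiers on the right.
# If the UI changes, only this dictionary needs updating, not the tests.
LIBRARY_WINDOW = {
    "title":     (By.ID, "TitleTextBox"),
    "artist":    (By.ID, "SongArtistTextBox"),
    "file size": (By.ID, "SizeTextBox"),
    "bitrate":   (By.ID, "BitrateTextBox"),
}

def enter_value(driver, element_name, value):
    # Low-level action: resolve the logical name through the map, then type a value.
    locator = LIBRARY_WINDOW[element_name]
    field = driver.find_element(*locator)
    field.clear()
    field.send_keys(value)

if __name__ == "__main__":
    driver = webdriver.Chrome()                  # assumes a local chromedriver
    driver.get("http://localhost:8080/library")  # hypothetical AUT URL
    enter_value(driver, "title", "Sample Song")
    driver.quit()
```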
  • 47. 4/10/2014 45 © 2014 LogiGear Corporation. All Rights Reserved Active Timing Passive timing − wait a set amount of time − in large scale testing, try to avoid passive timing altogether: • if the wait is too short, the test will be interrupted • if the wait is too long, time is wasted Active timing − wait for a measurable event − usually the wait is up to a generous maximum time − common example: wait for a window or control to appear (usually the test tool will do this for you) Even if not obvious, find something to wait for... Involve developers if needed − relatively easy in an agile team, but also in traditional projects, give this priority If using a waiting loop − make sure to use a "sleep" function in each cycle that frees up the processor (giving the AUT time to respond) − wait for an end time, rather than a set number of cycles © 2014 LogiGear Corporation. All Rights Reserved Things to wait for... Wait for the last control or element to load − developers can help identify which one that is Non-UI criteria − API function − existence of a file Criteria added in development specifically for this purpose, like: − "disabling" big slow controls (like lists or trees) until they're done loading − API functions or UI window or control properties Use a "delta" approach: − every wait cycle, test if there was a change; if no change, assume that the loading time is over: − examples of changes: • the controls on a window • count of items in a list • size of a file (like a log file)
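One possible shape for an active wait with a "delta" check, sketched in Python; the probe, the interval and the maximum are assumptions, not prescriptions.

```python
import time

def wait_until_stable(probe, max_wait=30.0, interval=0.1, quiet_cycles=3):
    """Active timing: poll a cheap probe of the AUT (e.g. count of items in a
    list, size of a log file) and consider loading finished once it stops
    changing, or give up when a generous maximum wait has passed."""
    deadline = time.monotonic() + max_wait
    last_value = probe()
    unchanged = 0
    while time.monotonic() < deadline:
        time.sleep(interval)          # frees up the processor for the AUT
        value = probe()
        if value == last_value:
            unchanged += 1
            if unchanged >= quiet_cycles:
                return True           # no change for a few cycles: assume loaded
        else:
            unchanged = 0
            last_value = value
    return False                      # report an automation error here, not a fail
```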
  • 48. 4/10/2014 46 © 2014 LogiGear Corporation. All Rights Reserved Alternatives to UI automation ("non-GUI") A GUI (Graphical User Interface) is only one example of an interface for interaction with a system under test Examples − HTTP and XML based interfaces, like REST − application programming interfaces (API’s) − embedded software − protocols − files, batches − databases − command line interfaces (CLI’s) − multi-media − mobile devices In many cases non-GUI automation is used since there simply is no GUI, but it can also often speed things up: − tends to be more straightforward technically, little effort needed to build up or maintain − once it works, it tends to work much faster and more stably than GUI automation In BIG testing projects routinely: − identify which non-GUI alternatives are available − as part of test planning: identify which tests qualify for non-GUI automation (a small API-level sketch follows after the tool overview below) © 2014 LogiGear Corporation. All Rights Reserved Tools that can help manage BIG projects Application Lifecycle Management (ALM) − abundant now, mainly on the wings of agile − very good for control, team cooperation, and traceability − often relate to IDE's (like Microsoft TFS and Visual Studio) − examples: Rally, Jira, TFS Test Managers − as separate tools on their way out − morphing into or replaced by ALM options − examples: HP Quality Center, Microsoft Test Manager Test development and automation tools − develop and/or automate tests • these are not the same, automation tools are not always so good for test development − examples are HP Quick Test Pro, Borland Silk, Selenium, FitNesse, Microsoft Coded UI, and LogiGear's TestArchitect and TestArchitect for Visual Studio (our own products) Build tools − succeed the traditional "make" tools − in particular "continuous build" tools combine "make" functionality with source control systems to rebuild components that have changed, either continuously or on set times, like nightly − can very well also run related tests (unit and functional), and act on the results (stop build, report, etc) − examples: Hudson, Jenkins, TFS Bug trackers − not only register issues, but also facilitate their follow up, with workflow features − often also part of other tools, and tend to get absorbed now by the ALMs − Examples: BugZilla, Mantis, Trac
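The API-level sketch promised above: a minimal non-GUI test in Python using the requests library; the endpoint, payload fields and expected responses are hypothetical.

```python
import requests

BASE_URL = "http://localhost:8080/api"   # hypothetical endpoint of the AUT

def test_create_and_fetch_customer():
    # Drive the system under test through its HTTP interface instead of its GUI:
    # typically faster, more stable, and no interface mapping to maintain.
    new_customer = {"first": "John", "last": "Doe"}
    created = requests.post(f"{BASE_URL}/customers", json=new_customer, timeout=10)
    assert created.status_code == 201
    customer_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/customers/{customer_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["last"] == "Doe"

if __name__ == "__main__":
    test_create_and_fetch_customer()
```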
  • 49. 4/10/2014 47 © 2014 LogiGear Corporation. All Rights Reserved Tooling and Traceability Reference item (ALM item, req, code module, ...) Test Objective Test Case Execution Result Test Module Bug, issue ALM, IDE, Project Mgr, Req Mgr Test Development Tool Automation Tool Execution Manager Continuous Build Tool Lab manager Issue Tracker ALM Testing Trace back © 2014 LogiGear Corporation. All Rights Reserved Test Execution Have an explicit approach for when and how to execute which tests Having a good high level test design will help to organize this Execution can be selective or integral − unit tests are typically executed selectively, possibly automatically based on code changes in a system like SVN or TFS − for functional tests, decisions are needed: • selective execution will be quicker and more efficient • integral execution may catch more issues ("bonus bugs") • generally extensive functional test execution will be related to releases, rather than code check ins − the ability to run "big testing" efficiently may determine how much can be done
  • 50. 4/10/2014 48 © 2014 LogiGear Corporation. All Rights Reserved Environments, configurations Many factors can influence details of automation − language, localization − hardware − version of the system under test − system components, like OS or browser Test design can reflect these − certain test modules are more general − others are specific, for example for a language But for tests that do not care about the differences, the automation just needs to "deal" with them − shield them from the tests © 2014 LogiGear Corporation. All Rights Reserved Capture variations of the system under test in the actions and interface definitions, rather than in the tests (unless relevant there). Can be a feature in a test playback tool, or something you do with a global variable or setting. Variation Variation Variation "Variations" "Master Switch" Actions, Interface Definitions . . .
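One possible way to implement such a "master switch" in plain Python (a test tool may offer this as a built-in feature instead); the variation names and interface details below are hypothetical.

```python
VARIATION = "french"   # the "master switch", set once per test run

# Variation-specific details live with the interface definitions and actions,
# so the tests themselves stay identical across languages and configurations.
INTERFACE = {
    "default": {"login button": "Log in", "user field": "username"},
    "french":  {"login button": "Connexion"},
    "german":  {"login button": "Anmelden"},
}

def resolve(name):
    # Look in the active variation first, then fall back to the default definition.
    return INTERFACE.get(VARIATION, {}).get(name, INTERFACE["default"][name])

def login(enter, click, user):
    # 'enter' and 'click' stand in for low-level actions of the automation tool.
    enter(resolve("user field"), user)
    click(resolve("login button"))
```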
  • 51. 4/10/2014 49 © 2014 LogiGear Corporation. All Rights Reserved Possible set up of variations linked variation keyworded variation Specify for example in a dialog when you start an execution: © 2014 LogiGear Corporation. All Rights Reserved Test Environments Physical • hardware • infrastructure • location • . . . Software • programs • data models • protocols • . . . Data • initial data • parameters / tables • . . . • costs money • can be scarce • configurations • availability • manageability
  • 52. 4/10/2014 51 © 2014 LogiGear Corporation. All Rights Reserved Dealing with data Constructed data is easier to manage − can use automation to generate it, and to enter it in the environment − result of test analysis and design, reflecting "interesting" situations − however, fewer "surprises": real-life situations that were not foreseen Real-world data is challenging to organize − make it a project, or task, in itself − make absolutely sure to deal with privacy, security and legal aspects appropriately • study this, ask advice • apply appropriate "scrubbing" Consider using automation to select data for a test − set criteria ("need a male older than 50, married, living in Denver"), query for matching cases, and select one randomly (if possible a different one each run) − this approach will introduce variation and unexpectedness, making automated tests stronger and more interesting © 2014 LogiGear Corporation. All Rights Reserved Unattended testing... When a test cannot pass, it can be: − a difference between expected and recorded values or behavior, as a result of a check designed by the tester: this is a fail − the automation encounters a problem, like a window or control doesn't show, that is not part of a check: this is an error An error can disrupt the test flow, and you may want to catch and handle it properly: − skip smaller or larger parts of the ongoing test − bring the system back in a known state (typically: close any open windows, go to the main screen) − make sure the report clearly indicates these kinds of problems, to avoid false positives − example: an "on error action" that executes a predefined action that will do recovery However, it is better to avoid these situations − a lot of effort needed for unattended testing should raise questions about test design or quality of the automation
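A minimal sketch of the fail versus error distinction with an "on error" recovery hook, assuming a simple Python runner; the recovery callable and the test module contents are hypothetical.

```python
def run_test_module(test_cases, recover):
    """Run one test module unattended, distinguishing fails from errors."""
    results = {"pass": 0, "fail": 0, "error": 0}
    for test_case in test_cases:
        try:
            test_case()
            results["pass"] += 1
        except AssertionError as fail:
            # A check designed by the tester did not match: this is a fail.
            results["fail"] += 1
            print(f"FAIL  {test_case.__name__}: {fail}")
        except Exception as error:
            # The automation itself hit a problem (missing window, timeout, ...):
            # report it as an error, not a fail, to avoid false positives.
            results["error"] += 1
            print(f"ERROR {test_case.__name__}: {error}")
            recover()   # e.g. close open windows, return to the main screen
    return results
```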
  • 53. 4/10/2014 51 © 2014 LogiGear Corporation. All Rights Reserved "Known bug" problem Not uncommon in large scale systems − typically related to a version of the system under test A known bug may: − generate fails you want to ignore, also in statistics − throw off automation One possible approach, a "known bug" marker, so you filter out "new" issues If many known-bug situations occur, take another look at your high level test design © 2014 LogiGear Corporation. All Rights Reserved Virtualization Virtual machines rather than physical machines − allow "guest" systems to operate on a "host" system − host can be Windows, Linux, etc, but also a specialized "hypervisor" − the hypervisor can be "hosted" or "bare metal" Main providers: − VMWare: ESX and ESXi − Microsoft: Hyper-V − Oracle/Sun: Virtual Box − Citrix: Xen (open source) Hardware support gets common now − processor, chipset, i/o − Like Intel's i7/Xeon For most testing purposes you need virtual clients, not virtual servers − most offerings in the market currently target virtual servers, particularly data centers Virtual clients will become more mainstream with the coming of VM's as part of regular operating systems − Windows 8: Hyper-V − Linux: KVM
  • 54. 4/10/2014 53 © 2014 LogiGear Corporation. All Rights Reserved Virtualization, a tester's dream... In particular for functional testing Much easier to define and create needed configurations − you basically just need storage − managing this is your next challenge One stored configuration can be re-used over and over again The VM can always start "fresh", in particular with − fresh base data (either server or client) − specified state, for example to repeat a particular problematic automation situation Can take "snapshots" of situations, for analysis of problems Can use automation itself to select and start/stop suitable VM's − for example using actions for this − or letting an overnight or continuous build take care of this © 2014 LogiGear Corporation. All Rights Reserved Virtualization, bad dream? Performance, response times, capacities Virtual machine latency can add timing problems − see next slide − can be derailing in big test runs Management of images − images can be large, and difficult to store and move around • there can be many, with numbers growing combinatorially • configuration in the VM can have an impact, like fixed/growing virtual disks − distinguish between managed configurations and sandboxes − define ownership, organize it − IT may be the one giving out (running) VM's, restricting your flexibility Managing running tests in virtual machines can take additional effort on top of managing the VM's themselves − with the luxury of having VM's the number of executing machines can increase rapidly − one approach: let longer running tests report their progress to a central monitoring service (various tools have features for this)
  • 55. 4/10/2014 54 © 2014 LogiGear Corporation. All Rights Reserved Virtualization: "time is relative" Consider this waiting time loop, typical for a test script: − endTime = currentTime + maxWait − while currentTime < endTime, wait in 100 millisecond intervals When the physical machine overloads, VM's can get slow or have drop-outs, and endTime may pass for reasons other than AUT latency − GetLocalTime will suffer from the latency − GetTickCount is probably better, but known for being unreliable on VM's Therefore tests that run smoothly on physical machines may not consistently do so on VM's. The timing problems are not easy to predict Possible approaches: − in general: be generous with maximum wait times if you can − don't put too many virtual machines on a physical box − consider a compensation algorithm, for example using both tick count and clock time © 2014 LogiGear Corporation. All Rights Reserved Virtual machines, capacity Key to pricing is the number of VM's that can run in parallel on a physical machine An automated test execution will typically keep a VM busier than human use Factors in determining VM/PM ratio: − memory, for guest OS, AUT, test tooling − storage devices (physical devices, not disk images) − processors, processor cores − specific hardware support (becoming more common) • processor, chipset, I/O We started regression with 140 VMs. Very slow performance of Citrix VM clients.
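One possible compensation approach for the waiting loop on the "time is relative" slide above, sketched in Python with the cross-platform monotonic clock standing in for GetTickCount; requiring both clocks to agree before giving up is an illustration, not a prescription.

```python
import time

def wait_for(condition, max_wait=60.0, interval=0.1):
    """Generous active wait that tolerates VM clock hiccups by tracking both the
    monotonic clock and wall-clock time, and only giving up when both agree the
    maximum wait has passed."""
    start_mono = time.monotonic()
    start_wall = time.time()
    while True:
        if condition():
            return True
        mono_elapsed = time.monotonic() - start_mono
        wall_elapsed = time.time() - start_wall
        # If one clock races ahead because the VM stalled, keep waiting until
        # the slower clock also exceeds the maximum.
        if min(mono_elapsed, wall_elapsed) > max_wait:
            return False
        time.sleep(interval)
```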
  • 56. 4/10/2014 55 © 2014 LogiGear Corporation. All Rights Reserved Building up virtualization Pay attention to pricing: − beefed up hardware can increase the VM's/box ratio, but at a price − software can be expensive depending on features that you may not need In a large organization, virtual machines are probably available − make sure to allocate in time (which can be long before you get there with your sprints) − keep in mind the capacity requirements Logical and physical management − which images: the wealth of possible images can quickly make it hard to see the forest for the trees − physical management of infrastructure is beyond this tutorial Minimum requirement: snapshots/images − freeware versions don't always carry this feature − allow you to set up: OS, environment, AUT, tooling, but also: data, states © 2014 LogiGear Corporation. All Rights Reserved Infrastructure For large scale test execution this needs attention − physical infrastructure, but also how to use it Also consider managing infrastructure and test execution as a separate task − in or out of the team − avoid slowing down development (of system, test and/or automation)
  • 57. 4/10/2014 56 © 2014 LogiGear Corporation. All Rights Reserved Remote execution, servers Allowing execution separately from the machines the testers and automation engineers are working on increases scalability Large scale test execution, in particular with VM's, likes to have: − lots of processing power, lots of cores − lots of memory Test execution tends to care less about: − storage − networking Test execution facilities tend to become a bottleneck very quickly in big testing projects − the teams can use whatever they can get First step up: give team members a second machine Second step up: use servers, users coordinate their use of them Third step up: major infrastructures with organized allocation © 2014 LogiGear Corporation. All Rights Reserved Tower Servers Smaller shops (smaller companies, departments) Affordable, simple, first step up from client execution Not very scalable when the projects get larger
  • 58. 4/10/2014 56 © 2014 LogiGear Corporation. All Rights Reserved Rack Servers Well scalable Pricing not unlike tower servers Tend to need more mature IT expertise © 2014 LogiGear Corporation. All Rights Reserved Server Blades Big league infrastructure, high density, very scalable Tends to be pricey, use when space and energy matters Usually out of sight for you and your team
  • 59. 4/10/2014 58 © 2014 LogiGear Corporation. All Rights Reserved Cloud Cloud can be target of testing − normal tests, plus cloud specific tests • functional, load, response times − from multiple locations − moving production through data centers Cloud can be host of test execution − considerations can be economical or organizational − providers offer imaging facilities, similar to virtual machines − make sure machines are rented and returned efficiently Public cloud providers like EC2 offer API's, so your automation can automatically allocate and release them − be careful, software bugs can have cost consequences − for example, consider having a second automation process to double-check cloud machines have been released after a set time Note: public cloud is not taking off as fast as expected; cloud services and private clouds are taking off much faster © 2014 LogiGear Corporation. All Rights Reserved Cloud Providers Source: Jack of All Clouds, January 2011 http://www.jackofallclouds.com/2011/01/state-of-the-cloud-january-201/
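A hedged sketch of the "second process" double-check mentioned above, using the boto3 EC2 client; the Purpose=test-run tag and the four-hour limit are assumptions, and pagination and error handling are left out for brevity.

```python
import datetime
import boto3

MAX_AGE = datetime.timedelta(hours=4)   # assumed upper bound for one test run

def reap_forgotten_test_instances():
    """Independent clean-up job that terminates test instances which were not
    released after a set time, so a bug in the test automation cannot keep
    paid cloud machines running indefinitely."""
    ec2 = boto3.client("ec2")
    response = ec2.describe_instances(Filters=[
        {"Name": "tag:Purpose", "Values": ["test-run"]},           # assumed tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    now = datetime.datetime.now(datetime.timezone.utc)
    expired = []
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            if now - instance["LaunchTime"] > MAX_AGE:
                expired.append(instance["InstanceId"])
    if expired:
        ec2.terminate_instances(InstanceIds=expired)
    return expired
```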
  • 60. 4/10/2014 59 © 2014 LogiGear Corporation. All Rights Reserved Cloud growth Growth of public clouds not as big as expected Cost benefits not necessarily convincing − low startup cost, but long ongoing cost See also: news.cnet.com/8301-13556_3-20063361-61.html source: IDC forecast, 2010 © 2014 LogiGear Corporation. All Rights Reserved Cloud, example pricing, hourly rates Source: Amazon EC2 (my interpretation, actual prices may vary)
Instance type | Linux | Windows | Capacity
Small | 0.085 | 0.12 | 1.7 GB, 1 core (32 bits)
Large | 0.34 | 0.48 | 7.5 GB, 4 cores
Extra Large | 0.68 | 0.96 | 15 GB, 8 cores
High memory, Extra Large | 0.50 | 0.62 | 17.1 GB, 6.5 core
High memory, Double Extra Large | 1.00 | 1.24 | 34.2 GB, 13 cores
High memory, Quadruple Extra Large | 2.00 | 2.48 | 68.4 GB, 26 cores
High CPU, Medium | 0.17 | 0.29 | 1.7 GB, 5 core (32 bits)
High CPU, Extra Large | 0.68 | 1.16 | 7 GB, 20 cores
  • 61. 4/10/2014 60 © 2014 LogiGear Corporation. All Rights Reserved Cloud, example economy Not counting possible use of VM's within the buy option Also not counting: additional cost of ownership elements for owning or for the cloud (like IT management, contract and usage management) Impressions: − cloud could fit well for bursty testing needs, which is often the case − for full continuous, or very frequent, testing: consider buying − hybrid models may fit many big-testing situations: own a base capacity, rent more during peak use periods
 | small | large | extra
Windows (hourly) | $0.12 | $0.48 | $0.96
buy (estimate) | $300 | $650 | $900
hours to break even | 2,500 | 1,354 | 938
months (24 / 7) | 3.4 | 1.8 | 1.3
© 2014 LogiGear Corporation. All Rights Reserved Data centers can go down However, disruption could have been minimized by using multiple data centers
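The break-even rows in the table above follow from dividing the purchase estimate by the hourly rate; a small Python sketch of the arithmetic, approximating a month of 24/7 use as 744 hours (31 days), which reproduces the slide's numbers.

```python
# Break-even arithmetic behind the table above: hourly rates and purchase
# estimates as on the slide, 24/7 use approximated as 744 hours per month.
options = {
    "small": {"hourly": 0.12, "buy": 300},
    "large": {"hourly": 0.48, "buy": 650},
    "extra": {"hourly": 0.96, "buy": 900},
}

for name, option in options.items():
    hours = option["buy"] / option["hourly"]
    months = hours / 744
    print(f"{name}: break even after {hours:,.0f} hours, about {months:.1f} months")
```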
  • 62. 4/10/2014 60 © 2014 LogiGear Corporation. All Rights Reserved Data centers can go down This time, it did involve multiple data centers . . . © 2014 LogiGear Corporation. All Rights Reserved Data centers can go down Service providers can occasionally go down too
  • 63. 4/10/2014 61 © 2014 LogiGear Corporation. All Rights Reserved Cloud, usage for special testing needs Multi-region testing − Amazon for example has several regions • US East, Northern Virginia • US West, Oregon, Northern California • EU, Ireland • Asia Pacific, Singapore, Tokyo • South America, Sao Paulo − be careful that data transfers between regions costs money (0.01/GB) Load generation − example: "JMeter In The Cloud" • based on the JMeter load test tool • uses Amazon AMI's for the slave machines • allows to distribute the AMI's in the different regions of Amazon • see more here: aws.amazon.com/amis/jmeter-in-the-cloud-a-cloud-based-load-testing-environment © 2014 LogiGear Corporation. All Rights Reserved Questions for Infrastructure What kind of infrastructure does your organization use for testing? What is the role of virtualization, now or in the future? Are you using a private or a public cloud for testing?
  • 64. 4/10/2014 62 © 2014 LogiGear Corporation. All Rights Reserved Example of a cloud system under test source: Windows Azure reference platform © 2014 LogiGear Corporation. All Rights Reserved Approaches Automation does not have to be black box − for very big systems, a separate black box automation effort may not be efficient − and building and keeping lab situations might be cumbersome − some simple hooks can greatly help already − remember... this is about automation, not test design. Make testability part of requirements and architecture − a key question should not just be "how do I design this", but "how do I test this" (test design, automation) − some cloud/web systems are changed frequently, and tested "live" • "Testing in Production (TiP)" − allow redirection of some or all traffic through another version of a component or layer Example: reverse proxies enabling A/B testing see also: Ken Johnston's chapter in the book by Dorothy Graham and Mark Fewster, and his keynote at StarWest 2012
  • 65. 4/10/2014 63 © 2014 LogiGear Corporation. All Rights Reserved A/B testing with a reverse proxy Watch your test design, easy to drown in technical solutions only B could be a real-life user or also a keyword driven test machine A/B testing means part of traffic is routed through a different server or component (see if it works, and/or how users react) A similar strategy could be done at any component level A A B Reverse Proxy Users Servers A B newcurrent A B © 2014 LogiGear Corporation. All Rights Reserved Organization Much of the success is gained or lost in how you organize the process − part of the teams − who does test design − who does automation − what to outsource, what to keep in-house Write a plan of approach for the test development and automation − scope, assumptions, risks, planning − methods, best practices − tools, technologies, architecture − stake holders, including roles and processes for input and approvals − team − . . . Assemble the right resources − testers, lead testers − automation engineer(s) − managers, ambassadors, ... Test design is a skill . . . Automation is a skill . . . Management is a skill . . . . . . and those skills are different . . .
  • 66. 4/10/2014 64 © 2014 LogiGear Corporation. All Rights Reserved Industrial Organization Large scale testing can move from a "design" to a "production" focus − mostly applies to test execution, but also seen for test development − this is not black and white, both paradigms can occur in the same projects − this is often easier to outsource than development A production organization is different from a development organization − this is not unique to software − different professional culture − emphasis more on delivery and scale, "thinking big" − discipline rather than creativity, "get stuff done" − activities are like planning, control, logistics, information © 2014 LogiGear Corporation. All Rights Reserved Tasks in "production" (test execution) Keeping the tests running Allocating resources Respond to hiccups Analyze and address automation issues Address fails or other testing outcomes − including dealing with "known bugs" − part of a bigger team
  • 67. 4/10/2014 65 © 2014 LogiGear Corporation. All Rights Reserved Stake Holders (internal and external): Test Development, Test Automation, Technology/Infrastructure, Production, Marketing/Sales, System Development, End User Departments, Quality Assurance, Management, After Sales/Help Desk, Customers, Vendors, Government Agencies, Publicity © 2014 LogiGear Corporation. All Rights Reserved Team roles, examples Test development Automation Planning and managing the test runs Managing environments Managing infrastructure Dealing with stakeholders Analysis of results, and follow up Reporting
  • 68. 4/10/2014 66 © 2014 LogiGear Corporation. All Rights Reserved Test Development and Automation in sprints Test Module Definition (optional) Test Module Development Interface Definition Action Automation Test Execution Sprint Products Product Backlog Test re-use Automation re-use product owner team prod owner & team User stories Documentation Domain understanding Acceptance Criteria PO Questions Situations Relations Agile life cycle Test development Main Level Test Modules Interaction Test Modules Cross over Test Modules © 2014 LogiGear Corporation. All Rights Reserved Test automation in sprints Try to keep the main test modules at a similar level as the user stories and acceptance criteria Aim for "sprint + zero", meaning: try to get test development and automation "done" in the same sprint, not the next one − next one means work clutters up, part of the team is not working on the same sprint, work is done double (manually and automated), ... Make sure you can do the interface mapping by hand (using developer provided identifications) − can do earlier, before the UI is finalized, and − recording of actions (not tests) will go better Also plan for additional test modules: − low-level testing of the interaction with the system under test (like UI's) − crossing over to other parts of the system under test There should be agreement on the method(s) for testing and automation The team should include the skills and experience needed for automated testing and the approach(es) taken for it
  • 69. 4/10/2014 67 © 2014 LogiGear Corporation. All Rights Reserved Fitting in sprints Agree on the approach: − questions like does "done" include tests developed and automated? − do we see testing and automation as distinguishable tasks and skillsets − is testability a requirement for the software Create good starting conditions for a sprint: − automation technology available (like hooks, calling functions, etc) − how to deal with data and environments − understanding of subject matter, testing, automation, etc Make testing and automation part of the evaluations Address tests and automation also in hardening sprints Just like for development, use discussions with the team and product owners to deepen understanding: − also to help identify negative, alternate and unexpected situations © 2014 LogiGear Corporation. All Rights Reserved Testing as a profession "Do thorough acceptance testing, but not only by the customer" − source: "Agile Software Testing in a Large-Scale Project", Israeli Air Force Focus on tests, not development: − what can be consequences of situations and events − relieve developers Knowledge and experience with testing techniques and principles The challenge for the tester in the new era is to become a more credible professional tester, − not a pseudo programmer − part of the team Forcing a nontechnical tester to become a programmer may lose a good tester and gain a poor programmer
  • 70. 4/10/2014 68 © 2014 LogiGear Corporation. All Rights Reserved Automation is a profession too Overlaps with regular system development, but not the same Less concerned with complex code structures or algorithms More concerned with navigating through other software efficiently, dealing with control classes, obtaining information, timing, etc − if you would compare developers to "creators", automation engineers might be likened to "adventurers"... The automation engineering role can also be a consultant: − for test developers: help express tests efficiently − for system developers: how to make a system more automation friendly − an important player in innovation in automated testing © 2014 LogiGear Corporation. All Rights Reserved Questions for Organization How is your testing currently organized (who is doing what)? − test design − test development − automation − execution − assessment of release readiness Do you use agile? If yes, is there a role for a test professional? And for an automation professional?
  • 71. 4/10/2014 69 © 2014 LogiGear Corporation. All Rights Reserved Reporting Aim at needs: − avoid lengthy automated reports, have bottom line numbers − reports for stake holders − reporting for the team Reporting for a big testing project is about: − test and automation progress − production (running the tests) − results (aimed at system under test) Teams need (relevant) details − what happened, reproducibility, ... − either the tests, the automation, or the system under test − overall situations, with an ability to "drill down" to problem areas Management needs: − status, expectations, issues (realistic! bad news matters, you get punished for not telling) − bottom lines, plan versus reality confrontation − dates, efforts, used resources, costs, run times, ... − never allow planned numbers or dates to be "updated" Also for reporting, test organization is a key driver © 2014 LogiGear Corporation. All Rights Reserved War rooms Helpful if response times are critical and there is a need for cooperation towards the same goal − similar grounds as for agile scrum rooms Set up at critical times, like before important deadlines, or during critical releases Can temporarily bring together multiple parties that normally are not co-workers − like competitor vendors Pay attention to physical conditions − machines, monitors, white boards, meeting places, headsets, ... − food, drinks, ... The test execution cycle should match the needs of the war room approach − fast turnarounds − effortless − completeness − selective or integral See also: "Your Game is Live, Now What?", Jane Fraser, Electronic Arts
  • 72. 4/10/2014 70 © 2014 LogiGear Corporation. All Rights Reserved Globalization.... © 2014 LogiGear Corporation. All Rights Reserved Main Challenges Other countries Distances Time differences
  • 73. 4/10/2014 71 © 2014 LogiGear Corporation. All Rights Reserved Globalization Three Challenges: − other countries, other cultures − geographic distances − time differences Seven "Patterns": − "Solution" − "Push Back" − "Time Pressure" − "Surprises" − "Ownership" − "Mythical Man Month" − "Cooperation" © 2014 LogiGear Corporation. All Rights Reserved Challenge: Other Country
  • 74. 4/10/2014 72 © 2014 LogiGear Corporation. All Rights Reserved Other Country Differences in culture − more on the next slide... Different languages, and accents Differences in education − style, orientation and contents − position of critical thinking, factual knowledge, practice, theory,... − US, British, French, Asian, ... Differences in circumstances − demographics − economy, infrastructure − politics Apprehension on-shore and off-shore about job security doesn't help in projects − management responsibility: understand your strategic intentions, and their consequences, and clarify them − be realistic in cost and benefit expectations © 2014 LogiGear Corporation. All Rights Reserved More on Culture... Regional culture. There are numerous factors: − very difficult to make general statements • many anecdotes, stories and perceptions, some are very helpful, some have limited general value • not sure on impact of regional culture (see also [Al-Ani]) − numerous factors, like history, religion, political system • e.g. valuing of: critical thinking, theory, bottom-line, relations, status, work-ethic, bad news, saying 'no' • entertaining guests, eating habits, alcohol, meat, humor, etc • position of leaders, position of women managers • mistakes can be benign and funny, but also damaging, visible or hidden, in particular perceived disrespect hurts Organizational culture − can be different from country to country, sector to sector, company to company, group to group − I feel this to be at least as strong as regional culture (see for example [Al-Ani]) − you can have at least some control over this Professional cultures − for example engineers, QA, managers, ... Some ideas to help: − get to know each other (it helps, see for example [Gotel]) − study the matter, and make adaptations
  • 77. 4/10/2014 75 © 2014 LogiGear Corporation. All Rights Reserved Challenge: Distance © 2014 LogiGear Corporation. All Rights Reserved Distance Continuous logistical challenges Lots of costs, and disruptions, for traveling Distance creates distrust and conflict − could be "normal" behavior, inherent to humans Complex coordination can create misunderstandings − on technical topics − on actions, priorities, and intentions
  • 78. 4/10/2014 76 © 2014 LogiGear Corporation. All Rights Reserved Challenge: Time difference © 2014 LogiGear Corporation. All Rights Reserved Challenge: Time difference Additional complication for communication and coordination Places a major burden on both on-shore and off-shore staff − having to work evenings and/or early mornings − potential for exhaustion, lack of relaxation, mistakes, irritation Can easily lead to loss of time at critical moments Some solutions: − manage this actively − constantly seek to optimize task and responsibility allocation − build the on-shore and off-shore organizations to match − seek ways to save meeting time, like optimal information handling
  • 79. 4/10/2014 77 © 2014 LogiGear Corporation. All Rights Reserved Effect of time difference
Test Module: “Segment Y, Default Settings” | Windows | Linux
TestArchitect 5 | ~ 4:16 m | ~ 4:28 m
TestArchitect 6 | ~ 11:00 m | ~ 8:00 m
Report from the team to the US management . . . Performance comparison TestArchitect 5 and 6
© 2014 LogiGear Corporation. All Rights Reserved Patterns Experiences seem to follow patterns − at least our own experiences do − variations are numerous, but seem to follow similar lines − the following are examples, not an exhaustive list It can help to recognize patterns quickly, and act upon them Resolutions have side-effects, can introduce new issues − for example strengthening local management means less direct contact with the project members doing the work Just about every pattern occurs in every direction − from your perspective regarding "them" − their perspective on you, or each other − sometimes equaling, sometimes mirroring
  • 80. 4/10/2014 78 © 2014 LogiGear Corporation. All Rights Reserved Pattern: "The Solution" Typical sequence of events: − the team finds a problem in running a test − the team discusses it and comes up with a "solution" − the solution: (1) creates issues, and (2) hides the real problem Better way: − define as an issue − discuss with project manager and customer © 2014 LogiGear Corporation. All Rights Reserved Pattern: "Push Back" US side, or customer, gives bad direction Team doesn't like it, but feels obliged to follow orders The result is disappointing Team is blamed − and will speak up even less next time Better way: − discuss with the principal/customer at multiple levels • strategic about direction, operational day-to-day − empower and encourage the team to speak up − write plans of approach, and reports
  • 81. 4/10/2014 79 © 2014 LogiGear Corporation. All Rights Reserved Pattern: "Time Pressure" Deadline must be met − no matter what − use over-time − "failure is not an option" Deadlines are sometimes real, sometimes not − become a routine on the US side − easy to pressure over email − very difficult for a non-empowered team to push back − risk: inflation of urgency Better way: − good planning − proper weighing of deadlines and priorities − frequent reporting − local management © 2014 LogiGear Corporation. All Rights Reserved Pattern: "Surprises" Good news travels better than bad news... − should be the other way around − the "cover up": "let's fix, no need to tell...." − over time: needing bigger cover ups to conceal smaller ones − not unique to off-shoring, but more difficult to detect and deal with Once a surprise happens: − you will feel frustrated, and betrayed − fix the problems, point out the consequences of hiding, avoid screaming and flaming Better ways: − agree: NO SURPRISES!! − emphasize again and again − train against this − continuously manage, point out − the magic word: transparency SURPRISES
  • 82. 4/10/2014 80 © 2014 LogiGear Corporation. All Rights Reserved Pattern: "Ownership" Shared responsibility is no responsibility Effort-based versus result-based On-shore players feel the off-shore team has a result responsibility Off-shore team members feel an effort-based responsibility ("work hard") Better way: − clear responsibilities and expectations − on-shore ownership for quality control of the system under test • and therefore the tests − off-shore ownership of producing good tests and good automation − empower according to ownership © 2014 LogiGear Corporation. All Rights Reserved Pattern: "Mythical Man Month" Fred Brooks' classic book, "Mythical man month": − "Assigning more programmers to a project running behind schedule will make it even later" − "The bearing of a child takes nine months, no matter how many women are assigned" In test automation, there must be clear ownership of: − test design (not just cranking out test cases) − automation, which is a different skill and interest Assign at least the following roles: − project lead, owns quality and schedule − test lead: owns test design, coaches and coordinates the other testers − automation: make the actions work (assuming ABT, not the test cases) Define distinct career paths in: testing, automation, management
  • 83. 4/10/2014 81 © 2014 LogiGear Corporation. All Rights Reserved Pattern: "Cooperation" Communication is tedious, takes a long time Questions, questions, questions, ... − reverse: questions don't get answered For at least one side this happens in private time, which is extra annoying Misunderstandings, confusion, actions not followed up − double check apparent "crazy things" with the team before jumping to conclusions, and actions (assume the other side is not "nuts" or "dumb"...) Please understand: distance fosters conflicts − we're born that way, be ready for it Better ways: − prioritize training, coaching, preparation and planning. Saves a lot of questions... − write stuff down, use briefs, minutes − define workflows and information flows • buckets, reporting, select and use good tools − specialize meetings • table things for in-depth meetings • ask to meet internally first − be quick, no more than 30 mins © 2014 LogiGear Corporation. All Rights Reserved Cooperation