1. MAL12: QAT and Automated
Testing of Modern Apps
Andy Tinkham
Principal Lead Consultant, QAT
Level: Intermediate
2. Personal Information
• http://www.testerthoughts.com
• http://www.twitter.com/andytinkham
• andyt@magenic.com
• Office hours:
http://ohours.org/andytinkham
• Slides available at
http://slideshare.net/andytinkham
3. Previously, on Modern Apps Live…
Quality is team’s job, not just
testers’
Team needs info to make decisions
Testing activities provide information
Assign testing activities to people
best able to provide that information
4. Yesterday, we talked about the types of
information that developers can provide.
Today, we’ll talk about the information that
testers can best provide.
5. Tester-provided Information
Story of the Product
• How it can work
• How it fails
• How it might fail in ways that matter to our clients
• What problems did we find? To whom? Why?
Story of the Testing
• What we’ve done & seen testing
• Where we haven’t gone
• Where we won’t be going
• Impediments to testing
Story of the Quality of the Testing
• Why we did the testing we did
• Why this is (or isn’t) good enough
• What we need to get more information
• Risks and costs
Hat tip to Michael Bolton, DevelopSense
http://www.developsense.com/blog/2012/02/braiding-the-stories/
7. Risk
• Risks have different levels of importance
– Potential impact if problem occurs
– Likelihood of problem occurrence
• Tasks address different amounts of risk
• Schedules slip – we might not get
to do all the testing we want
• Use risk to prioritize testing efforts
- risk-based testing!
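The prioritization described above can be sketched as a simple impact × likelihood score. A minimal illustration; the 1–5 scales and the scored values are hypothetical, though the risk names come from the MyVote examples later in the talk:

```python
# Minimal risk-prioritization sketch: score = impact x likelihood.
# The 1-5 scales and the assigned values are hypothetical assumptions.
risks = [
    {"name": "Unable to create a poll", "impact": 5, "likelihood": 2},
    {"name": "Authentication service down", "impact": 4, "likelihood": 3},
    {"name": "Analysis uses wrong answer set", "impact": 5, "likelihood": 4},
]

for r in risks:
    r["score"] = r["impact"] * r["likelihood"]

# Test for the biggest risks first: highest score at the top of the queue.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["name"]}')
```

If the schedule slips, testing stops partway down this sorted list, so the highest-risk areas have already been covered.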
8. Risk-based testing
Test for the
biggest risks first
Use risks as inputs
to test design
9. Identifying risk
Technical
• Areas where path to develop unclear
• Explicit assumptions
• Foundational architecture pieces
• Areas where problems have occurred historically
Business
• Key functionality
• Differentiators used by sales
• Areas where problems have occurred historically
10. MyVote Example Risks
• Schedule risks
• Unable to create a poll
• Can’t access previously created polls
• Problems when authentication service is
down
• Analysis looks at wrong set of answers
11. Translating risk to test cases
• After prioritizing risks, begin by asking
“How can I tell if this problem occurs?”
• Explore wording to see if additional
meanings appear
– Analysis looks at WRONG set of answers
• Subset of the right answers
• Superset of the right answers plus extras
• No overlap with the right answers
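Asking "how can I tell if this problem occurs?" for the wrong-answer-set risk leads directly to checks that distinguish the meanings above. A hypothetical sketch (the function and sample sets are illustrative, not MyVote code):

```python
# Sketch: turn the "wrong set of answers" risk into distinct checks by
# classifying how an actual result set can differ from the expected set.
def classify(expected: set, actual: set) -> str:
    if actual == expected:
        return "correct"
    if actual < expected:
        return "subset (right answers missing)"
    if actual > expected:
        return "superset (right answers plus extras)"
    if not (actual & expected):
        return "no overlap with right answers"
    return "partial overlap"

expected = {"yes", "no", "maybe"}
print(classify(expected, {"yes", "no"}))                   # subset case
print(classify(expected, {"yes", "no", "maybe", "n/a"}))   # superset case
print(classify(expected, {"red", "blue"}))                 # no-overlap case
```

Each branch corresponds to one of the additional meanings uncovered by exploring the wording, and each suggests its own test case.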
12. Running Tests
• Pick tests
– Importance of risks they address
– Value of information running a test will provide
• As you progress, feedback learning into
process
– Reprioritize risks
– Revalue information
– Create new tests
– Skip tests
13. How can devs help the test team?
• Communicate risks that you see to testers
– Where are you less sure about how to develop
functionality?
– What questions did you have while you were
designing & developing?
– What assumptions did you make while coding?
• Review risks identified
– More or less serious impacts?
– Different likelihoods?
– Missing risks?
14. Risk-based testing in MyVote
• Having risks identified meant that we
could deal with time crunch
• Not everything got tested – but we used
the time we did have as effectively as we
could
• Were able to treat some risks as covered
by dev testing and focus more on other
risks
15. Risks give us a lot of insight into
what could go wrong, but how
do we address the things we
can’t predict?
16. Session-based Exploratory
Testing
• Time-boxed testing on a focused topic
• Not following pre-designed test cases
• Learning from previous tests guides next
steps
17. Session-Based Approach
• Use a single focus as a charter
– Check all the menus
– Investigate the order sorting functionality
• Set up an uninterrupted time box (generally
1-2 hours)
• Work through test ideas,
continually integrating your
learning as you design new tests
18. During the session
• Keep a record of what you do
– Written notes
– VS 2012 exploratory testing session recording
– Rapid Reporter (http://testing.gershon.info/reporter/)
• Keep lists
– Bugs found
– Additional charters to cover/things to
investigate
– Additional test ideas for this charter
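The record-keeping above can be as light as one record per session with three running lists. A minimal sketch; the structure and field names are illustrative assumptions, not the format of any real tool:

```python
# Minimal session-note sketch: one record per session, three running lists.
# Field names and example entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SessionNotes:
    charter: str
    bugs: list = field(default_factory=list)          # bugs found
    new_charters: list = field(default_factory=list)  # things to investigate later
    test_ideas: list = field(default_factory=list)    # more ideas for this charter

notes = SessionNotes(charter="Investigate the order sorting functionality")
notes.bugs.append("Sort order unstable for equal timestamps")
notes.new_charters.append("Check sorting after a locale change")
notes.test_ideas.append("Sort an empty order list")

print(f"{notes.charter}: {len(notes.bugs)} bug(s), "
      f"{len(notes.new_charters)} follow-up charter(s)")
```

The lists feed directly into the debrief: bugs go to the team, follow-up charters become candidates for the next sessions.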
19. After the session
• Debrief session for knowledge
dissemination
– [Past] What happened during the session?
– [Results] What was achieved during the session?
– [Obstacles] What got in the way of good testing?
– [Outlook] What still needs to be done?
– [Feelings] How does the tester feel about all this?
20. Planned Test Execution in
MyVote
• Each feature had a work item per platform
• Assigned work items to testers
• Each work item becomes a charter
• Debriefed after testing to share
information & review for additional
session needs
21. Reality for MyVote
• Scaled back to just a small number of
sessions due to time constraints
• Each session focused on the app on a
given platform
• No debriefing
• Results: Not as strong a testing effort as
hoped
22. Automated Testing
• Any test can be more or less automated –
it doesn’t have to be fully automated
• The key is to approach automation from a
task perspective, not a test perspective
• Possible tasks
– 1 or more tests
– 1 or more test steps
– 1 or more supporting tasks
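One way to see the task perspective: automate a single supporting task, such as generating test data, while a human (or another tool) runs the actual test. A hypothetical sketch; the data shapes and boundary values are assumptions, not MyVote's real model:

```python
# Sketch of task-level automation: automate one supporting task
# (building poll test data) rather than a whole end-to-end test.
# Titles, option counts, and field names are hypothetical.
import itertools

def build_poll_fixtures():
    """Generate poll variations a tester would otherwise type by hand."""
    titles = ["", "A", "x" * 255]   # boundary-length titles
    option_counts = [1, 2, 10]      # boundary option counts
    return [
        {"title": t, "options": [f"opt{i}" for i in range(n)]}
        for t, n in itertools.product(titles, option_counts)
    ]

fixtures = build_poll_fixtures()
print(f"{len(fixtures)} poll fixtures ready for a manual or automated session")
```

The fixture generation is fully automated; whether the polls are then exercised by hand or by script is a separate decision, which is the point of thinking in tasks rather than whole tests.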
23. Choosing tasks to automate
• Choose tasks that take advantage of
computer’s strengths rather than just
automating existing human-focused tests
• Plan automation in conjunction with rest of
test planning
• Look for access points below UI
24. Potential pitfalls
• Automation takes time
– Creation time
– Maintenance time
• Easy to shift focus from automation that
adds value & provides useful information
to automation that’s merely cool to build
25. How devs can help with
automation
• Share unit testing infrastructure and
architecture components
• Code reviews of automation code
• Pair with testers
26. Non-Functional Testing
• Not all important test cases focus on
functionality
• When identifying risks, think about things
like impacts of slow performance, lack of
usability, and lack of security
• Don’t leave non-functional testing until the
end – build in monitoring & tests from
start
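Building non-functional checks in from the start can mean nothing more than wrapping an operation with a time budget in an ordinary automated test. A hypothetical sketch; the 0.5 s budget and the stand-in operation are illustrative assumptions:

```python
# Sketch of a performance check built in from the start: wrap an
# operation with a time budget. The budget value and the fake
# operation below are hypothetical.
import time

BUDGET_SECONDS = 0.5

def analyze_poll_results():
    # Stand-in for the real operation under test.
    time.sleep(0.01)
    return {"yes": 12, "no": 8}

start = time.perf_counter()
result = analyze_poll_results()
elapsed = time.perf_counter() - start

assert elapsed < BUDGET_SECONDS, f"too slow: {elapsed:.3f}s"
print(f"analysis finished in {elapsed:.3f}s (budget {BUDGET_SECONDS}s)")
```

Run alongside the functional suite from day one, a check like this surfaces performance regressions as they happen instead of at the end of the project.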
27. Tester-provided Information
Story of the Product
• How it can work
• How it fails
• How it might fail in ways that matter to our clients
• What problems did we find? To whom? Why?
Story of the Testing
• What we’ve done & seen testing
• Where we haven’t gone
• Where we won’t be going
• Impediments to testing
Story of the Quality of the Testing
• Why we did the testing we did
• Why this is (or isn’t) good enough
• What we need to get more information
• Risks and costs
Hat tip to Michael Bolton, DevelopSense
http://www.developsense.com/blog/2012/02/braiding-the-stories/
29. URLs from the discussion after
the talk
• http://testingeducation.org – free video
lectures & slides on software testing for a
college-level black-box software testing
course
• http://testobsessed.com/wp-content/uploads/2011/04/testheuristicscheatsheetv1.pdf
– Elisabeth Hendrickson’s Test Heuristics Cheat
Sheet with lots of test ideas