In 2015, I ran a workshop at TestBash Brighton entitled "Supercharging Your Bug Reports" (also available in webinar format on the Ministry of Testing Dojo). In that class, I outlined some of the key components of an effective bug report, with the goal of helping testers to become better advocates for problems in their workplace. Fast-forward to 2017, and my world (and beliefs) have changed. I've been working in some fast-paced environments with highly-productive teams, where life moves at a pace which is largely incompatible with "traditional" bug reporting processes.
How do you effectively act as a champion for quality when one of your most visible forms of communication is taken away? In this talk, I'll share stories of my transition: how I had to adapt, the approaches that I chose to take, and evaluate how my 2017 outlook on bug reporting compares to my previous vision.
BUGS…
• Difference between perceived and desired
• Something that bugs somebody who matters
• A threat to value
• (Problems, defects, issues)
BUG REPORTS…
• A snapshot of information
• A placeholder for conversation
• Communicate within team
• Surface info to stakeholders
• To guide decisions
If a bug isn’t written in a bug report… it’s still a bug!
ZERO-TOLERANCE BUG POLICY
• YAGNI – If not now, then not later either!
• Deal with mess as it arises
• Avoids “bug tennis”
• Weekly code-freeze for release: tackle what little remains
• BUT: Hard if external parties (clients, support) want involvement
THE TOOLS TO FIX IT YOURSELF
• Domain knowledge
• Understanding of codebase/standards
• Know your personal limitations
• Know what your team is comfortable with
• Show the fix to the developer (keep the feedback cycle)
++positivity;
++respect;
++gratitude;
--blame_game;
++credibility;
GET INVOLVED EARLIER
• Get the team thinking about testing (and testability) early
• Reduce feedback/cycle time
• Dev and test, complementing each other, working in harmony
As close to the code as you can get
Mob Programming (Woody Zuill)
Shared expertise
Better understanding of what’s been tested
PRACTICAL AUTOMATION
• Doesn’t have to be a fully autonomous solution!
• Small scripts/tools/helpers
• Automate the pain points of daily life
• Lower cost (only needs to be as good as the job it’s serving)
• Recognise opportunities to practise
• Seek forgiveness, not permission
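Even something this small counts. As a sketch of the "small scripts" idea above (the scenario and function name are mine, not from the talk): a throwaway helper that generates awkward date values for manual testing, rather than looking them up by hand each sprint.

```python
# A throwaway helper, not a framework: it only needs to be as good as
# the job it's serving. This hypothetical example generates date strings
# that commonly expose date-handling bugs (month ends, leap days).
from datetime import date

def awkward_dates(year):
    """Return ISO date strings likely to trip up date-handling code."""
    candidates = [
        date(year, 1, 31),   # month-end
        date(year, 12, 31),  # year-end
    ]
    try:
        candidates.append(date(year, 2, 29))  # leap day, if it exists
    except ValueError:
        pass  # not a leap year
    return [d.isoformat() for d in candidates]

if __name__ == "__main__":
    for d in awkward_dates(2020):
        print(d)
```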
LOGGING AND MONITORING
• “Free testing”!
• Not just useful for support/ops
• Rich supply of information
• Make it queryable
• Some teams do all of their testing this way
• Reliance depends on risk appetite
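"Make it queryable" can start very simply: emit one JSON object per log line, so standard tools (grep, jq, or a log store) can filter and aggregate it. A minimal sketch; the field names here are illustrative assumptions, not from the talk.

```python
# Structured logging sketch: one JSON object per line, so the log
# becomes a dataset you can query, not just text you can read.
import json
import sys
import time

def log_event(event, **fields):
    """Write a single JSON log line to stdout and return the record."""
    record = {"ts": time.time(), "event": event, **fields}
    sys.stdout.write(json.dumps(record, sort_keys=True) + "\n")
    return record

log_event("quote_request", insurer="example-insurer", status=200)
log_event("quote_request", insurer="example-insurer", status=500)
```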
METRICS
• Bug counts are a lousy metric
• Get qualitative information
• Interview team members
• What’s going well, what’s going badly
• Ask your testers what challenges they face
• “But that takes time and effort…” – Good!
• “But that’s difficult to scale…” – Don’t over-manage!
• Hire great people. Trust them. Support them.
A MANIFESTO FOR CONTINUOUS QUALITY
• Encourage easier communication
• Highlight problems
• Reveal value (and threats to value)
• Get involved earlier
• Arm yourself with tools/knowledge to work alongside developers
• Make bold and courageous choices
• Deliver quality software quicker
(a work-in-progress!)
Announced yet or not?
Feedback please! (Both good and bad)
In the beginning
Early part of career
Scheming: Keeping test ideas to myself, to “catch out” developers.
Perpetuating the divide
Oracles/heuristics
Talking about testing
Generating test ideas
= testing is a skill which can be learned, developed
The James Bach RST Online comment persuaded me to focus on my advocacy skills; hence the workshop/dojo (still the best theory material I've built).
When getting workshop material reviewed, at least one reviewer mentioned a more lightweight approach - I did have one slide ("you may not need that bug") but acknowledged it wasn't where I was at right then.
Since then - Amido, CTM, ZPG.
Three strong agile/kanban shops.
Some of the challenges, e.g. hard deadlines (marketing campaigns & regulatory requirements; one-week sprints), mean that traditional, verbose bug reports are outdated (list why).
Something that I'm good at being taken away!
Mental challenge to adjust.
[Oracle] Holding onto things you’re good at, or avoiding things you don’t know, holding back the team
More pragmatic approach to quality
Continuous testing?
Continuous breathing?
Cloud / mind map?
WHAT IS A BUG REPORT? A snapshot of information, to communicate with team/stakeholders, to guide decisions. A lot of that is possible without writing stuff down.
A pragmatic approach to quality
Example from Supercharging, the dense bug (with fix suggestion) that nobody read.
TITLE SLIDE: YAGNI / ZERO BUGS
YAGNI - if it's not important enough to fix now, will it really be later? Current team has < 30 bugs in JIRA (rest is stories/tasks) because we don't tend to carry forward, and if we finish sprints early (or Thu PM code freeze) then devs are pushed towards the bug backlog. (Different if there are clients involved, especially if they have final say over defect prioritisation - CBRE)
No time to file a bug report!
In a tough situation – “good enough” is enough
Chris Kenst http://www.kenst.com/2017/04/testers-dont-be-afraid-to-make-production-changes/
SUBMITTING MY OWN PULL REQUESTS – finding a small bug in a story I'm testing: if I can identify the root cause, AND fix it in code, AND test my fix as well as I would somebody else's, that reduces the admin time of bug reports.
The ultimate TDD – Tester Driven Development!
When I left CTM – “I didn’t realise you were a tester”
If I add a feature myself – that’s fine but (JB) then I am adopting a developer role.
Get involved earlier
PARTICIPATING IN CODE REVIEWS. I can read better than I can write! It doesn't hurt to be able to pick up some of the lingo. Example: we changed a SQL query and I identified that the new query would break existing functionality, without having to run it to prove it. Now I'm a formal part of the CR process!
AND – it shouldn’t be my job to spot it; why wasn’t there a unit test to catch this?
plus - as someone who's not working with the code as much as developers, if I can't grasp what the code's doing, a newcomer to the team might struggle too.
Date/time knowledge (“this is simple” – “Hold my beer”)
Plenty of room for a tester in this scenario
Scripted tools don't have to be fully autonomous solutions, but they can automate the pain points of our day-to-day jobs, making it easier to deliver valuable information which humans can use to make decisions.
http://techjobs.comparethemarket.com/blog/automation-testing-pragmatic-approach
Can’t fully automate (differences are not necessarily problems)
Automate the time-consuming part (comparing two documents, highlighting differences)
Leaves the human free to focus on the part which needs greater thought (unstructured analysis)
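That split could be sketched with Python's difflib (the helper name and sample data are mine): the script surfaces where two documents differ, and a human judges which differences are actually problems.

```python
# "Automate the comparison, not the judgement": difflib finds the
# changed lines; a human decides whether each change matters.
import difflib

def highlight_differences(old_text, new_text):
    """Return only the added/removed lines between two documents."""
    diff = difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(),
        fromfile="old", tofile="new", lineterm="",
    )
    # Keep +/- lines; drop the file headers and unchanged context.
    return [line for line in diff
            if line[:1] in "+-" and not line.startswith(("+++", "---"))]

old = "Premium: £100\nExcess: £250\nCover: comprehensive"
new = "Premium: £110\nExcess: £250\nCover: comprehensive"
print(highlight_differences(old, new))
# → ['-Premium: £100', '+Premium: £110']
```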
Learned about:
Slack bots and how they communicate
Creating and managing a Heroku instance
The value of writing your own logging
https://xkcd.com/1205/
Gives good ballpark figures
5 years might be a bit long for a typical company’s ROI
Doesn’t include the time to support the solution
Logging/monitoring = free testing
Another team broke our contract
We spotted it before they did
Helped them write the tests to stop it happening again
Sending requests for price comparisons to dozens of insurers
Log when we get an error back, automated alert if error rate is high
We tell them about problems in their own systems before they’ve spotted them – building credibility
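The alerting idea above could be sketched like this (the window size, threshold, and class name are illustrative assumptions, not the team's actual values):

```python
# Sketch of error-rate alerting: track recent request outcomes in a
# sliding window and flag when the error rate crosses a threshold.
from collections import deque

class ErrorRateMonitor:
    def __init__(self, window=100, threshold=0.1):
        self.outcomes = deque(maxlen=window)  # True = error response
        self.threshold = threshold

    def record(self, is_error):
        self.outcomes.append(is_error)

    def should_alert(self):
        if not self.outcomes:
            return False
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate >= self.threshold

monitor = ErrorRateMonitor(window=10, threshold=0.3)
for status in [200, 200, 500, 200, 500, 500]:
    monitor.record(status >= 500)
print(monitor.should_alert())  # 3 errors in 6 requests = 0.5 → True
```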
Metrics/numbers?
Thinking back to the bug nobody read - what would I do differently now? Discuss with developers (how easy/viable is the suggested fix). Discuss with stakeholders (do they care). At that point, either fix it or write a one-liner (ok two lines, remind reader why it matters).
With 1wk sprints, as long as nothing has a major knock-on for users, it can be fixed quickly with little impact
…caveats?
Conclusion
Bring back round to continuous quality