2. The Immense Value of Automated Tests and How to Avoid Writing Them (alternate title)
Brian Okken
3. Brian Okken
• Weekly Python podcasts
• A book
• I work here
• New meetup: Python PDX West, Oct 8, 6 pm (Python-PDX-West)
4. Outline
• A development workflow
• which includes a build pipeline
• which includes tests
• that I don’t want to spend too much time writing
• so most of my test cases use parametrization*
*one of many techniques I use to avoid writing tests
5. Target Workflow
[Diagram: dev / fix / feature branches split off main over time; merge requests trigger pipeline runs. Drawing elements: Vincent Driessen, Creative Commons license.]
• Branch off main
• Solo or collaborating
• Tests and code merge together
• Pipeline does magic
6. Developer Workflow
• Write some code & some tests.
• Commit code regularly
• Merge Request / Pull Request
• Pipeline does most of the work: build, test, etc.
• Reviewers get notified.
• Reviewers think my code is awesome & accept it.
• Merge finishes
• Fist bumps, high fives, etc.
• Repeat
7. After merge, I know
• I didn't break anything that used to work.
• New features are tested with new tests.
• Future changes won’t break current features.
• Team understands code and tests.
• The code is ready for users.
• I can refactor code I'm not proud of, knowing the tests will make sure everything is OK.
8. Reviewer knows, before the review
✓ Static analysis passes
✓ Style guide checks pass
✓ Code coverage has not dropped
✓ All tests pass
✓ Legacy functionality still works
✓ New tests pass
9. Reviewer Focus
• Just the code + test for this feature.
• Do I understand the code and the tests?
• Enough to maintain it if the original dev is on vacation?
• Are the tests sufficient for the new functionality?
10. Team Lead / Manager View
• Awesome code keeps popping out
• The tests have our backs.
• We’re moving fast.
• Big refactoring/rewrites are low risk.
• I can understand the tests.
• Maybe even write some tests myself.
11. Tests in a Pipeline
• fail fast
• negative feedback as fast as possible (a marker-based sketch follows below)
• feeds into a deploy stage, maybe
[Pipeline diagram: stages include static analysis, smoke tests, new tests (quick but thorough), and longer running tests]
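One way to split the quick stages from the longer-running ones (not something this deck shows) is pytest markers; a minimal sketch, assuming a project-defined smoke marker:

import pytest

# Assumed marker, registered in pytest.ini so pytest doesn't warn:
#   [pytest]
#   markers =
#       smoke: quick sanity checks run in the first pipeline stage
#
# The pipeline can then fail fast:
#   pytest -m smoke          # quick stage, negative feedback early
#   pytest -m "not smoke"    # longer-running stage afterwards

@pytest.mark.smoke
def test_quick_sanity():
    # deliberately tiny: proves the test environment is alive
    assert True

def test_long_and_thorough():
    # placeholder for a slower, more thorough check in a later stage
    ...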
12. Tests to support this
• Customer focused
• Developer focused
• Feature / functionality focused
• Risk focused
• Complete but not crazy complete
• Have to be readable, fast to write, easy to maintain
13. Parametrization
• Many test cases with one test function.
• pytest has a few strategies for this (sketched below):
• function parametrization
• fixture parametrization
• a hook function: pytest_generate_tests()
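A minimal sketch of all three strategies, using throwaway example values that are not from the talk:

import pytest

# Function parametrization: the decorator supplies the test cases.
@pytest.mark.parametrize('word', ['', 'a', 'a longer phrase'])
def test_function_param(word):
    assert isinstance(word, str)

# Fixture parametrization: every test using the fixture runs once per param.
@pytest.fixture(params=[1, 2, 3])
def number(request):
    return request.param

def test_fixture_param(number):
    assert number > 0

# Hook function: build the parameter list programmatically.
# pytest_generate_tests() can live in the test module or in conftest.py.
def pytest_generate_tests(metafunc):
    if 'letter' in metafunc.fixturenames:
        metafunc.parametrize('letter', list('abc'))

def test_hook_param(letter):
    assert letter in 'abc'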
14. cards
$ cards
ID   owner   done   summary
---- ------- ------ ----------------
$ cards add prepare for talk
$ cards add give talk
$ cards
ID   owner   done   summary
---- ------- ------ ----------------
1                   prepare for talk
2                   give talk
$ cards update -o okken 1
$ cards update -o okken 2
$ cards finish 1
$ cards
ID   owner   done   summary
---- ------- ------ ----------------
1    okken   x      prepare for talk
2    okken          give talk
15. a test
import cards
from cards import Card
def test_add(tmp_path):
    # setup: point cards at a temporary database
    cards.set_db_path(tmp_path)
    cards.connect()
    # the test itself
    a_card = Card('first task', 'brian', False)
    id = cards.add_card(a_card)
    c2 = cards.get_card(id)
    # teardown (note: skipped if anything above raises)
    cards.disconnect()
    assert a_card == c2
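The next slide leans on an empty_db fixture to own the setup and teardown. The fixture itself isn't shown in this deck; a sketch based on the setup code above could look like:

import pytest
import cards

@pytest.fixture()
def empty_db(tmp_path):
    # setup: point cards at a fresh, temporary database
    cards.set_db_path(tmp_path)
    cards.connect()
    yield
    # teardown: runs even if the test fails
    cards.disconnect()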
17. so we can focus on this test
def test_add(empty_db):
    a_card = Card('first task', 'brian', False)
    id = cards.add_card(a_card)
    c2 = cards.get_card(id)
    assert a_card == c2
18. so we can focus on this test
def test_add(empty_db):
    a_card = Card('first task', 'brian', False)
    id = cards.add_card(a_card)
    c2 = cards.get_card(id)
    assert a_card == c2

from dataclasses import dataclass, field

@dataclass
class Card:
    summary: str = None
    owner: str = None
    done: bool = None
    id: int = field(default=None, compare=False)
But what about all the other kinds of cards?
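One answer, in the direction the talk is heading, is function parametrization over the Card fields; a sketch with made-up card values, assuming fields left as None round-trip through the database unchanged:

import pytest
import cards
from cards import Card

# Each tuple becomes its own test case.
@pytest.mark.parametrize(
    'summary, owner, done',
    [
        ('just a summary', None, None),
        ('with an owner', 'brian', None),
        ('already done', None, True),
        ('everything set', 'okken', True),
    ],
)
def test_add_variations(empty_db, summary, owner, done):
    a_card = Card(summary, owner, done)
    id = cards.add_card(a_card)
    assert cards.get_card(id) == a_card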