This document discusses implementing automated testing for continuous delivery. It recommends building a testing pyramid with unit tests at the bottom and UI tests at the top. Unit tests should be quick to run and numerous, while UI tests should be fewer and cover main areas. Tests should be automated collaboratively by testers and developers. Challenges include flaky tests, maintaining tests, and adding tests to legacy codebases. The document also provides guidance on test data practices and avoiding dependencies between tests.
6. Software Testing Pyramid
Implementing Automated Testing

[Pyramid diagram, bottom to top: Unit tests; Service tests (API, Integration, Component); UI tests / End-to-end; Manual tests. Lower layers are technology facing, upper layers business facing.]
Source: http://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid
7. Software Testing Pyramid: Unit Tests
Tests at the bottom of the pyramid focus on smaller sections of code, e.g. unit tests. These tests are the foundation of a good test automation strategy: they are quick to run, and there should be many of them. They run at the earliest stages of the pipeline.
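As a sketch of the kind of test that belongs at the base of the pyramid, here is a hypothetical pure function (`discount`, not from the deck) with fast, isolated unit tests:

```python
# Hypothetical example: fast, isolated unit tests for a small pure
# function. Tests like this run in milliseconds, so a suite can hold
# thousands of them and still finish early in the pipeline.

def discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamped to the 0-100 range."""
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

def test_discount_applies_percentage():
    assert discount(200.0, 25) == 150.0

def test_discount_clamps_out_of_range_values():
    assert discount(100.0, 150) == 0.0    # capped at 100%
    assert discount(100.0, -10) == 100.0  # floored at 0%
```

Because such tests touch no network, database, or UI, they can run on every commit before anything slower starts.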
8. Software Testing Pyramid: Service Tests
Tests in the middle of the pyramid cover larger aggregations of code - components, services, etc. Service tests provide many of the advantages of end-to-end tests while avoiding UI complexities. They run only after the build has passed the unit-level tests.
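A minimal sketch of a service-level test, assuming a hypothetical in-process WSGI service (`inventory_app`, invented for illustration): the test exercises the component through its API boundary, with no UI involved.

```python
# Hypothetical example: a service test that drives a component through
# its API (a WSGI callable here) entirely in-process, avoiding the
# slowness and brittleness of going through a UI.
import json

def inventory_app(environ, start_response):
    """Minimal WSGI service: GET /stock/<sku> returns stock as JSON."""
    stock = {"sku-1": 12, "sku-2": 0}
    sku = environ["PATH_INFO"].rsplit("/", 1)[-1]
    if sku in stock:
        body = json.dumps({"sku": sku, "available": stock[sku]}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
    else:
        body = b'{"error": "unknown sku"}'
        start_response("404 Not Found", [("Content-Type", "application/json")])
    return [body]

def call(app, path):
    """Invoke the app directly, as a service test would."""
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
    body = b"".join(app({"PATH_INFO": path}, start_response))
    return captured["status"], json.loads(body)

def test_known_sku_reports_stock():
    status, payload = call(inventory_app, "/stock/sku-1")
    assert status == "200 OK" and payload["available"] == 12

def test_unknown_sku_is_404():
    status, _ = call(inventory_app, "/stock/sku-9")
    assert status == "404 Not Found"
```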
9. Software Testing Pyramid: UI Tests
Tests at the top cover the "full stack" and are the slowest to run. Don't write a test for every acceptance criterion (an antipattern); instead, use a few journeys to cover the main areas of the code. They run only after the build has passed both the unit-level and service-level tests.
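A hedged sketch of the "few journeys" idea: instead of one UI test per acceptance criterion, a single journey walks a main path end to end. The `Shop` class below is a hypothetical stand-in for an application driven through its UI.

```python
# Hypothetical sketch: one end-to-end "journey" test covering
# browse -> add to cart -> check out, rather than a separate slow
# UI test for each individual acceptance criterion.

class Shop:
    """Stand-in for an application exercised through its UI."""
    def __init__(self):
        self.cart = []
        self.orders = []
    def browse(self):
        return ["book", "pen"]
    def add_to_cart(self, item):
        self.cart.append(item)
    def check_out(self):
        order = {"items": list(self.cart), "status": "placed"}
        self.orders.append(order)
        self.cart.clear()
        return order

def test_purchase_journey():
    shop = Shop()
    catalogue = shop.browse()
    shop.add_to_cart(catalogue[0])
    order = shop.check_out()
    assert order["status"] == "placed"
    assert order["items"] == ["book"]
    assert shop.cart == []   # the journey leaves the cart empty
```

One journey like this verifies several criteria at once; detailed edge cases belong in the faster layers below.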
10. Working Practices
Testers and developers should collaborate to write, run and maintain tests. Siloed testing, where development hands tests over to QA, not only creates long feedback loops but also leads to testers duplicating automated tests with manual tests. Expensive automated testing tools tend to make the feedback loop worse. Developers should be able to run all tests, including performance tests, to help them reproduce and diagnose any issue reported by QA.
12. Anti-Pattern: Ice-Cream Cone
x Avoid inverting your test pyramid.
x Testing like this, through the UI, is slow and leads to brittle tests.
x Avoid using only a UI-oriented testing tool, as that focuses effort on writing UI-level automated tests.
x If a bug is found by users, manual testing or high-level testing, push a test to catch it lower down the pyramid.
x The only tests at a given level should be for things that can't be caught at a lower level: when testing multiple components together, your tests should check only the component integration, not each component. That should already have been done by lower-level tests.
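A minimal sketch of the last point, using hypothetical `Checkout` and `PriceService` components (invented for illustration): the combined test verifies only that the two are wired together, trusting each component's own unit tests for its internal logic.

```python
# Hypothetical sketch: a test of two components together that checks
# only their integration (Checkout really calls PriceService), not the
# internals of each component - those are covered by lower-level tests.

class PriceService:
    def price_of(self, sku):
        return {"sku-1": 100}.get(sku)

class Checkout:
    def __init__(self, prices):
        self.prices = prices
    def total(self, skus):
        return sum(self.prices.price_of(s) for s in skus)

def test_checkout_asks_price_service():
    calls = []
    class RecordingPrices(PriceService):
        def price_of(self, sku):
            calls.append(sku)            # record the collaboration
            return super().price_of(sku)
    total = Checkout(RecordingPrices()).total(["sku-1", "sku-1"])
    assert calls == ["sku-1", "sku-1"]   # the integration happened
    assert total == 200                  # one sanity check, not exhaustive pricing logic
```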
13. Challenges
Flaky tests
Tests take too much work to maintain
Too much effort to add tests for legacy codebases
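One common source of flakiness is a test that depends on the real clock. A hedged sketch of a standard remedy (the `greeting` function is hypothetical): inject the time source so the test becomes deterministic.

```python
# Hypothetical sketch: a time-dependent function made testable by
# accepting the clock as a parameter, so the test no longer flakes
# depending on when the pipeline happens to run.
import datetime

def greeting(now=None):
    now = now or datetime.datetime.now()
    return "good morning" if now.hour < 12 else "good afternoon"

def test_greeting_is_deterministic():
    fixed = datetime.datetime(2020, 1, 1, 9, 0)   # always 09:00
    assert greeting(now=fixed) == "good morning"
```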
15. Test Data: Types of Test Data
Test-specific data: the data that drives the behaviour under test. It represents the specifics of the case under test.
Test reference data: data that needs to be there but actually has little bearing on the behaviour under test.
Application reference data: irrelevant to the behaviour under test, but needs to be there to allow the application to start up.
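A small sketch of how the three kinds of data show up in practice (all names here are hypothetical): a builder supplies defaults for the reference data, so each test states only the test-specific data that drives the behaviour under test.

```python
# Hypothetical sketch of the three data categories. The builder's
# defaults cover reference data; a test spells out only the datum
# that actually drives the behaviour it checks.

TAX_RATES = {"GB": 0.2}     # application reference data: needed for the
DEFAULT_COUNTRY = "GB"      # app to work at all, irrelevant to most tests

def make_order(amount, country=DEFAULT_COUNTRY, customer="any-customer"):
    """`customer` is test reference data; `amount` is test-specific."""
    return {"amount": amount, "country": country, "customer": customer}

def total_with_tax(order):
    return order["amount"] * (1 + TAX_RATES[order["country"]])

def test_tax_is_added_to_order_total():
    order = make_order(amount=100)   # only the driving datum is explicit
    assert abs(total_with_tax(order) - 120.0) < 1e-9
```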
16. Test Data: Best Practices
Rather than using database dumps, use the application's API to set up the correct state.
Don't use production data.
Avoid dependencies between tests.
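The practices above can be sketched together, assuming a hypothetical `AccountsApi` standing in for the application's public API: each test creates its own state through the API, so no database dump is needed and no test depends on another having run first.

```python
# Hypothetical sketch: test state set up through the application's API
# rather than a database dump, with each test building a fresh fixture
# so tests stay independent of each other and of ordering.

class AccountsApi:
    """Stand-in for the application's public API."""
    def __init__(self):
        self._accounts = {}
    def create_account(self, name, balance=0):
        self._accounts[name] = balance
        return name
    def deposit(self, name, amount):
        self._accounts[name] += amount
    def balance(self, name):
        return self._accounts[name]

def test_deposit_increases_balance():
    api = AccountsApi()                 # fresh state per test: no shared
    acc = api.create_account("alice")   # dump, no inter-test dependency
    api.deposit(acc, 50)
    assert api.balance(acc) == 50

def test_new_account_starts_at_given_balance():
    api = AccountsApi()
    acc = api.create_account("bob", balance=10)
    assert api.balance(acc) == 10
```

Because every test owns its fixture, the tests can run in any order, or in parallel, without interfering.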
18. LEARN MORE
Deploy a great product faster. Agile teams deliver working software early and often. Go automates and streamlines the build-test-release cycle for worry-free, continuous delivery of your product.
Share this ebook. Visit our Continuous Delivery Channel for more posts like this.