This document discusses test-driven development (TDD), a software development technique where test cases are written before implementation code. TDD involves writing a failing test case, then code to pass the test, and refactoring code as needed. Key principles are writing tests first, running tests frequently, and making code changes in small iterative steps. TDD aims to increase code quality and reduce bugs by fully testing code in short cycles.
2. OUTLINE
1. What
Introduction
What is TDD
2. Why
Motivation
Why TDD
3. How
Stages
Colors
Principles
Methodology
Best practices
Tools
4. Summary
Advantages
Disadvantages
Conclusions
5. References
3. INTRODUCTION
• Test-Driven Development (TDD) is a software development technique
that involves repeatedly first writing a test case and then implementing
only the code necessary to pass the test
• The technique began to receive publicity in the early 2000s as an aspect
of Extreme Programming
• A method of designing software, not merely a method of testing
4. WHAT IS TDD?
TDD is a technique whereby you write your test cases before you write any implementation code
Forces developers to think in terms of implementer and user
Tests drive or dictate the code that is developed
“Do the simplest thing that could possibly work”
Developers have less choice in what they write
An indication of “intent”
Tests provide a specification of “what” a piece of code actually does – it goes some way to defining an
interface
Some might argue that “tests are part of the documentation”
Could your customers/clients write tests?
5. WHY-MOTIVATION
If you intend to test after you’ve developed the system, you won’t have the time for testing.
-> Write the tests before the code!
If things get complicated, you might fear that “the system” doesn’t work.
-> Execute the tests and get positive feedback (everything still works) or get pointed to the
bit that does not / no longer works.
If you’re overwhelmed by the complexity, you get frustrated.
-> Start with the simplest thing and proceed in tiny steps!
If you don’t have tests for the code, you shouldn’t use it / ship it.
-> This can’t happen if you write the test first (so you reach better test coverage than with
functional tests).
If performance is only considered late, you won’t be able to just “add a little more
performance” to the system.
-> Re-use unit tests for performance tests even during development; don’t start with
performance tests late in the project!
6. WHY TDD?
• “If you don’t have tests, how do you know your code is doing the
thing right and doing the right thing?”
• Many projects fail because they lack a good testing methodology.
• It’s common sense, but it isn’t common practice.
• The sense of continuous reliability and success gives you a feeling
of confidence in your code, which makes programming more fun.
8. HOW - STAGES
1. Add a test
2. Run all tests and see if the new one fails
3. Write some code
4. Run tests
5. Refactor code
6. Repeat
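As a minimal sketch of one pass through these stages, here is a single red–green cycle in Python’s built-in unittest (PyUnit, listed under Tools below). The `add()` function and its test are hypothetical examples, not from the original slides: the test is written first (it would fail while `add()` is missing), then just enough code is written to make it pass.

```python
import unittest

# Step 3: just enough production code to satisfy the test below.
# (In a real cycle this would be written only AFTER seeing the test fail.)
def add(a, b):
    return a + b

# Step 1: the test, written before the implementation existed.
class TestAdd(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

# Steps 2 and 4: run all tests.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Step 5 (refactor) would then clean up any duplication before the cycle repeats with the next test.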
9. HOW - COLORS
Red
Write a little test that doesn’t work (and perhaps doesn’t even
compile at first).
Green
Make the test work quickly (committing whatever sins
necessary)
Refactor
Eliminate all of the duplication created in merely getting the test
to work, improve the design.
10. HOW - PRINCIPLES
• Look from the usage point of view – don’t start with objects (or design, or
...), start with a test.
• A failed test is a good test
• First think of the goal, the required functionality.
• Run the tests often – very often.
To determine whether you’ve reached the goal.
To catch any bugs that have crept back in.
• Take small steps (you can adjust the step size as needed)
11. HOW - METHODOLOGY
Test first – Code last
You may not write production code unless you’ve first
written a failing unit test
Test more – Code more
You may not write more of a unit test than is sufficient to
fail
Test again – Code again
You may not write more production code than is
sufficient to make the failing unit test pass
12. HOW – BEST PRACTICES
• Use special functions for setup and
initialization of test cases (setUp, tearDown, etc.)
• Use timeouts
• Maintain test code to the same standard as production code
• Review your tests with team
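The setup/teardown practice can be sketched in PyUnit as follows. The fixture here (a temporary file) is an illustrative assumption: `setUp()` runs before each test and `tearDown()` after it, so every test gets fresh, isolated state and no test depends on what a previous one left behind.

```python
import os
import tempfile
import unittest

class TestWithFixture(unittest.TestCase):
    def setUp(self):
        # Runs before EACH test: create a fresh temporary file.
        fd, self.path = tempfile.mkstemp()
        os.close(fd)

    def tearDown(self):
        # Runs after EACH test, even if it failed: clean up the fixture.
        os.remove(self.path)

    def test_file_starts_empty(self):
        self.assertEqual(os.path.getsize(self.path), 0)

    def test_write_then_read(self):
        with open(self.path, "w") as f:
            f.write("hello")
        with open(self.path) as f:
            self.assertEqual(f.read(), "hello")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWithFixture)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because each test rebuilds its own fixture, the order in which the tests run does not matter – exactly the kind of inter-test dependence the next slide says to avoid.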
13. HOW – AVOID
• Tests that depend on system state
manipulated by previous test cases
• Dependencies between test cases
• Exact timing or performance
• Slow tests
14. HOW - TOOLS
• JUnit (Java)
• XCTest (Objective-C)
• CppUTest (C++)
• PyUnit (Python)
• ScalaTest (Scala or Java)
15. ADVANTAGES
For developers:
Much less debug time
Code proven to meet
requirements
Tests become Safety Net
Eliminate Bug Pong
Rhythm of Success
For business:
Shorter development cycles
Near zero defects
Tests become an asset
Tests are documentation
Competitive advantage!
16. DISADVANTAGES
• Programmers like to code, not to test
• Test writing is time consuming
• Test completeness is difficult to judge
• TDD may not always work
17. CONCLUSIONS
• TDD produces 100% tested/testable code
Ideally, no code should go into production unless it has associated tests
Catch bugs before they are shipped to your customer
Often referred to as “No code without tests”
• Tests determine, or dictate, the code; they are driven by the requirements
• TDD lets you use tests as
A validation tool
A documentation tool
A design tool
• Increases development speed, because less time is spent chasing bugs.
• Improves code quality because of the increased modularity, and continuous and relentless
refactoring.
• Decreases maintenance costs because the code is easier to follow.
18. CONCLUSIONS
• More code has to be written with TDD, but in practice this is not a problem: the time is recovered in reduced debugging and maintenance
• TDD does not replace traditional testing
It defines a proven way that ensures effective unit testing
Tests are working examples of how to invoke a piece of code
Essentially provides a working specification for the code
• Techniques have to be learned by developers and enforced by managers
• User Interface testing is the hardest
• No application code is written without writing a failing test first.
• It’s all about early problem identification, early find, early fix, reduced cost
Add a test
In test-driven development, each new feature begins with writing a test. To write a test, the developer must clearly understand the feature's specification and requirements. The developer can accomplish this through use cases and user stories to cover the requirements and exception conditions, and can write the test in whatever testing framework is appropriate to the software environment. It could be a modified version of an existing test. This is a differentiating feature of test-driven development versus writing unit tests after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but important difference.
Run all tests and see if the new one fails
This validates that the test harness is working correctly, that the new test does not mistakenly pass without requiring any new code, and that the required feature does not already exist. This step also tests the test itself, in the negative: it rules out the possibility that the new test always passes, and is therefore worthless. The new test should also fail for the expected reason. This step increases the developer's confidence that the new test is testing the right thing, and passes only in intended cases.
Write some code
The next step is to write some code that causes the test to pass. The new code written at this stage is not perfect and may, for example, pass the test in an inelegant way. That is acceptable because it will be improved and honed in Step 5.
At this point, the only purpose of the written code is to pass the test; no further (and therefore untested) functionality should be predicted nor 'allowed for' at any stage.
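A sketch of what “just enough code” can look like, assuming a hypothetical `is_leap(year)` requirement with two failing tests already in place: the implementation below satisfies exactly those tests and nothing more. It would be wrong for years like 2000, but since no test demands that behavior yet, adding it now would be the untested, “allowed for” functionality this step forbids.

```python
import unittest

class TestIsLeap(unittest.TestCase):
    def test_divisible_by_four_is_leap(self):
        self.assertTrue(is_leap(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap(1900))

# Just enough production code to make the two tests above pass.
# Inelegant or incomplete is acceptable here; refactoring comes later,
# and further behavior waits for further tests.
def is_leap(year):
    if year % 100 == 0:
        return False
    return year % 4 == 0

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestIsLeap)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```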
Run tests
If all test cases now pass, the programmer can be confident that the new code meets the test requirements, and does not break or degrade any existing features. If they do not, the new code must be adjusted until they do.
Refactor code
The growing code base must be cleaned up regularly during test-driven development. New code can be moved from where it was convenient for passing a test to where it more logically belongs. Duplication must be removed. Object, class, module, variable and method names should clearly represent their current purpose and use, as extra functionality is added. As features are added, method bodies can get longer and other objects larger. They benefit from being split and their parts carefully named to improve readability and maintainability, which will be increasingly valuable later in the software lifecycle. Inheritance hierarchies may be rearranged to be more logical and helpful, and perhaps to benefit from recognised design patterns. There are specific and general guidelines for refactoring and for creating clean code.[6][7] By continually re-running the test cases throughout each refactoring phase, the developer can be confident that the process is not altering any existing functionality.
The concept of removing duplication is an important aspect of any software design. In this case, however, it also applies to the removal of any duplication between the test code and the production code—for example magic numbers or strings repeated in both to make the test pass in Step 3.
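A small sketch of that kind of duplication removal, using a hypothetical `price_with_tax()` example: suppose the green step left the rate 0.07 hard-coded as a magic number in both the production code and the test. The refactoring gives the shared constant one named home, so the tests still pass but the duplication is gone.

```python
import unittest

# After refactoring: the rate that previously appeared as the magic
# number 0.07 in both test and production code now has a single,
# named definition that both sides refer to.
TAX_RATE = 0.07

def price_with_tax(net):
    return net * (1 + TAX_RATE)

class TestPriceWithTax(unittest.TestCase):
    def test_tax_is_applied(self):
        # The expectation is derived from the same named constant,
        # not from a duplicated literal.
        self.assertAlmostEqual(price_with_tax(100.0), 100.0 * (1 + TAX_RATE))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPriceWithTax)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```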
Repeat
Starting with another new test, the cycle is then repeated to push forward the functionality. The size of the steps should always be small, with as few as 1 to 10 edits between each test run. If new code does not rapidly satisfy a new test, or other tests fail unexpectedly, the programmer should undo or revert in preference to excessive debugging. Continuous integration helps by providing revertible checkpoints. When using external libraries it is important not to make increments that are so small as to be effectively merely testing the library itself,[4] unless there is some reason to believe that the library is buggy or is not sufficiently feature-complete to serve all the needs of the software under development.