This is a presentation I put together for a conference in 2011. It gives a fast, high-level view of where Agile Software Development came from, its core values and principles, and its core practices. It is structured as seven PechaKucha decks in a row, with short breaks in between, which requires high energy, intensity, and a sense of humor. :)
6. The Seven Sections
Memorize this because there WILL be a test!
1. History
2. Principles
3. Players
4. Lifecycle
5. Roles & People
6. Practices
7. User Stories and more
37. Responding to Change
over
Following a Plan
Welcome changing requirements,
even late in development. Agile
processes harness change for the
customer's competitive advantage.
39. Working Software
over
Comprehensive Documentation
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
40. Iterative Development
Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
41. Business people and developers must work together daily throughout the project.
42. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
43. Individuals and Interactions
over
Processes and Tools
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
44. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
45. Co-location
Daily Stand-Up
Retrospectives
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
48. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
49. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely. (Slide graphic: "Sustainable Pace")
87. Project lifecycle diagram: Inception, then Iteration 0, then the Build phase of the project, then Design & Operations / BAU, with successive Releases each delivering Features, MVPs, and other work.
181. Steven “Doc” List
Agile Coach
Doc@AnotherThought.com
www.StevenList.com
Editor's Notes
The waterfall model is a sequential design process, often used in software development, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing and Maintenance.

In the unmodified "waterfall model", progress flows from the top to the bottom, like a waterfall.

The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.

The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce,[1] though Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model (Royce 1970). This, in fact, is how the term is generally used in writing about software development: to describe a critical view of a commonly used software practice.
Over the years I have come to describe Test Driven Development in terms of three simple rules. They are:
1. You are not allowed to write any production code unless it is to make a failing unit test pass.
2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.
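Taken together, the rules enforce a very short red/green loop. A minimal sketch in Python, with invented names (`add`, `test_add`) purely for illustration:

```python
# Hypothetical illustration of the three TDD rules; names are invented.

# Rule 2: write no more of a unit test than is sufficient to fail.
# Run before `add` exists, this fails with a NameError, and a
# NameError (like a compilation failure) counts as a failure.
def test_add():
    assert add(2, 3) == 5

# Rules 1 and 3: now write just enough production code to make the
# one failing test pass, and no more.
def add(a, b):
    return a + b

test_add()  # green: the loop restarts with the next failing test
```

Each pass through the loop is small on purpose; the design grows one failing test at a time.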
Behaviour-Driven Development (BDD) is an evolution in the thinking behind TestDrivenDevelopment and AcceptanceTestDrivenPlanning.

It brings together strands from TestDrivenDevelopment and DomainDrivenDesign into an integrated whole, making the relationship between these two powerful approaches to software development more evident.

It aims to help focus development on the delivery of prioritised, verifiable business value by providing a common vocabulary (also referred to as a UbiquitousLanguage) that spans the divide between Business and Technology.

It presents a framework of activity based on three core principles:
- Business and Technology should refer to the same system in the same way - ItsAllBehaviour
- Any system should have an identified, verifiable value to the business - WheresTheBusinessValue
- Up-front analysis, design and planning all have a diminishing return - EnoughIsEnough

BDD relies on the use of a very specific (and small) vocabulary to minimise miscommunication and to ensure that everyone - the business, developers, testers, analysts and managers - is not only on the same page but using the same words.
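In practice, that shared vocabulary is often expressed as Given/When/Then. A minimal sketch in plain Python, where the `Account` domain and names are invented for illustration (teams typically use dedicated BDD tools such as Cucumber, JBehave, or behave):

```python
# Hypothetical Given/When/Then example; the Account domain is invented.

class Account:
    """Toy domain object for the example."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the customer withdraws 30
    account.withdraw(30)
    # Then the balance should be 70
    assert account.balance == 70

test_withdrawal_reduces_balance()
```

The point is not the tool but the sentence structure: business people and developers can both read "Given / When / Then" and mean the same thing by it.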
Feature Driven Development (FDD) is an iterative and incremental software development process. It is one of a number of Agile methods for developing software and forms part of the Agile Alliance. FDD blends a number of industry-recognized best practices into a cohesive whole. These practices are all driven from a client-valued functionality (feature) perspective. Its main purpose is to deliver tangible, working software repeatedly in a timely manner.

FDD was initially devised by Jeff De Luca to meet the specific needs of a 15-month, 50-person software development project at a large Singapore bank in 1997. Jeff De Luca delivered a set of five processes that covered the development of an overall model and the listing, planning, design and building of features. The first process is heavily influenced by Peter Coad's approach to object modeling. The second process incorporates Peter Coad's ideas of using a feature list to manage functional requirements and development tasks. The other processes, and the blending of the processes into a cohesive whole, are a result of Jeff De Luca's experience. Since its successful use on the Singapore project there have been several implementations of FDD.

The description of FDD was first introduced to the world in Chapter 6 of the book Java Modeling in Color with UML[1] by Peter Coad, Eric Lefebvre and Jeff De Luca in 1999. In Stephen Palmer and Mac Felsing's book A Practical Guide to Feature-Driven Development[2] (published in 2002), a more general description of FDD, decoupled from Java modeling in color, is given.

The original and latest FDD processes can be found on Jeff De Luca's website under the 'Article' area. There is also a community website where people can learn more about FDD, ask questions, and discuss experiences and the processes themselves.
Acceptance Test Driven Development (ATDD) is a practice in which the whole team collaboratively discusses acceptance criteria, with examples, and then distills them into a set of concrete acceptance tests before development begins. It's the best way I know to ensure that we all have the same shared understanding of what it is we're actually building. It's also the best way I know to ensure we have a shared definition of Done.
Origins in XP
Negotiable... and Negotiated
A good story is negotiable. It is not an explicit contract for features; rather, details will be co-created by the customer and programmer during development. A good story captures the essence, not the details. Over time, the card may acquire notes, test ideas, and so on, but we don't need these to prioritize or schedule stories.
Valuable
A story needs to be valuable. We don't care about value to just anybody; it needs to be valuable to the customer. Developers may have (legitimate) concerns, but these should be framed in a way that makes the customer perceive them as important.
This is especially an issue when splitting stories. Think of a whole story as a multi-layer cake, e.g., a network layer, a persistence layer, a logic layer, and a presentation layer. When we split a story, we're serving up only part of that cake. We want to give the customer the essence of the whole cake, and the best way is to slice vertically through the layers. Developers often have an inclination to work on only one layer at a time (and get it "right"); but a full database layer (for example) has little value to the customer if there's no presentation layer.
Making each slice valuable to the customer supports XP's pay-as-you-go attitude toward infrastructure.
Estimable
A good story can be estimated. We don't need an exact estimate, but just enough to help the customer rank and schedule the story's implementation. Being estimable is partly a function of being negotiated, as it's hard to estimate a story we don't understand. It is also a function of size: bigger stories are harder to estimate. Finally, it's a function of the team: what's easy to estimate will vary depending on the team's experience. (Sometimes a team may have to split a story into a time-boxed "spike" that will give the team enough information to make a decent estimate, and the rest of the story that will actually implement the desired feature.)
Small
Good stories tend to be small. Stories typically represent at most a few person-weeks' worth of work. (Some teams restrict them to a few person-days of work.) Above this size, it seems to be too hard to know what's in the story's scope. Saying "it would take me more than a month" often implicitly adds "as I don't understand what-all it would entail." Smaller stories tend to get more accurate estimates.
Story descriptions can be small too (and putting them on an index card helps make that happen). Alistair Cockburn described the cards as tokens promising a future conversation. Remember, the details can be elaborated through conversations with the customer.
Testable
A good story is testable. Writing a story card carries an implicit promise: "I understand what I want well enough that I could write a test for it." Several teams have reported that by requiring customer tests before implementing a story, the team is more productive. "Testability" has always been a characteristic of good requirements; actually writing the tests early helps us know whether this goal is met.
If a customer doesn't know how to test something, this may indicate that the story isn't clear enough, or that it doesn't reflect something valuable to them, or that the customer just needs help in testing.
A team can treat non-functional requirements (such as performance and usability) as things that need to be tested. Figuring out how to operationalize these tests will help the team learn the true needs.
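What a concrete customer test for a story might look like, sketched in Python with an invented shopping-cart story (`Cart`, `apply_discount`, and the amounts are all hypothetical, not from any real system):

```python
# Hypothetical acceptance test for a story such as:
# "As a shopper, I can apply a discount code at checkout."

class Cart:
    def __init__(self):
        self.total_cents = 0  # integer cents avoid rounding surprises

    def add_item(self, price_cents):
        self.total_cents += price_cents

    def apply_discount(self, percent):
        self.total_cents = self.total_cents * (100 - percent) // 100

def test_discount_code_reduces_total():
    cart = Cart()
    cart.add_item(5000)
    cart.add_item(5000)
    cart.apply_discount(10)          # the behaviour the story promises
    assert cart.total_cents == 9000  # a concrete, checkable expectation

test_discount_code_reduces_total()
```

A customer who can state the expectation this precisely ("a 10% discount on a $100 cart leaves $90") has demonstrated that the story is clear enough to build.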