In a conventional project, we focus on the functionality that needs to be delivered.
Performance might be important, but performance requirements are considered quite separate from functional requirements.
One approach is to attach “conditions” to story cards, e.g. this functionality must handle a certain load.
In our experience, where performance is of critical concern, pull out the performance requirement as its own story…
Calling out performance requirements as their own stories allows you to:
- validate the benefit you expect from delivering the performance
- prioritise performance work against other requirements
- know when you’re done
Not sure if you like this picture, I was really looking for a good shot looking out over no-man’s land at the Berlin Wall.
I want the idea of divisions along skill lines breeding hostility and a lack of cooperation.
Everything should be based on some foreseeable scenario, and on who benefits from it
Harder to do without repetition (involvement and feedback) [not sure if this makes sense anymore]
Extremely important to keep people focused, as it’s easy to drift
Capture different profiles
Separate simulation from optimisation -> Problem Identification vs Problem Resolution (or, broken down further, Solution Brainstorm -> Solution Investigation)
Linking back to the “why” is even more essential -> map to existing problems or fears
Latency vs throughput -> determine which is the most useful metric and define service level agreements (for example, 95% of requests complete within two seconds at peak load)
http://www.flickr.com/photos/denniskatinas/2183690848/
Not sure which one you like better
Here’s an example... (in the style of Feature Injection) “What’s our upper limit?”
Here’s another example... (in the style of Feature Injection), “Can we handle peaks in traffic again?”
So that we have confidence in meeting our SLAs, As the Operations Manager, I want to ensure that a sustained peak load does not take out our service
It helps to be clear about who is going to benefit from any performance testing (tuning and optimisation) that is going to take place. Ensure that they get a stake in prioritisation; that will help with the next point...
Evidence-based decision-making. Don’t commit to a code change until you know it’s the right thing to do.
It helps to have the customer (mentioned in the previous slide) be a key stakeholder to prioritise.
The application ends up better able to support performance testing, and so becomes easier to test
Just as TDD changes the design/architecture of a system
Need to find reference for this
Measuring performance early helps reveal which changes contribute to slowness
Performance work takes longer
Lead times are potentially long when the work is sequential – think of where a Gantt chart may actually be useful
Run it as a parallel track of work to normal functionality (not sequential)
Minimal environment availability (expensive, non-concurrent use)
Need minimal functionality or at least clearly defined interfaces to operate against
Want to have some time to respond to feedback -> work that into the process as early as possible and potentially change architecture/design
Start with the simplest performance test scenarios
-> Sanity test/smoke test (see the sketch after this list)
-> Hit all aspects
-> Use to drive out automated deployment (environment limitations, configuration issues, minimal set of reporting needs – green/red)
-> Hit integration boundaries but with a small problem rather than everything
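A minimal sketch of that first smoke-test story, assuming an Ant-based script (the URL, timeout and target name are made up for illustration; the point is the simple green/red outcome):

<target name="smoke-test">
  <!-- hit the application once over HTTP; go red if it does not answer in time -->
  <waitfor maxwait="30" maxwaitunit="second" timeoutproperty="smoke.failed">
    <http url="http://perf-env.example.com/app/status"/>
  </waitfor>
  <fail if="smoke.failed" message="Smoke test failed: application did not respond within 30 seconds"/>
</target>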
Next story might be a more complex script or something that drives out more of the infrastructure
Performance stories should not be:
-> Build-out tasks
-> Stories that don’t enhance anything without other stories
Log files -> Define the contents early. Consumer driven: the analysis is a contract on what each line must contain. Keep the files around. Keep notes on what was varied between runs
INVEST stories
Avoid the large “performance test” story
Separate types of stories
Optimise vs Measure
Optimise is the riskier type: less is known, and “done” is difficult to estimate
Measure is clearer, and allows you to make better-informed choices
Know when to stop
When enough is enough
The best lessons are learned from iterating, not from incrementing. Iterate over your performance test harness, framework and test fixtures. Make it easier to increment into new areas by incrementing in a different direction each time.
- Start with simple performance test scenarios
- Don’t build too much infrastructure at once
- Refine the test harness and things used to create more tests
- Should always be delivering value
- Identify useful features in performance testing and involve the stakeholder(s) to help prioritise them in
Prioritise and schedule in analysis stories (metrics and graphs)
Some of this work will still be big
Sashimi is nice and bite sized. You don’t eat the entire fish at once. You’re eating a part of it. Sashimi slices are nice and thin. There are a couple of different strategies for linking this in.
Think of sashimi as the thinnest possible slice.
Number of requests over time
Latency over time
“I don’t want to click through to each graph”
Automated build is a key XP practice.
The first stage of automating a build is often to automate compilation
However, for a typical project, we go on after compilation to run tests, as another automated step.
In fact we may have a whole series of automated steps that chain on after each other, automating many aspects of the development process, all the way from compiling source to deploying a complete application into the production environment.
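For example, a minimal sketch of such a chain in Ant; each target depends on the one before it, so running the last target runs the whole chain. Target names, paths and the deployment step are illustrative only:

<project name="example-app" default="test">
  <target name="compile">
    <mkdir dir="build/classes"/>
    <javac srcdir="src" destdir="build/classes"/>
  </target>
  <target name="test" depends="compile">
    <junit haltonfailure="true">
      <classpath path="build/classes"/>
      <formatter type="brief" usefile="false"/>
      <batchtest>
        <fileset dir="build/classes" includes="**/*Test.class"/>
      </batchtest>
    </junit>
  </target>
  <target name="deploy" depends="test">
    <!-- for example, copy the built application out to an environment -->
    <copy todir="/opt/example-app">
      <fileset dir="build/classes"/>
    </copy>
  </target>
</project>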
Automation is a powerful lever in software projects because:
it gives us reproducible, consistent processes
We get faster feedback when something goes wrong
Overall higher productivity – we can repeat an automated build much more often than we could if it was manual
In performance testing we can automate many of the common tasks, in a similar way to how we automate a software build.
For any performance test, there is a linear series of activities that can be automated (first row of slide)
In our recent projects we’ve been using the build tool ant for most of our performance scripting. You could use any scripting language, but here are some very basic scripts to show you the kind of thing we mean… [possibly animate transitions to the 4 following slides]
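As a rough illustration of the kind of thing we mean (a sketch only; the load tool, hosts, file names and target names here are placeholders, not the actual project scripts):

<project name="perf-test" default="perf-test">
  <tstamp>
    <format property="run.id" pattern="yyyyMMdd-HHmmss"/>
  </tstamp>
  <target name="deploy-app">
    <!-- push the latest build onto the performance environment -->
    <copy file="dist/app.war" todir="/opt/appserver/deploy"/>
  </target>
  <target name="generate-load" depends="deploy-app">
    <!-- drive a load-injection tool; JMeter's command line shown as one possibility -->
    <exec executable="jmeter" failonerror="true">
      <arg line="-n -t scripts/peak-load.jmx -l results/${run.id}.jtl"/>
    </exec>
  </target>
  <target name="archive-results" depends="generate-load">
    <!-- keep every run's raw results, named by timestamp -->
    <copy todir="archive/${run.id}">
      <fileset dir="results" includes="${run.id}.*"/>
    </copy>
  </target>
  <target name="perf-test" depends="archive-results"/>
</project>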
Once we’ve automated the running of a single test, we can move on to even more aspects of automation, such as scheduling and result archiving, which lead us into…
Continuous Performance testing.
Performance tests can take a long time to run, and you need all the time you can get to produce good results.
Lean on your automation to have tests running all the time, automatically using more hardware when available (in the evening or at the weekend for example)
For faster feedback, set up your CI server so that performance tests are always running against the latest version of the application.
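A sketch of what that hook might look like, again as an Ant target; the version-control command and target names are placeholders, and any CI server (or a cron schedule for evenings and weekends) could trigger it:

<target name="continuous-perf-test">
  <!-- update to the latest version, then run the full performance test chain -->
  <exec executable="svn" failonerror="true">
    <arg value="update"/>
  </exec>
  <antcall target="perf-test"/>
</target>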