This presentation tries to move the discussion of performance testing beyond the simple question "will it support x users?" to a focus on application optimisation.
2. Performance is the number 1 feature
   1. Speed
   2. Instant Utility
   3. Software is Media
   4. Less is More
   5. Make it Programmable
   6. Make it Personal
   7. RESTful
   8. Discoverability
   9. Clean
   10. Playful
4. Imperceptible differences have an effect
   - Number of searches per day decreases in proportion to the delay
   - Effect persists even after the delay is removed
7. Why do performance testing? So you know, ahead of time and across varying user loads, the system's:
   - Responsiveness
   - Throughput
   - Reliability
   after all changes that could affect performance, and before real users get access to the system. So you can:
   - Know whether it will meet operational objectives
   - Gauge the effect of architectural decisions
   - Tune the environment for optimal performance
   - Identify code hotspots
   - Etc.
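The responsiveness and throughput measurements described above can be sketched in a few lines. This is a minimal illustration, not a real load-testing tool: the `transaction` function is a hypothetical stand-in for the real operation under test (e.g. an HTTP request), and the user counts are arbitrary.

```python
# Minimal sketch: measure responsiveness (latency) and throughput of a
# "transaction" under a simulated concurrent user load.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def transaction():
    # Hypothetical stand-in for the real operation (e.g. an HTTP call).
    time.sleep(0.01)

def run_load(users, requests_per_user):
    latencies = []
    def worker():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            transaction()
            latencies.append(time.perf_counter() - start)
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(worker)
    elapsed = time.perf_counter() - start
    total = users * requests_per_user
    return {
        "throughput_rps": total / elapsed,
        "median_latency_s": statistics.median(latencies),
        "p95_latency_s": sorted(latencies)[int(len(latencies) * 0.95) - 1],
    }

print(run_load(users=5, requests_per_user=10))
```

A real tool (LoadRunner, SilkPerformer, etc.) adds ramp-up profiles, think times, and distributed injectors, but the measurements it reports reduce to the same latency and throughput arithmetic.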
8. The effect of architectural decisions: does the application behave the way it was architected? In the context of the transaction, are any anti-patterns evident?
9. Environment optimisation
   - Business processes
   - JVM/app server: garbage collection, threading, clustering, caching
   - Database
   - Web proxy
   - VM tuning
   - Frontend engineering
   - Load balancing: protocol offload, TOE, SSLisation
   - Storage
   - Misc black boxes
10. Identify code hotspots
   - Where is the transaction spending most of its time?
   - Which component is using the most CPU time?
   - Which components are memory hogs?
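Hotspot identification of this kind is usually done with a profiler. As a hedged illustration of the idea, here is a sketch using Python's built-in cProfile; `fast_path` and `slow_path` are invented stand-ins for real application components, and a commercial APM tool would do the equivalent across a whole distributed transaction.

```python
# Sketch: locating a code hotspot with Python's built-in cProfile.
import cProfile
import pstats
import io

def fast_path():
    return sum(range(100))

def slow_path():
    # Deliberately heavy work so it shows up as the hotspot.
    return sum(i * i for i in range(200_000))

def handle_transaction():
    fast_path()
    slow_path()

profiler = cProfile.Profile()
profiler.enable()
handle_transaction()
profiler.disable()

# Sort by cumulative time: the hotspot rises to the top of the report.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
print(report)
```

The same sort-by-cumulative-time view answers the first question on the slide; sorting by total time ("tottime") answers the CPU question, and a memory profiler answers the third.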
14. Cloud-based testing
   Load injection in the cloud:
   - SilkPerformer CloudBurst
   - Gomez Reality Load
   - LoadRunner in the Cloud
   - Keynote LoadPro
   - Amazon + software
   Load test environment:
   - Amazon
   - Rackspace
15. Summary
   - Performance matters – a lot
   - Even imperceptible performance improvements can make a big difference
   - Performance testing can add a lot of value across the application lifecycle
   - The cloud makes it easy to create and remove test environments and load injectors
Editor's notes
Earlier this year at the Future of Web Apps conference (http://futureofwebapps.com/) in Miami, Fred Wilson, of the VC firm behind Twitter, del.icio.us, FeedBurner, Heyzap, Indeed.com, Tacoda, Oddcast, Disqus, Zemanta, Clickable, Covestor, Etsy, etc., was asked to present his top-ten list of what makes a great web app. Number one, top of his list, was speed. "First and foremost, we believe that speed is more than a feature. Speed is the most important feature. If your application is slow, people won't use it. I see this more with mainstream users than I do with power users. I think that power users sometimes have a bit of a sympathetic eye to the challenges of building really fast web apps, and maybe they're willing to live with it, but when I look at my wife and kids, they're my mainstream view of the world. If something is slow, they're just gone." – Fred Wilson
This is one of the first performance tests backed by actual data rather than anecdote. Bing delayed server responses by amounts ranging from 50ms to 2000ms for test groups, measured against an undelayed control group. You can see the results of the tests above. Though the numbers may seem small, they represent large shifts in usage, and applied over millions of users they are very significant to usage and revenue. The results were so clear that the test was ended earlier than originally planned. The Time To Click metric is quite interesting: as the delays get longer, Time To Click increases at a more extreme rate (a 1000ms delay increases it by 1900ms). The theory is that the user gets distracted and disengages from the page; in other words, Bing has lost the user's full attention and has to get it back. http://en.oreilly.com/velocity2009/public/schedule/detail/8523
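The mechanics of such an experiment are simple: assign each user to a stable bucket and inject the bucket's artificial delay server-side before responding. The sketch below illustrates the idea only; the bucket names, delay values, and hashing scheme are assumptions, not Bing's actual implementation.

```python
# Sketch: server-side delay injection for an A/B latency experiment.
# Bucket names and delays are illustrative, not those used by Bing.
import hashlib
import time

DELAYS_MS = {"control": 0, "delay_50": 50, "delay_500": 500, "delay_2000": 2000}

def bucket_for(user_id: str) -> str:
    # Stable hash so a given user always lands in the same bucket
    # for the duration of the experiment.
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    buckets = sorted(DELAYS_MS)
    return buckets[h % len(buckets)]

def handle_request(user_id: str, respond):
    # Inject the bucket's delay before producing the real response.
    delay_ms = DELAYS_MS[bucket_for(user_id)]
    if delay_ms:
        time.sleep(delay_ms / 1000.0)
    return respond()
```

Logging the bucket alongside metrics such as searches per day and Time To Click is what lets the effect of each delay level be compared against the control.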
Google's test: Google ran a similar experiment where they tested delays ranging from 50ms to 400ms. The chart above shows the impact on users over the 7 weeks of the test. The most interesting thing to note is the continued effect the experiment had on users even after it had ended: some users never recovered, especially those with the greater delay of 400ms. Google tracked the users for an additional 5 weeks (12 in total). http://en.oreilly.com/velocity2009/public/schedule/detail/8523
This is the use case on everyone's mind. What if I launch this application and it crashes and burns?! What if we run that marketing campaign and the site can't take the additional user load? What if we switch over to the new system and employees can't do their jobs? This use case is so compelling for load testing that it has rather drowned out the other areas we are going to talk about. The people who have invested time and money in the application want to know whether it will work when it goes live. The deliverable everyone talks about is: will the application work when we hit x users?
So the performance testing team focuses on that key deliverable: working out whether the application will support the number of users it should. If it does, it passes; if not, it doesn't. If an application goes live and then goes pear-shaped, the performance testing team gets to answer all of the hard questions. So it is not surprising that the go/no-go decision gets a lot of attention, and not surprising that the performance testing team is careful about what goes out.
Google "performance anti-patterns" – the first line of the first hit is "Fixing Performance at the End of the Project" (http://highscalability.com/blog/2009/4/5/performance-anti-pattern.html).