Addressing Performance Testing Challenges in Agile: Webinar Q&A
Webinar: Addressing Performance Testing Challenges in Agile: Process and Tools
July 03, 2013
Questions and Answers from the session
Q. Could you please explain the best practices of performance testing?
A. This requires a detailed explanation, and best practices vary from application
to application. However, here are a few generic, proven practices that we
recommend:
1. The test environment should be as similar to production as possible
2. Use test data that is as close to current production data as possible
3. Business flows should be captured and workload modelling should be done based
on the production usage
4. Think times should be properly used in test scripts
5. Start the performance testing as early as possible
6. Use real time monitoring utilities during performance tests
7. Consider 90th percentile response times (see the sketch after this list)
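
To make item 7 concrete, here is a minimal sketch (not from the webinar) of computing a nearest-rank 90th percentile over recorded response times; the sample values are invented for illustration:

import java.util.Arrays;

public class PercentileExample {
    // Returns the value at or below which 90% of the samples fall
    // (nearest-rank method; other definitions interpolate instead).
    static long percentile90(long[] responseTimesMs) {
        long[] sorted = responseTimesMs.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(0.90 * sorted.length); // 1-based rank
        return sorted[rank - 1];
    }

    public static void main(String[] args) {
        long[] samples = {120, 135, 140, 150, 160, 180, 200, 240, 300, 900};
        // The single outlier (900 ms) barely moves the 90th percentile,
        // which is why it is preferred over the average for SLA reporting.
        System.out.println("90th percentile: " + percentile90(samples) + " ms");
    }
}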
Q. How do you approach an unfortunate scenario where the software item in the
current sprint passes tests in isolation but fails when run concurrently with loads
from other software items? The sprint cannot be verified as it has failed.
A. In this situation, the tests pass in isolation but fail in an integrated
environment. This actually helps identify performance issues caused by
different pieces of software working together, which is the most likely scenario in
production. To mitigate this risk, integrated performance tests can be executed in
a dev/QA environment to which the build is promoted at a periodic frequency.
Q. Can we integrate LoadRunner scripts with CI? How can we accommodate
frequent changes to the system?
A. To integrate with CI, a tool should provide a command-line interface for test
execution. Since LoadRunner provides a command-line interface, it should be
possible to integrate it with a CI tool, as sketched below. Scheduling is also
possible for LoadRunner scripts. To accommodate frequent changes to the system,
the tool should provide features to update scripts by inserting new requests or
modifying existing ones. If there are major changes, a re-recording is always
recommended.
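
As a rough illustration of such a CI hook, the sketch below launches the LoadRunner Controller from Java and fails the build on a non-zero exit code. The Wlrun.exe flags shown (-Run, -TestPath, -ResultName) are commonly documented Controller options, but treat the exact flags, paths, scenario file, and the BUILD_NUMBER variable as assumptions to verify against your LoadRunner version and CI server:

import java.io.IOException;

public class LoadRunnerCiStep {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical paths and scenario; adjust to your installation.
        ProcessBuilder pb = new ProcessBuilder(
                "C:\\Program Files\\HP\\LoadRunner\\bin\\Wlrun.exe",
                "-Run",                                  // run the scenario and exit
                "-TestPath", "C:\\perf\\checkout.lrs",   // previously recorded scenario
                "-ResultName", "C:\\perf\\results\\build-" + System.getenv("BUILD_NUMBER"));
        pb.inheritIO(); // stream Controller output into the CI console log
        int exitCode = pb.start().waitFor();
        if (exitCode != 0) {
            // A non-zero exit fails the CI job, acting as a regression gate.
            System.exit(exitCode);
        }
    }
}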
Q. How can you avoid assuming linear scalability of a system when only conducting
end-to-end performance testing on a limited number of physical assets (i.e.
servers)?
A. To avoid assuming this, it is recommended to run a step-up load test and
study the response-time pattern. If response times grow linearly as the load steps
up, a scaling pattern can be formulated; the point where the curve bends upward
shows where the linear assumption breaks down. A minimal sketch follows.
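
Below is an illustrative step-up load sketch in Java. The endpoint URL is hypothetical, errors are ignored for brevity, and a real test would use a proper load tool; the point is only to show stepping the user count and watching the mean response-time trend:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class StepUpLoadSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://test-env.example.com/checkout")).build(); // hypothetical URL

        // Step users up (10, 20, 40, 80) and watch whether the mean
        // response time grows linearly or bends upward at some step.
        for (int users : new int[]{10, 20, 40, 80}) {
            ExecutorService pool = Executors.newFixedThreadPool(users);
            AtomicLong totalMs = new AtomicLong();
            CountDownLatch done = new CountDownLatch(users);
            for (int i = 0; i < users; i++) {
                pool.execute(() -> {
                    try {
                        long start = System.nanoTime();
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                        totalMs.addAndGet((System.nanoTime() - start) / 1_000_000);
                    } catch (Exception ignored) {
                        // failures are skipped in this sketch
                    } finally {
                        done.countDown();
                    }
                });
            }
            done.await();
            pool.shutdown();
            System.out.printf("%d users -> mean %.1f ms%n", users, totalMs.get() / (double) users);
        }
    }
}

If the mean stays roughly proportional across the steps, linear extrapolation is defensible within that range; a sharp bend marks the knee of the curve.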
Q. Most of the latest technologies these days (like Silverlight, HTML5, etc.) use
client-side processing. So what is the best approach to capture client-side rendering time?
A. Tools like YSlow provide good details on client-side responses. Other
browser-based tools like Firebug, HTTP Watch, etc. can also be used. Another
popular tool for testing client-side performance is dynaTrace AJAX Edition, which
provides record-and-playback functionality to identify issues with HTML page
rendering, loading of page components, JavaScript execution, etc. Custom utilities
can also be created using the developer tools provided by Chrome, as in the
sketch below.
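
As one way to build such a custom utility (an approach not covered in the webinar), the sketch below drives Chrome through Selenium WebDriver and reads the W3C Navigation Timing API. It assumes the selenium-java dependency and a chromedriver binary on the PATH; the page URL is hypothetical:

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class RenderTimingSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // requires chromedriver on PATH
        try {
            driver.get("http://test-env.example.com/dashboard"); // hypothetical page
            // The Navigation Timing API exposes client-side milestones;
            // loadEventEnd - navigationStart approximates full page load,
            // including client-side resource loading and rendering.
            long loadMs = (Long) ((JavascriptExecutor) driver).executeScript(
                    "return window.performance.timing.loadEventEnd"
                    + " - window.performance.timing.navigationStart;");
            System.out.println("Client-side page load: " + loadMs + " ms");
        } finally {
            driver.quit();
        }
    }
}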
Q. Do you recommend performance tuning for bugs or tasks at story level?
A. Yes. We can create specific stories for performance issues or tuning so that we can
track them during the Sprint.
Q. How many performance engineering issues/defects were found during CPM (vs.
the hardening sprint at the end)?
A. The objective of CPM is to have an automated performance test execution that
can be triggered with each dev/QA build to measure performance regression. It
covers issues such as memory leaks and connection leaks caused by bad
programming practices. The hardening Sprint, in contrast, is more of a performance
certification toward the end of development in a dedicated performance environment.
In some of our engagements, we have been able to identify a few critical performance
defects during Sprint development itself that required design and code changes.
Q. How many additional staff were needed for CPM in the example of the Impetus
customer project?
A. The entire CPM setup was performed by a team of 2 performance engineers.
They were responsible for automating the performance test data creation, test
execution and result analysis.
Q. Monitoring tools capture server metrics at a minimum interval of 5
minutes, but during performance test execution we want to capture metrics
at an interval of 5-8 seconds. What tools do you suggest for capturing metrics
at that granularity?
A. Many of the enterprise performance testing tools like LoadRunner, Silk
Performer, SandStorm, etc. offer integrated monitoring capabilities. These can
be configured to monitor at a specified frequency in seconds or minutes. Apart
from that, each OS provides utilities for monitoring: for example, Windows provides
Perfmon, and Unix provides command-line tools like top, vmstat, etc. All enterprise
application and database servers also provide consoles for real-time
monitoring. Beyond these, tools like Nagios, Zabbix, and Ganglia provide
monitoring options for a range of servers. A lightweight custom sampler can
also be scripted, as sketched below.
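
For cases where no agent is available, the sketch below uses only the standard JDK to log system load and JVM heap usage every 5 seconds; the output format is an arbitrary choice for illustration:

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FiveSecondSampler {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        Runtime rt = Runtime.getRuntime();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Sample every 5 seconds for the duration of the test run; in practice
        // you would write to a CSV aligned with the test's timestamps.
        // Note: getSystemLoadAverage() may return -1 on platforms that do not
        // support it (e.g., some Windows versions).
        scheduler.scheduleAtFixedRate(() -> {
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.printf("%d,load=%.2f,heapUsedMb=%d%n",
                    System.currentTimeMillis(), os.getSystemLoadAverage(), usedMb);
        }, 0, 5, TimeUnit.SECONDS); // runs until the process is stopped
    }
}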
Q. If we have to do performance testing of unit-level code, we need a
lot of stubs and drivers, which take a huge amount of time to develop. How
can you consider this a best approach?
A. We have found that these stubs and drivers are also reusable across the dev and
QA environments when the external systems are not available for testing. Even to
run a unit test in the dev environment, these stubs are required. So, while it initially
takes time to build these stubs, they provide a huge ROI in later phases. Unit
testing of code can also be performed using JUnit, and the unit can be exercised
with concurrent threads (via JUnit extensions or a plain thread pool, as sketched
below). So the existing unit test environment can be leveraged for performance
unit testing.
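
As a sketch of this idea: plain JUnit does not ship a concurrency harness (extensions such as JUnitPerf add one), but an ExecutorService inside an ordinary JUnit 4 test achieves the same effect. The OrderService class and the 200 ms budget below are hypothetical placeholders:

import static org.junit.Assert.assertTrue;

import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.junit.Test;

public class OrderServicePerfTest {

    // Hypothetical unit under test; replace with your real class.
    static class OrderService {
        long calculatePrice(String sku) {
            return sku.hashCode() & 0xFF; // placeholder computation
        }
    }

    @Test
    public void priceCalculationStaysFastUnderConcurrency() throws Exception {
        int threads = 20;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        Callable<Long> timedCall = () -> {
            long start = System.nanoTime();
            new OrderService().calculatePrice("SKU-123");
            return (System.nanoTime() - start) / 1_000_000; // elapsed ms
        };
        // Run the same timed call on 20 concurrent threads.
        List<Future<Long>> results = pool.invokeAll(Collections.nCopies(threads, timedCall));
        pool.shutdown();
        for (Future<Long> f : results) {
            // Fail the build if any call exceeds the illustrative 200 ms budget.
            assertTrue("call exceeded 200 ms", f.get() < 200);
        }
    }
}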
Q. How do we simulate a production-like environment in the development
phase?
A. I agree that a production-like environment cannot be simulated in the dev
environment because of cost factors. But the objective of running
performance tests in the dev environment is not to get absolute numbers; it is to
establish a performance comparison across incremental builds and make sure that
there is no regression. Analyzing the delta in test results reveals performance
issues in the system (see the sketch below). In a few cases, we have also seen that
the dev environment is a scaled-down version of the production environment, and
mathematical models are used for result extrapolation.
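
A minimal sketch of such a delta analysis is shown below; the transaction names, timings, and 15% tolerance are invented for illustration:

import java.util.Map;

public class RegressionDeltaCheck {
    // Flags any transaction whose response time has degraded by more than
    // the tolerance (e.g., 15%) relative to the previous build's baseline.
    static boolean hasRegression(Map<String, Double> baselineMs,
                                 Map<String, Double> currentMs,
                                 double tolerance) {
        boolean regressed = false;
        for (Map.Entry<String, Double> e : baselineMs.entrySet()) {
            double current = currentMs.getOrDefault(e.getKey(), Double.MAX_VALUE);
            double delta = (current - e.getValue()) / e.getValue();
            if (delta > tolerance) {
                System.out.printf("%s regressed: %.0f ms -> %.0f ms (%.0f%%)%n",
                        e.getKey(), e.getValue(), current, delta * 100);
                regressed = true;
            }
        }
        return regressed;
    }

    public static void main(String[] args) {
        Map<String, Double> baseline = Map.of("login", 180.0, "search", 350.0);
        Map<String, Double> current = Map.of("login", 190.0, "search", 520.0);
        // search: (520 - 350) / 350 ~ 49% > 15% tolerance -> regression reported
        System.out.println("Regression: " + hasRegression(baseline, current, 0.15));
    }
}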
Q. How does this approach work when the programmer and the performance
engineer are the same person?
A. When the programmer and the performance engineer are the same person,
there is not much change in the approach. In fact, this helps because the
programmer, being a performance engineer, will also focus on performance. He
can create JUnit tests and use them to measure performance locally, take on
tasks for development as well as for creating performance test scripts for the
features he is developing, and use available profilers to make sure that there
are no performance or scalability issues in the code.
Q. Is anything different when developing Eclipse-based desktop software?
A. The process remains the same; it can be applied to any type of technology
or application. Only the tool set changes depending on these two factors. Unit
testing is an inherent part of every development effort. So, even while developing
Eclipse-based desktop software, the performance requirements will change: the
focus will be on code performance rather than virtual-user concurrency. As a
developer, you need to make sure that the software works correctly, responds
promptly, and doesn't crash. Tools like TPTP (an Eclipse plug-in) can be used
during development.
Write to us at inquiry@impetus.com for more information