In the past several years the software development lifecycle has changed significantly, with high-speed software releases, shared application services, and platform virtualization. The traditional performance assurance approach of pre-release testing does not address these innovations. To maintain confidence in acceptable performance in production, pre-release testing must be augmented with in-production performance monitoring. Obbie Pet describes three types of monitors (performance, resource, and VM platform) and three critical metrics fundamental to isolating performance problems (response time, transaction rate, and error rate). Obbie reviews techniques for acquiring and interpreting these metrics and describes how to develop a continuous performance monitoring process. Woven together with pre-release testing into a single integrated process, this monitoring offers the best bet for assuring performance in today's development world. Take away this integrated process for consideration in your own shop.
2. Obbie Pet
Ticketmaster
Obbie Pet has twenty years of experience in QA, as tester, test lead, and QA
manager. For the past thirteen years, Obbie has focused on performance testing in
the area of Internet ticketing (Tickets.com, LiveNation, Ticketmaster) and in the
insurance field (Wellpoint). He is certified on the LoadRunner testing tool by both
Mercury and HP. Five years ago, Obbie realized that achieving performance assurance
in production required expanding beyond just testing to include performance
monitoring. His focus now is sharing these ideas and implementing monitoring
solutions in support of performance assurance. Read more about Obbie and his
thoughts on performance assurance at QAStrategy.com.
4. Obbie Pet
Sr Performance Engineer
Ticketmaster.com
Internet ticketing space 10+ years
Ticketmaster Blog: http://tech.ticketmaster.com/
Contact: ObbiePet@ticketmaster.com
or opet@qastrategy.com
5. Ticketmaster
• 450M Tickets processed (annual)
• 180K Events (annual)
• 12K clients
• 19 countries
• >8M mobile tickets in 2012
6. Agenda
The problem
– Old paradigm SDLC
– New paradigm SDLC
– Old paradigm’s test tool isn’t cutting it.
The solution
– Need a second tool - Monitoring
– Monitoring use case
– Monitoring dashboard examples
Takeaways
– Performance Assurance process for your shop
7. Old SDLC Paradigm
• Quarterly releases, low velocity change
• Monolithic websites and applications
• Dedicated hardware
8. Old SDLC performance assurance
• Pre-Release testing
• Production candidate staged
• High demand is simulated
• Problems are detected and fixed
• Executed before every release
• Works great for old SDLC
• Newton's 3rd law
15. Shared services – explained 5 of 5
• Imagine 5, 10, or 50 applications concurrently running.
• It becomes hard to predict the demand on shared services.
16. Platform capacity contrast
• Software release / component change frequency: every quarter (old paradigm) vs. continuous (new paradigm)
• Shared services: minimal, predictable demand (old) vs. extensive, unpredictable demand (new)
• Platform capacity: fixed CPU on dedicated h/w (old) vs. variable CPU in a cloud implementation (new)
17. Does pre-release testing mitigate new paradigm risk?
• Software release / component change frequency (new paradigm: continuous). Addressed by pre-release testing? No. Code you depend on is changing between your releases.
• Shared services (new paradigm: extensive, unpredictable demand). Addressed? No. Doesn't cover the impact of other apps on shared resources.
• Platform capacity (new paradigm: variable, cloud implementation). Addressed? No. Doesn't expose under-allocation of production CPU.
18. How do we mitigate the new risk?
AUGMENT PRE-RELEASE TESTING WITH MONITORING.
• Monitoring, most importantly performance monitoring, can
mitigate performance risk associated with the new
paradigm.
19. How does monitoring mitigate these new risks?
• Software release / component change frequency (new paradigm: continuous): monitoring provides immediate feedback when a s/w change has impacted a component.
• Shared services (new paradigm: extensive, unpredictable demand): demand patterns on all services are observed and can be managed.
• Platform capacity (new paradigm: variable, cloud implementation): metrics are used to prevent application starvation of CPU/memory/disk/network.
20. What are the limitations of monitoring?
• Monitoring doesn't remove performance risks the way Pre-Release testing does.
• It's reactive rather than proactive.
• It’s the best approach I can think of.
21. How does Production monitoring reduce risk?
• Prevent performance issues from entering production (when
monitoring tools are used with Pre-Release testing)
• Early detection/remediation of performance issues, before the
customer notices.
• Fast resolution of performance issues reported by customers.
• Visibility for Operations
23. Production monitoring use case
• Customers are complaining the web site is too slow.
• Where is the problem? How does IT respond?
Assemble a team of experts from each tier (bad)
VS
Monitored metrics immediately identify
the broken tier (good).
24. Production monitoring use case
• Figuring out where the problem is located is 80% of the effort.
• Performance monitoring does this work for us. That's why it's valuable!
25. Production monitoring use case
• Diagnostic strategy (see the sketch below):
  • First isolate the problem tier: left to right ↔
  • Then isolate the problem layer within the tier: up and down ↕
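A minimal sketch of that left-to-right step, assuming per-tier response times are already being collected; the tier names, baselines, and numbers below are hypothetical:

    # Hypothetical sketch of the "left to right" tier-isolation step:
    # compare each tier's current response time against its normal baseline
    # and flag the tier with the largest regression. All numbers are made up.
    baseline_ms = {"web": 50, "app": 120, "db": 80}   # normal per-tier latency (assumed)
    current_ms = {"web": 55, "app": 130, "db": 510}   # latest monitored values (assumed)

    def worst_tier(baseline, current):
        """Return the tier whose latency grew the most relative to its baseline."""
        return max(current, key=lambda tier: current[tier] / baseline[tier])

    print("Suspect tier:", worst_tier(baseline_ms, current_ms))  # -> db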
26. What metrics do we measure?
• Performance (business) metrics for tier isolation (most important, hardest to capture; see the sketch below)
  • Transaction response times
  • Transaction rates
  • Error rates
• Resource metrics for layer isolation
  • CPU
  • Memory
  • etc.
• VM metrics
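A minimal sketch of how the three key metrics fall out of a window of request records, assuming per-request durations and error flags are already available; the record format and the 60-second window are assumptions:

    # Minimal sketch: deriving the three key metrics (response time,
    # transaction rate, error rate) from one window of request records.
    # The record format and the 60 s window are assumptions; data is made up.
    from statistics import quantiles

    WINDOW_SECONDS = 60
    requests = [  # (duration_ms, is_error) observed in the window
        (240, False), (310, False), (1250, True), (280, False), (205, False),
    ]

    durations = [d for d, _ in requests]
    p95_ms = quantiles(durations, n=20)[-1]                  # 95th percentile response time
    tx_per_sec = len(requests) / WINDOW_SECONDS              # transaction rate
    error_rate = sum(1 for _, err in requests if err) / len(requests)  # error rate

    print(f"p95={p95_ms:.0f} ms  rate={tx_per_sec:.2f}/s  errors={error_rate:.1%}")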
27. What does monitoring look like?
● Target of test – an email notification system: multiple services orchestrated into a product.
● Performance dashboard examples from a test run.
● Performance dashboard examples from production.
35. Dashboard examples from production
• Dashboard 1:
How is the PDF rendering service performing?
• Dashboard 2:
Is Production email being delivered within SLAs?
38. What does monitoring look like?
• We looked at performance dashboards for
• a healthy test
• an unhealthy test
• unhealthy tier isolation
• production
39. Performance metrics are hard to come by
The presentation so far assumes you've got 'em.
If you aren't collecting them already…
…it's a big task to get them.
• Buy vs build
41. Performance MONITORING in your company
• Stick - Require dashboards for production release
• Carrot - Provide an easily adapted dashboard template (see the sketch below)
  All dashboards based on the same three key metrics
  Build enterprise infrastructure to support the templates
• Project owners could use them as is or customize
• Little burden means minimal resistance
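As an illustration of the carrot, here is a hypothetical dashboard template built around the same three key metrics; the metric naming scheme and field names are assumptions for illustration, not an existing API:

    # Hypothetical "easily adapted" dashboard template built around the same
    # three key metrics; a project owner would only swap in the service name
    # and SLA. The metric naming scheme is an assumption.
    def dashboard_template(service, sla_ms):
        return {
            "title": f"{service} performance",
            "panels": [
                {"metric": f"{service}.response_time.p95", "threshold_ms": sla_ms},
                {"metric": f"{service}.transaction_rate", "unit": "req/s"},
                {"metric": f"{service}.error_rate", "threshold_pct": 1.0},
            ],
        }

    print(dashboard_template("email-notify", sla_ms=500))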
43. How do you implement Performance ASSURANCE
• Prior to production release, the following performance
requirements would need to be satisfied.
• Pre-release Testing
• Peak period scenario
• Ramp up to first bottleneck test
• Performance Monitoring dashboards
• Let's talk about Pre-release testing, the other arrow in our quiver.
44. Requirements - Pre-Release Testing
• Peak period scenario
• Stakeholder provides SLA for peak production demand
• Performance test simulates peak production demand
while human users exercise the system to confirm
acceptable user experience
• PASS is given when the SLA is met OR stakeholders accept the results (see the sketch below).
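A sketch of that pass criterion, with placeholder numbers:

    # Sketch of the PASS rule for the peak period scenario: the percentile
    # measured under simulated peak demand must meet the stakeholder SLA.
    # Both numbers below are placeholders.
    sla_p95_ms = 800        # stakeholder SLA for peak production demand (assumed)
    measured_p95_ms = 742   # result of the peak-load test run (assumed)

    if measured_p95_ms <= sla_p95_ms:
        print("Peak period scenario: PASS")
    else:
        print("Peak period scenario: FAIL - needs stakeholder sign-off to release")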
45. Requirements - Pre-Release Testing
• Ramp up to first bottleneck test (see the sketch below)
  • Load is ramped up until the first significant bottleneck is found.
  • Useful for Ops to anticipate performance issues in production.
  • Provides information for the business to create formal SLAs.
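A rough sketch of the ramp-up loop, assuming a load-test tool you can drive programmatically; run_load_step() is a hypothetical hook, not a real API:

    # Sketch of a "ramp up to first bottleneck" loop: step the load up and
    # stop at the first significant degradation in p95 response time.
    # run_load_step() is a hypothetical hook into your load-test tool.
    def run_load_step(virtual_users):
        """Placeholder: drive one load step and return the measured p95 in ms."""
        raise NotImplementedError

    def find_first_bottleneck(start=50, step=50, max_users=2000, degrade_factor=2.0):
        baseline_p95 = run_load_step(start)
        for users in range(start + step, max_users + 1, step):
            p95 = run_load_step(users)
            if p95 > degrade_factor * baseline_p95:   # first significant knee in the curve
                return users, p95
        return None                                    # no bottleneck found within the range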
46. Requirements - Performance Monitoring Dashboards
• Performance Monitoring dashboards
  • Performance monitoring is operational
  • Resource monitoring is operational
  • VM monitoring is operational
47. Requirements - Performance Monitoring Dashboards
• Dashboards available in both Pre-PROD and PROD environments
  • Pre-PROD coverage provides feedback in Pre-Release testing.
  • Pre-PROD coverage provides a testing ground for PROD coverage.
  • Getting the right coverage will take practice.
48. How do you implement Performance ASSURANCE
• Prior to production release, the following performance requirements would need to be satisfied:
  • Pre-release Testing
  • Peak period scenario
  • Ramp up to first bottleneck test
  • Performance Monitoring dashboards
49. Mitigating performance risk
• We talked about two techniques for mitigating risk:
  • Pre-Release testing
  • Monitoring
• In practice, how does this theory map to performance problems?
• My experience says →
52. Obbie Pet
Sr Performance Engineer
Ticketmaster.com
Internet ticketing space 10+ years
Ticketmaster Blog: http://tech.ticketmaster.com/
Contact: ObbiePet@ticketmaster.com
or opet@qastrategy.com
55. Performance is important
● Latency matters. Amazon found every 100 ms of latency cost them 1% in sales. Google found an extra 0.5 seconds in search page generation time dropped traffic by 20%.
-- http://blog.gigaspaces.com/amazon-found-every-100ms-of-latency-cost-them-1-in-sales/
62. Example of a performance log file used by Kibana to generate performance metrics:

datetime="2013-12-12 11:59:59,538" severity="INFO " host="app6.template.jetcap1.coresys.tmcs" service_version="" client_host="10.72.4.75" client_version="" Correlation-ID="ab72d037-6362-11e3-80a4-f5667a7a5c6b" rid="" sid="" thread="Camel (337-camel-88) thread #42 - ServiceResolver" category="com.ticketmaster.platform.bam.strategies.PerformanceBAMStrategy" datetime="2013-12-12T11:59:59.538-08:00" bam="perf" dur="724" activity="template-call" camelhttppath="/template-notify-composite/rest/template-notify"
63. Performance monitoring - Technologies
• Selecting an appropriate monitoring technology is highly dependent on your specific environment. Below are the classes of monitoring technologies to consider for your solution.
• BUILD IT: custom code is needed to collect metrics; open source is leveraged for metric storage, analysis, and reporting.
  • SysLog harvesting: custom code pushes performance data to syslogs, which are then digested by log analyzers (Kibana, Splunk).
  • Tcollector agents: performance information is pushed to a time-series database (OpenTSDB); see the sketch below.
• BUY IT: end-to-end vendor monitoring solutions.
  • Network sniffers: network monitors or sniffers (OpNet).
  • Stitching: agent deployment required; transaction parts are pieced together from header info.
  • Transaction marking: agent deployment required; headers are inserted and then tracked.
  • JVM monitors: agent deployment usually required (Dynatrace, AppDynamics).
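For the tcollector route, a minimal sketch of a collector that emits one metric line per sample to stdout in the "metric timestamp value tags" shape that tcollector forwards to OpenTSDB; the metric name and tags here are made up:

    # Sketch of a tcollector-style collector: print one
    # "metric timestamp value tag=value ..." line per sample to stdout;
    # tcollector picks these up and forwards them to OpenTSDB.
    import time

    def emit(metric, value, **tags):
        tag_str = " ".join(f"{k}={v}" for k, v in tags.items())
        print(f"{metric} {int(time.time())} {value} {tag_str}")

    # Hypothetical metric derived from the dur/activity log fields above.
    emit("app.template_call.duration_ms", 724, host="app6", activity="template-call")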
64. E2E Monitoring tool vendors:
• BlueStripe http://bluestripe.com/
• Op-Tier http://www.optier.com/
• AppDynamics http://www.appdynamics.com/
• Dynatrace http://www.compuware.com/application-performance-management/dynatrace-enterprise.html
• The Gartner Group does a nice evaluation of this tool space.