Every company is under increasing pressure to deliver software faster and better. The question is: “How do I get started?” Continuous firefighting is definitely not the answer!
XebiaLabs and Dynatrace share a practical, step-by-step approach to optimizing your delivery process so you can deploy better-quality software faster.
Learn:
• Why you should move to a metric-driven pipeline
• Which key quality metrics to measure, and how to integrate them to catch problems earlier
• How to use, measure and report on these metrics
• How finding architectural and quality issues earlier reduces the cost of investigating them
How to Build a Metrics-optimized Software Delivery Pipeline
1. Building a Metrics-Optimized Pipeline #APMLive
Andi Grabner, Performance Advocate, Dynatrace (@grabnerandi)
Andrew Phillips, VP DevOps Strategy, XebiaLabs
2. Today’s agenda
1. Today’s unicorns: recap from Velocity & PERFORM 2015; why we don’t have to be like Facebook
2. Why unicorns excel: speed through automation, quality through metrics
3. Metrics and approach: service, business & user metrics; pre-prod, from integration and load tests; prod, from real users
4. Dynatrace and XL Release by XebiaLabs: building a metric-driven pipeline
4. How often do the unicorns deploy?
• 700 deployments / year
• 10+ deployments / day
• 50–60 deployments / day
• Every 11.6 seconds
6. • Waterfall to agile: a 3-year journey
• 220 apps, 1 deployment per month
• “EVERY manual tester does automation”
• “We don’t log bugs. We fix them.”
• Measures are built in & visible to everyone
• Promote your wins! Educate your peers.
• EVERYONE can do continuous delivery.
10. It’s not about blindly automating the push of more bad code through a shiny pipeline.
11. Example #1: public website based on SharePoint
• 879 SQL queries
• 8 missing CSS & JS files
• 340 calls to GetItemById
12. Example #2: migrated to (micro)services
• 26.7s execution time
• 33 calls to the same web service
• 171 SQL queries through LINQ by this web service, requesting similar data on each call
• Architecture violation: direct access to the DB from frontend logic
13. Key app metrics
• # SQL statements (INSERT, UPDATE, DELETE, …)
• # log messages
• # API calls (Hibernate, …)
• # exceptions
• Execution time
18. Quality metrics in Continuous Delivery

What you currently measure:
• # test failures
• Overall duration

What you should measure:
• Execution time per test
• # calls to APIs
• # executed SQL statements
• # web service calls
• # JMS messages
• # objects allocated
• # exceptions
• # log messages
• # HTTP 4xx/5xx responses
• Request/response size
• Page load/rendering time
• …
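The per-test metrics listed above can be collected alongside the usual pass/fail result. A minimal sketch of that idea, assuming a hypothetical instrumentation layer that increments the counters (in practice a monitoring agent such as Dynatrace would populate them):

```python
import time
from contextlib import contextmanager
from dataclasses import dataclass

# Counters a monitoring agent would capture per test; plain fields here
# so the sketch stays self-contained.
@dataclass
class TestMetrics:
    sql_statements: int = 0
    api_calls: int = 0
    exceptions: int = 0
    execution_time_ms: float = 0.0

@contextmanager
def measured_test(results, test_name):
    """Record quality metrics for one test run, not just pass/fail."""
    metrics = TestMetrics()
    start = time.perf_counter()
    try:
        yield metrics
    finally:
        metrics.execution_time_ms = (time.perf_counter() - start) * 1000
        results[test_name] = metrics

results = {}
with measured_test(results, "testPurchase") as m:
    m.sql_statements += 12   # the instrumentation would increment these
    m.api_calls += 3

assert results["testPurchase"].sql_statements == 12
```

Persisting one such record per test, per build, is what makes the build-over-build comparisons on the next slides possible.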
19.–26. Measures from your tests in action

The test framework alone tells you pass/fail per build:

Build     Test case      Status
Build 17  testPurchase   OK
          testSearch     OK
Build 18  testPurchase   FAILED
          testSearch     OK
Build 19  testPurchase   OK
          testSearch     OK

Build 18 identifies a regression; by build 19 the problem is solved. But let’s look behind the scenes and add architectural data from the monitoring framework:

Build     Test case      Status   # SQL  # Excep  CPU
Build 17  testPurchase   OK       12     0        120ms
          testSearch     OK       3      1        68ms
Build 18  testPurchase   FAILED   12     5        60ms
          testSearch     OK       3      1        68ms
Build 19  testPurchase   OK       75     0        230ms
          testSearch     OK       3      1        68ms
Build 20  testPurchase   OK       12     0        120ms
          testSearch     OK       3      1        68ms

• Build 18: the exceptions are probably the reason for the failed test.
• Build 19: the functional problem is fixed, but now we have an architectural regression (75 SQL statements instead of 12).
• Build 20: now we have both functional and architectural confidence.
27. One goal: deliver better features to customers faster
Two fundamental components: speed + quality
When, and how, should we measure?
• Not just in production!
• Measure as early as possible in your development and delivery process.
• Fast feedback is both more effective and cheaper at identifying and fixing problems.
• Reuse what you already have: convert your existing automated “functional” tests into architectural, performance and scalability validation tests.
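One way to make that conversion concrete: keep the functional test as-is, but fail it when agreed architectural limits are exceeded. This is a sketch with hypothetical helpers; `run_functional_test` stands in for your existing test runner and `collect_metrics` for a call into your monitoring tool’s API:

```python
# Hypothetical helpers: run_functional_test executes an existing automated
# test; collect_metrics would pull counts from the monitoring tool.
def run_functional_test(name):
    return {"testSearch": "OK"}.get(name, "FAILED")

def collect_metrics(name):
    return {"sql": 3, "exceptions": 1, "response_bytes": 18_000}

# Architectural limits agreed per test; exceeding them fails the build
# even when the test itself is functionally green.
LIMITS = {"sql": 10, "exceptions": 2, "response_bytes": 100_000}

def validated_test(name):
    """Pass only if the test is green AND within its architectural limits."""
    status = run_functional_test(name)
    metrics = collect_metrics(name)
    violations = [k for k, v in metrics.items() if v > LIMITS[k]]
    return status == "OK" and not violations

assert validated_test("testSearch")
```

No new tests are written; the existing suite simply starts enforcing more than correctness.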
29. When, and how, shall we measure?

Build/CI Analysis → Integration & Perf Analysis → User Analysis, with a feedback loop back:
• Build/CI analysis: service-level metrics
• Integration & performance analysis: service-level + business application metrics
• User analysis: service-level + business application + user experience metrics
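The widening scope per phase can be encoded directly in pipeline tooling, so each stage gates only on the metric categories it is responsible for. A minimal sketch (category names are illustrative, not a Dynatrace or XL Release schema):

```python
# Which metric categories each pipeline phase analyses; scope widens
# as the release moves toward production, per the slide above.
PHASE_SCOPE = {
    "build_ci": {"service"},
    "integration_perf": {"service", "business_application"},
    "user_analysis": {"service", "business_application", "user_experience"},
}

def metrics_for_phase(phase, all_metrics):
    """Return only the metrics a given phase should gate on."""
    scope = PHASE_SCOPE[phase]
    return {name: m for name, m in all_metrics.items() if m["category"] in scope}

all_metrics = {
    "sql_count":       {"category": "service", "value": 12},
    "conversion_rate": {"category": "business_application", "value": 0.031},
    "page_load_ms":    {"category": "user_experience", "value": 1400},
}

assert set(metrics_for_phase("build_ci", all_metrics)) == {"sql_count"}
assert len(metrics_for_phase("user_analysis", all_metrics)) == 3
```

Results from the later, wider phases feed back into the earlier ones, closing the loop the slide describes.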
32. Orchestrating your delivery pipeline with XL Release
• XebiaLabs XL Release is a pipeline orchestrator.
• It allows you to define, execute, track and improve all the tasks in your delivery pipeline.
• All = automated + manual, technical + process-oriented.
• Insight into, visibility of and control over both people and tools.
33. XL Release and Dynatrace
XL Release by XebiaLabs allows you to integrate Dynatrace quality metrics into your overall Continuous Delivery pipeline:
• Automatically verify architectural quality during your integration testing.
• Trigger and review performance monitoring results.
• Automatically register releases in Dynatrace to allow “before/after” comparisons of user behavior.
• Deliver better software and close the Continuous Delivery feedback loop!
34. Quality metrics in your unit & integration tests
Architectural quality metrics from unit & integration tests.

35. #1: Analyze every unit & integration test.
#2: Collect metrics for each test.
#3: Detect regressions based on those measures.
Unit and integration tests are auto-baselined; regressions are auto-detected!
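One common way auto-baselining works, sketched here under the assumption of a simple statistical baseline (mean plus a standard-deviation band over recent runs); the real product logic may differ:

```python
from statistics import mean, stdev

def is_regression(history, current, sigmas=3.0, min_runs=5):
    """Auto-baseline: flag the current value if it deviates from the
    historical mean by more than `sigmas` standard deviations."""
    if len(history) < min_runs:
        return False  # not enough data to form a baseline yet
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return current != mu
    return abs(current - mu) > sigmas * sd

# # SQL statements for testPurchase over the last builds
sql_history = [12, 12, 13, 12, 11, 12]
assert not is_regression(sql_history, 13)  # within normal variation
assert is_regression(sql_history, 75)      # the build-19 style outlier
```

The benefit over fixed thresholds is that nobody has to hand-maintain a limit per test and metric; the baseline adapts as the code evolves.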
37. Beyond unit & integration tests
• Architectural quality metrics from unit & integration tests
• Performance metrics from load tests
• Deployment markers to support production monitoring
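A deployment marker is typically just an event sent to the monitoring system at release time, so production charts can be split into “before” and “after”. A hedged sketch of the idea; the endpoint path and payload field names below are illustrative placeholders, not the actual Dynatrace API, so consult the Dynatrace events API documentation for the real schema:

```python
import json
import urllib.request

def deployment_event(version):
    """Build a deployment-marker payload. Field names are assumptions,
    not the real Dynatrace schema."""
    return {"eventType": "CUSTOM_DEPLOYMENT", "deploymentVersion": version}

def register_deployment(base_url, api_token, version):
    """POST the marker so production monitoring can compare behaviour
    before and after this release (not executed in this sketch)."""
    data = json.dumps(deployment_event(version)).encode()
    req = urllib.request.Request(
        f"{base_url}/api/events",  # hypothetical path
        data=data,
        headers={
            "Authorization": f"Api-Token {api_token}",
            "Content-Type": "application/json",
        },
    )
    return urllib.request.urlopen(req)
```

A pipeline orchestrator such as XL Release would call something like `register_deployment` as one of the final tasks of a release.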
38.–43. Load tests: finding hotspots per test easily
#1: Analyze load testing results by timer name, script name, …
#2: Which TIERS have a problem?
#3: Is it the DATABASE?
#4: How HEALTHY are the JVM and the host?
#5: Do we have any ERRORS?
#6: LOAD vs. RESPONSE TIME over time?
44. Load tests: we can compare two runs
Did we get better or worse? DB, web requests, API, …
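A simple, tool-agnostic sketch of that “did we get better or worse?” comparison, here on the 95th-percentile response time of two load-test runs (the percentile choice and tolerance are assumptions, pick what matters for your SLAs):

```python
from statistics import quantiles

def percentile(samples, p):
    """p-th percentile via statistics.quantiles (100 cut points)."""
    return quantiles(sorted(samples), n=100)[p - 1]

def compare_runs(baseline_ms, current_ms, p=95, tolerance=0.10):
    """Compare the p-th percentile response time of two load-test runs;
    allow `tolerance` relative slack before calling it a degradation."""
    base, cur = percentile(baseline_ms, p), percentile(current_ms, p)
    return {"baseline": base, "current": cur,
            "worse": cur > base * (1 + tolerance)}

baseline = [100, 110, 120, 130, 140] * 20  # response times in ms
slower   = [150, 160, 170, 180, 190] * 20

assert compare_runs(baseline, slower)["worse"]
assert not compare_runs(baseline, baseline)["worse"]
```

The same comparison can be run per timer, per tier, or per database query to pinpoint where a run got worse, which is exactly what the drill-downs on the previous slides do.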
48. A metric-driven Continuous Delivery pipeline
• Architectural quality metrics from unit & integration tests
• Performance metrics from load tests
• Deployment markers to support production monitoring
• Review user behavior through UEM data
50. Level up with the Dynatrace Free Trial & Personal License: http://bit.ly/dtpersonal

51. Download XL Release from XebiaLabs
Automate, orchestrate and get visibility into your release pipelines, at enterprise scale.
Start a free 30-day trial today: http://bit.ly/TryXLRelease
55. Summary
• Continuous Delivery = speed + quality
• 3 levels of metrics: service, business and user
• Dynatrace can already collect these metrics for you
• Get started today: incorporate quality metrics into your release pipeline straight away using tools like XL Release by XebiaLabs
56. One goal: deliver better features to customers faster
Two fundamental components: speed + quality
57. Resources
• Check out the XebiaLabs & Dynatrace blogs and plugins
• Test-drive the Dynatrace Personal License: http://bit.ly/dtpersonal
• Try XL Release by XebiaLabs: http://bit.ly/TryXLRelease
• More resources: http://bit.ly/XLGuide, http://bit.ly/XLDynatrace
58. Thank you! Time for Q&A
Andrew Phillips, VP DevOps Strategy, XebiaLabs: http://blog.xebialabs.com/
Andi Grabner (@grabnerandi): http://blog.dynatrace.com
59. Connect with us!
Participate in our forum:
• community.dynatrace.com
Follow us on Twitter:
• twitter.com/dynatrace
• twitter.com/xebialabs
Like us on Facebook:
• facebook.com/dynatrace
• facebook.com/xebialabs
Follow us on LinkedIn:
• linkedin.com/company/dynatrace
• linkedin.com/company/xebialabs
Watch our videos & demos:
• youtube.com/dynatrace
• youtube.com/xebialabs
Read our blogs:
• application-performance-blog.com
• blog.xebialabs.com