There’s no one-size-fits-all approach to metrics. In this session, Cliff Crocker and I walk through various metrics that answer performance questions from multiple perspectives — from designer and DevOps to CRO and CEO. You’ll walk away with a better understanding of your options, as well as a clear understanding of how to choose the right metric for the right audience.
8. Start render, DNS, TCP, TTFB, DOM loading, DOM ready, Page load, Fully loaded, User timing, Resource timing, Requests, Bytes in, Speed Index, PageSpeed score, 1s = $$, DOM elements, DOM size, Visually complete, Redirect, SSL negotiation
22. Technology for collecting performance metrics
directly from the end user’s browser
Involves instrumenting your site via JavaScript
Measurements are fired across the network to a
collection point through a small request object
(beacon)
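As a rough sketch of that beaconing flow (the `/beacon` endpoint and field choices here are illustrative, not from the talk):

```javascript
// Sketch: turn Navigation Timing values into a small payload and fire it
// at a hypothetical collection endpoint ("/beacon").
function buildBeaconPayload(t) {
  // Convert absolute epoch timestamps into durations since navigation start.
  return {
    ttfb: t.responseStart - t.navigationStart,
    load: t.loadEventStart - t.navigationStart
  };
}

function fireBeacon(payload) {
  var body = JSON.stringify(payload);
  if (navigator.sendBeacon) {
    // Queues a small async POST that survives page unload.
    navigator.sendBeacon('/beacon', body);
  } else {
    // Fallback: a GET "pixel" request for older browsers.
    new Image().src = '/beacon?d=' + encodeURIComponent(body);
  }
}
```

In a browser you would call `buildBeaconPayload(window.performance.timing)` once the load event has fired.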
23. What makes RUM great
Always on
Every user, every browser, everywhere
Able to capture human behavior/events
Only getting better
30. Uses automated agents (bots) to measure your
website from different physical locations
A set “path” or URL is defined
Tests are run either ad hoc or scheduled,
and data is collected
31. What makes synthetic monitoring great
Rich data collected (waterfall, video,
filmstrip, HTTP headers)
Consistent “clean room” baseline
Nothing to install
Doesn’t require real users: can measure
pre-production environments and
competitors
51. I need visibility into…
issues with authoritative DNS servers
impact of denial of service attacks
on end users
efficiency of connection re-use
tier 1 connectivity issues (load balancer,
web server)
53. Measuring DNS and TCP
function getPerf() {
  var timing = window.performance.timing;
  return {
    dns: timing.domainLookupEnd - timing.domainLookupStart,
    connect: timing.connectEnd - timing.connectStart
  };
}
54. What’s with all those zeros!
Connection reuse
DNS caching
Prefetching
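Those zero values can be detected and interpreted in code; a minimal sketch (field names are from Navigation Timing, and the helper takes the timing object as a parameter so it can run outside a browser):

```javascript
// Sketch: interpret zero durations in Navigation Timing. Pass
// window.performance.timing in a browser; a plain object with the
// same fields works for testing.
function classifyConnection(t) {
  var dns = t.domainLookupEnd - t.domainLookupStart;
  var tcp = t.connectEnd - t.connectStart;
  return {
    dns: dns,
    tcp: tcp,
    dnsCached: dns === 0,        // resolver cache hit or prefetch
    connectionReused: tcp === 0  // keep-alive connection was reused
  };
}
```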
57. How do I…
understand the impact of back-end
versus front-end on page speed?
investigate how DOM complexity affects
performance?
measure a “fully loaded” page?
65. Understanding critical rendering path
DOM Loading – browser begins parsing initial
bytes of the document
DOM Interactive – doc parsed, DOM has been
constructed
DOM Content Loaded – No blocking style sheets
DOM Complete – All processing complete, all
assets downloaded
https://developers.google.com/web/fundamentals/performance/critical-rendering-path
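The four milestones above can be pulled from Navigation Timing; a sketch (the helper takes the timing object as a parameter so it can be exercised with a plain object):

```javascript
// Sketch: derive critical-rendering-path milestones (ms since navigation
// start) from a Navigation Timing object. In a browser, pass
// window.performance.timing.
function crpMilestones(t) {
  var start = t.navigationStart;
  return {
    domLoading: t.domLoading - start,              // parsing begins
    domInteractive: t.domInteractive - start,      // DOM constructed
    domContentLoaded: t.domContentLoadedEventStart - start,
    domComplete: t.domComplete - start             // all assets downloaded
  };
}
```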
68. I need to understand…
how third-party content affects my
performance
how long a group of assets takes to
download
how assets served by a CDN are
performing
71. Browser support for Resource Timing
http://caniuse.com/#feat=resource-timing
72. Cross-Origin Resource Sharing (CORS)
Start/End time only unless Timing-Allow-Origin
HTTP Response Header defined
Timing-Allow-Origin = "Timing-Allow-Origin" ":" origin-list-or-null | "*"
73. ResourceTiming
var rUrl = 'http://www.akamai.com/images/img/cliff-crocker-dualtone-150x150.png';
var me = performance.getEntriesByName(rUrl)[0];
var timings = {
  loadTime: me.duration,
  dns: me.domainLookupEnd - me.domainLookupStart,
  tcp: me.connectEnd - me.connectStart,
  waiting: me.responseStart - me.requestStart,
  fetch: me.responseEnd - me.responseStart
};
Measuring a single resource
74. Other uses for ResourceTiming
Slowest resources
Time to first image (download)
Response time by domain
Time a group of assets
Response time by initiator type (element type)
Cache-hit ratio for resources
For examples see: https://github.com/lognormal/beyond-page-metrics
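For instance, the "slowest resources" idea from the list above fits in a few lines (illustrative only; in a browser the entries would come from performance.getEntriesByType('resource')):

```javascript
// Sketch: rank resource timing entries by duration and keep the top n.
// Takes the entry list as a parameter so it can run outside a browser.
function slowestResources(entries, n) {
  return entries
    .slice() // copy so the timing buffer isn't reordered
    .sort(function (a, b) { return b.duration - a.duration; })
    .slice(0, n)
    .map(function (e) { return { name: e.name, duration: e.duration }; });
}
```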
75. Using Resource Groups
[Charts: in one case, the PLT impact is not due to the measured resource groups; in the other, the PLT impact correlates with improvement from the image domains.]
77. I just need to understand…
when users perceive the page to
be ready
how long until my page begins
to render
when content above the fold is visible
79. The fallacy of “First Paint” in the wild
Support for First Paint is not standardized across browsers
The metric can be misleading (e.g. painting a blank white screen)
80. First Paint is not equal to Start Render!
[Filmstrip comparison: Chrome’s “First Paint” vs. the true Start Render]
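Where a Paint Timing API is available (it was standardized in Chromium-based browsers after this era), paint entries can at least be read programmatically. A sketch; the helper takes the entry list so a browser would pass performance.getEntriesByType('paint'):

```javascript
// Sketch: pull "first-paint" out of a list of PerformanceEntry-like
// objects. A plain array works for testing outside a browser.
function firstPaintTime(paintEntries) {
  var entry = paintEntries.filter(function (e) {
    return e.name === 'first-paint';
  })[0];
  return entry ? entry.startTime : null;
}
```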
82. UserTiming Interface
Allows developers to measure performance of
their applications through high-precision
timestamps
Consists of “marks” and “measures”
PerformanceMark: Timestamp
PerformanceMeasure: Duration between
two given marks
84. Measure duration between two marks
performance.mark("startTask");
// Some stuff you want to measure happens here
performance.mark("stopTask");
// Measure the duration between the two marks
performance.measure("taskDuration", "startTask", "stopTask");
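One way to package this mark/measure pattern is a small wrapper (a sketch; `perf` is any User Timing implementation, e.g. window.performance in a browser or require('perf_hooks').performance in Node):

```javascript
// Sketch: wrap a function call in mark/measure and return both its
// result and the recorded duration in milliseconds.
function timed(perf, name, fn) {
  perf.mark(name + ':start');
  var result = fn();
  perf.mark(name + ':end');
  perf.measure(name, name + ':start', name + ':end');
  var entry = perf.getEntriesByName(name).pop();
  return { result: result, duration: entry.duration };
}
```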
85. How long does it take to display the main product image on my site?
86. Record when an img loads
<img src="image1.gif" onload="performance.mark('image1')">
For more interesting examples, see:
Measure hero image delay
http://www.stevesouders.com/blog/2015/05/12/hero-image-custom-metrics/
Measure aggregate times to get an “above fold time”
http://blog.patrickmeenan.com/2013/07/measuring-performance-of-user-experience.html
94. Measuring SPAs
• Accept the fact that onload no longer matters
• Tie into routing events of SPA framework
• Measure downloads during soft refreshes
• (support in boomerang.js for Angular and other
SPA frameworks)
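The "tie into routing events" idea can be sketched by wrapping history.pushState (a toy illustration; boomerang.js does this far more robustly):

```javascript
// Sketch: wrap history.pushState so each SPA "soft navigation" triggers
// a callback where measurement could start. Takes the history object as
// a parameter so it can be exercised with a stub.
function instrumentRouting(hist, onRouteChange) {
  var original = hist.pushState;
  hist.pushState = function (state, title, url) {
    var result = original.apply(hist, arguments);
    onRouteChange(url); // e.g. performance.mark('route:' + url)
    return result;
  };
}
```

In a browser this would be `instrumentRouting(window.history, startSoftNavTimer)`, where `startSoftNavTimer` is whatever hook your RUM setup provides.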
97. I need to understand…
how performance affects business KPIs
how our site compares to our
competitors
107. Not all pages are created equal
For a typical ecommerce site, conversion rate drops by up to 50% when “browse” page load times increase from 1 to 6 seconds
108. Not all pages are created equal
However, there is much less impact on conversion when “checkout” pages degrade
109. What is the Conversion Impact Score?
The Conversion Impact Score (CIS) is a relative score that ranks page groups
by their propensity to negatively impact conversions due to high load times.
For each page group, the Conversion Impact Score is calculated using the
proportion of overall requests that are associated with that group, along with
the Spearman Ranked Correlation between its load times and number of
conversions. The Conversion Impact Score will always be a number between
-1 and 1, though scores much greater than zero should be very rare. The more
negative the score, the more detrimental to conversions that high load times
for that page group are, relative to the other page groups.
110. TL;DR
The Conversion Impact Score answers this question:
How much impact does the performance
of this page have on conversions?
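To make the Spearman piece concrete, here is an illustrative calculation (not Akamai’s actual implementation): rank both series, then take the Pearson correlation of the ranks. The request-share weighting from the full score is omitted, and tied ranks are not averaged.

```javascript
// Assign 1-based ranks to a numeric series (ties get the lowest rank
// here; a real implementation averages tied ranks).
function ranks(values) {
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  return values.map(function (v) { return sorted.indexOf(v) + 1; });
}

// Plain Pearson correlation coefficient.
function pearson(x, y) {
  var n = x.length;
  var mx = x.reduce(function (s, v) { return s + v; }, 0) / n;
  var my = y.reduce(function (s, v) { return s + v; }, 0) / n;
  var num = 0, dx = 0, dy = 0;
  for (var i = 0; i < n; i++) {
    num += (x[i] - mx) * (y[i] - my);
    dx += Math.pow(x[i] - mx, 2);
    dy += Math.pow(y[i] - my, 2);
  }
  return num / Math.sqrt(dx * dy);
}

// Spearman ranked correlation: Pearson over the ranks. Applied to load
// times vs. conversions, -1 means conversions fall strictly as load
// times rise.
function spearman(x, y) {
  return pearson(ranks(x), ranks(y));
}
```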
By 2015, GQ found that its average load time had grown to a sluggish 7 seconds.
The solution: a reboot that targeted ads and other third-party tags and features, as well as a migration to a single CMS.
The newly streamlined site reduced server calls by 400% and ultimately cut load times by 80%, down to just under 2 seconds.
83% traffic increase (from 6 million to 11 million unique visitors)
32% increase in time on site (from 5.9 minutes to 7.8 minutes)
108% increase in interaction rate with ads
In this example, I’ve shown the impact of performance (Page Load time) on the Bounce rate for two different groups of sites.
Site A: A collection of user experiences for Specialty Goods eCommerce sites (luxury goods)
Site B: A collection of user experiences for General Merchandise eCommerce sites (commodity goods)
Notice the users’ patience levels after 6s for each site. Users of a specialty goods site, who have fewer alternatives, tend to be more patient, while users of a general merchandise site, who have other options, continue to abandon at the same rate.
The relationship between speed and behavior is even more noticeable when we look at conversion rates between the two sites. Notice how quickly users will decide to abandon a purchase for Site A, versus B.
Another important aspect is to realize that not all pages are created equal.
In this study of retail sites, we found that pages at the top of the funnel (“browse” pages such as Home, Search, and Product) were extremely sensitive to user dissatisfaction.
As these pages slowed from 1s to 6s, we saw a 50% decrease in the conversion rate!
However, when we looked at pages deeper in the funnel, like Login and Billing (“checkout” pages), users were more patient and committed to the transaction.