Ideal web page performance
How do you maximize content views given the minimal attention span of your viewers?
Impact of page performance on Business metrics
Profiling an HTTP request
Browser Architecture, Critical Rendering Path
Applying FFSUx to get optimal webpage performance.
The average page size for the top 1000 websites was found to be around 1999 KB in 2015 – a 40% increase over the previous year
Modelling Web Performance Optimization - FFSUx
Everything you ever need to know about Front-end Performance Optimization – Tools, Techniques and Methodology
Haribabu Nandyal (email@example.com)
“Browsing should be as simple and as fast as turning a page in …”
Performance impact on business metrics
Website       Response time change     Business impact
Amazon        +100 ms                  1% drop in sales
Yahoo         +400 ms                  5-9% drop in requests
Google        +500 ms                  20% drop in requests
Bing          +2000 ms                 4.3% drop in revenue/user
Shopzilla     -4800 ms                 7-12% increase in revenue, 25% increase in page views
Mozilla       -2200 ms                 15.4% increase in downloads
Google Maps   reduced file volume      30% increase in map requests
Walmart       -100 ms                  1% increase in revenue
Performance impact on business metrics
• Higher search ranking resulting in increased Customer traffic
• Higher page views.
• Better Conversion Rate resulting in increased Revenue
• Improved Ux and higher Customer Satisfaction
• Retention of Customers
• Reduced cost (Cost Optimization e.g: Infrastructure costs)
Business need for great Performance!
Why the slowdown?
The median ecommerce page contains 99 resources (images, JS, CSS, etc.). A year ago
it was 93 resources; each of these additional resources incurs latency and adds up
to slower load times.
The median page size is 1436 KB, up from 1094 KB a year ago – an increase of 31%.
Increase in the average number of requests and page size
Anatomy of an HTTP Request
Profiling a HTTP Request
85-90% of website rendering depends on the performance of the network and the client side
Client Side Performance
• Earlier, website performance was only about optimizing the server side and reducing the generation time of the HTML. But the server side does not seem to be the main issue.
• Yahoo found that, on average, only 10-20% of the loading time is spent on the server side; 80-90% is spent on the client side
Average loading time of a website
Web Page Generation – Desktop vs
HTTP measurements of where time is spent in generating a page, for the top 50k sites.
Usual suspects for Performance
Performance bottlenecks can arise from any or all of these levels:
• Server layer
• Application Architecture/Coding/Design
• Network latency
• Database performance
• Operating Systems
Performance Tuning – zone of focus
User Interface: address bar, back/forward buttons, etc.
Browser Engine: the “bridge” between the user interface (UI) and the rendering
engine; hosts plug-ins, extensions, and add-ons.
Rendering Engine: renders the content of a given URL and generates the layout that
is displayed in the UI. So HTML parsers, XML parsers, and the JS interpreter are the
key components of the Rendering Engine.
Data Persistence: stores data on the client machine, e.g. cookies, HTML5 storage, cache
1. Starts receiving the data; the first packet is about 14 KB.
2. Parses the HTML and starts constructing the DOM.
3. Starts downloading assets (CSS, images, JS) – in the same order as specified in the HTML source code.
4. Parses the CSS and constructs the CSSOM.
5. Constructs the render tree (DOM + CSSOM).
6. Calculates layout size and position.
7. Paints and composites the layers.
Submitted the request? – Let us go behind the scenes
Parsing of the HTML document is what constructs the Document Object Model (DOM). In
parallel, the CSS Object Model (CSSOM) is constructed from the specified stylesheets.
DOM and CSSOM are then combined to create the "render tree," at which point the
browser has enough information to perform a layout and paint something to the screen.
Critical Rendering Path
JS execution blocks DOM parsing and construction. Similarly, scripts can query for a
computed style, which forces the browser to wait until the CSSOM is ready.
Since JS can change the DOM and the CSSOM, when the browser sees a <script> tag it
will block downloading of other assets until the JS has been downloaded and executed
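One common way to keep scripts out of the critical rendering path is the standard async and defer attributes; a minimal sketch (the file names are hypothetical):

```html
<!-- Parser-blocking: HTML parsing stops until app.js is fetched and executed -->
<script src="app.js"></script>

<!-- async: downloads in parallel, executes as soon as it arrives (order not guaranteed) -->
<script async src="analytics.js"></script>

<!-- defer: downloads in parallel, executes in document order after parsing finishes -->
<script defer src="app.js"></script>
```

async suits independent scripts like analytics; defer suits scripts that depend on the DOM or on each other.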
Tools, Techniques and Methodology
FFSUx: Faster content delivery, Fewer in count, Smaller in size, good User eXperience
Faster content delivery: - Reduce Round Trip Time (RTT)
Reduce DNS lookup
Use a CDN (Amazon CloudFront, MaxCDN, Limelight, Akamai)
Prefetch and Postfetch
Domain Sharding and Parallelize requests
Use <link> tag instead of @import
Externalize JS and CSS
Fewer in count: - Reduce the number of HTTP requests (CSS/JS/images)
Reduction in number of Round Trips
Remove duplicate scripts
Configure Entity tags
Set expiry dates / Add an expires header
Smaller in size:
Compression of html/CSS/JS (gzip, deflate)
Proper image format (Jpeg, WebP, PNG?)
Minification of html/JS
good User Experience: -
Time To Interact (TTI)
CSS on top
JS in the bottom
Above the fold rendering
Progressive Image loading
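Above-the-fold rendering and progressive image loading can be sketched with the standard loading attribute (file names are hypothetical):

```html
<!-- Above-the-fold hero image: fetched immediately -->
<img src="hero.jpg" alt="Hero" width="800" height="400">

<!-- Below-the-fold images: the browser defers fetching until the user scrolls near them -->
<img src="gallery-1.jpg" alt="Gallery item" loading="lazy" width="400" height="300">
<img src="gallery-2.jpg" alt="Gallery item" loading="lazy" width="400" height="300">
```

Explicit width/height attributes also let the browser reserve layout space, avoiding reflows as images arrive.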
Request without DNS Prefetch
Total time of request = DNS Lookup + Initial Connection + Time to First Byte + Content Download.
Request with DNS Prefetch
Total time of request = Initial Connection + Time to First Byte + Content Download.
Using DNS prefetch can reduce DNS lookup time by pre-resolving DNS (Domain Name
System) names. Google PageSpeed has incorporated DNS prefetch (pre-resolving DNS) as
one of its performance improvement methods.
Typical DNS lookup time is from 20 ms to 120 ms.
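DNS prefetch is declared with a standard link relation in the document head; a sketch (the hostnames are made up):

```html
<head>
  <!-- Resolve third-party hostnames early, before any resource on them is requested -->
  <link rel="dns-prefetch" href="//cdn.example.com">
  <link rel="dns-prefetch" href="//fonts.example.com">
</head>
```

This pays off most for domains referenced late in the page (fonts, analytics, ad servers), where the lookup would otherwise happen on the critical path.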
Reduce DNS lookups
DNS Server Hierarchy
Most requests are now handled by the CDN, so the origin server can handle a greater
number of users, since each user makes fewer requests to the main server.
How does a CDN speed up resource download?
Preloading of Components
• Preload components you’ll need in the future
• Unconditional preload: the Xero login page preloads all core components so that the dashboard experience is better
• Conditional preload: often based on where you think a user might go next
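Both kinds of preloading can be hinted with standard link relations; a sketch (the URLs are hypothetical):

```html
<!-- Unconditional: on the login page, fetch the dashboard bundle the user will need next -->
<link rel="prefetch" href="/static/dashboard.js">

<!-- Preload: a resource needed later on *this* page, fetched early at high priority -->
<link rel="preload" href="/static/main.css" as="style">
```

prefetch is a low-priority hint for a likely next navigation; preload is a high-priority fetch for the current page.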
Parallelize downloads across hostnames
Browser        Max connections per hostname
Opera Mini     11
Firefox 27     6
Chrome 34      6
Safari 7.0.1   6
iPad 5         6
iPhone 5       6
Android 4      6
IE 10          8
IE 11          13
When CSS @import is used from an external stylesheet, the browser is
unable to download stylesheets in parallel. This adds additional
round-trips to the overall page load. If first.css contains:
@import url("second.css");
the browser must download, parse, and execute first.css before it is
able to discover that it needs to download second.css.
Use the <link> tag instead of @import:
<link rel="stylesheet" href="first.css">
<link rel="stylesheet" href="second.css">
This allows the browser to download stylesheets in parallel, which
results in faster page load times.
CSS @import vs <link>
Reduce HTTP Requests
Combine multiple JS files into one; repeat the same for CSS:
How spriting works:
• Combines all small background images into one large image
• Uses CSS background-position to control which part of the sprite is displayed for each element
Tools such as SpriteMe (spriteme.org) and csssprites.com automate this: they find
background images, group them into sprites, generate the sprite, inject it into the
current page, and recompute the CSS background-position.
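The sprite technique boils down to one combined image plus background-position offsets; a sketch (the file name and coordinates are made up):

```html
<style>
  /* One HTTP request fetches icons.png, which contains all the small icons side by side */
  .icon { display: inline-block; width: 16px; height: 16px;
          background-image: url("icons.png"); }
  /* Each class shifts the sprite so only the wanted 16x16 region shows */
  .icon-home   { background-position: 0 0; }
  .icon-search { background-position: -16px 0; }
  .icon-cart   { background-position: -32px 0; }
</style>
<span class="icon icon-home"></span>
<span class="icon icon-search"></span>
<span class="icon icon-cart"></span>
```

Three icons, one request – instead of three separate image downloads.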
• Web servers can be configured to send an ETag (entity tag) header in file
responses. The ETag can be an MD5 hash of the file. The browser will send the ETag
from its cache in its next request for the same file.
• The server checks whether the ETag is still valid (the file hasn’t changed). If
valid, the server only sends a 304 (Not Modified) response without sending the file
again. If the file has changed, the server sends the whole file as usual. This
technique can save a lot of traffic, but there is still an HTTP request to be made.
• Use Expires header to tell the browser how long to keep the resource.
• Browsers won't fetch the resource again until its expiry.
• This is perfect for static files when it’s known that they won’t change
during the time specified in the Expires header.
Impact of Cookies on Response Time
Cookie Size Time Delta
0 bytes 78 ms 0 ms
500 bytes 79 ms +1 ms
1000 bytes 94 ms +16 ms
1500 bytes 109 ms +31 ms
2000 bytes 125 ms +47 ms
2500 bytes 141 ms +63 ms
3000 bytes 156 ms +78 ms
Website   Total cookie size
Amazon 60 bytes
Google 72 bytes
Yahoo 122 bytes
CNN 184 bytes
YouTube 218 bytes
MSN 268 bytes
eBay 331 bytes
MySpace 500 bytes
Cookie sizes across websites
Reduce cookie weight
• Cookies are sent back with every request
• Keep cookie size small and only store what’s required – use
server-side storage for everything else
• Consider cookie free domains for static content
Keep it clean
• Always asynchronous
• Use JSON over XML
Accessing JSON is faster and cheaper
Less overhead compared to XML
• Use GET requests over POSTs wherever possible
POST is a 2-step process: send headers, send body
POST without a body is essentially a GET anyway
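The three bullets above – asynchronous, JSON, GET – combine into one small sketch (the endpoint is hypothetical; fetch is the standard browser API):

```html
<script>
  // Asynchronous GET request for JSON: no page blocking, no POST header/body round-trip
  fetch("/api/products?page=1")           // GET is the default method
    .then(response => response.json())    // parse the lightweight JSON payload
    .then(products => console.log(products.length));
</script>
```

Because the call is asynchronous, the page stays responsive while the request is in flight.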
• Add an Expires header
Not just for images – should be used on all static content
Set a “Never expire” or far future expires policy if you can
Reduces HTTP requests – once component is served, the browser
never asks for it again
Date stamping in file names makes it easier
“Empty cache” means the browser has to request the components
instead of pulling them from the browser disk cache.
Maximize the cache
             Original size   Gzip size   Savings   Deflate size   Savings
Script       3.3K            1.1K        67%       1.1K           66%
Script       39.7K           14.5K       64%       16.6K          58%
Stylesheet   1.0K            0.4K        56%       0.5K           52%
Stylesheet   14.1K           3.7K        73%       4.7K           67%
Gzip compresses better and is supported by more browsers.
Gzip vs. Deflate
Spot the difference
<img src="photos/awesome_cat.png" width="800">
awesome_cat.png 350 KB
awesome_cat.jpg 80 KB
awesome_cat.webp 60 KB
Note: .webp is supported on Chrome/Opera/Android browsers.
Servers can do a UA or Accept check to send the .jpg or .webp image
based on the browser.
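Instead of UA sniffing on the server, the markup itself can offer WebP with a JPEG fallback via the standard picture element (file names taken from the example above):

```html
<picture>
  <!-- Browsers that understand WebP pick the 60 KB file -->
  <source srcset="photos/awesome_cat.webp" type="image/webp">
  <!-- Everyone else falls back to the 80 KB JPEG -->
  <img src="photos/awesome_cat.jpg" alt="Awesome cat" width="800">
</picture>
```

The browser chooses the first source it supports, so no server-side branching is needed.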
Images are one of the greatest impediments to performance – either in the wrong
format, not optimized to load progressively, or all of the above.
Minification is the process of removing all unnecessary characters from source code
without changing its functionality. This includes whitespace, newline characters,
comments, and sometimes block delimiters, which add readability to the code but are
not required for execution.
Minify all static content
Reduced 24% from the original size. Tools: JSMin, CSSTidy, YUI Compressor
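A before/after sketch of what minification strips (the rule itself is made up for illustration):

```html
<style>
  /* Before minification: whitespace, comments, long-hand values */
  .nav-bar {
      margin-top: 0px;   /* flush against the header */
      color: #ffffff;
  }
</style>
<!-- After minification: same behavior, far fewer bytes -->
<style>.nav-bar{margin-top:0;color:#fff}</style>
```

Minification stacks with gzip: minify first, then compress the result.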
Industry Observations: Alexa
Analysis of the Alexa top 1000 sites found:
42% don't gzip
44% have more than 2 CSS files
56% serve CSS from a cookied domain
62% don't minify
31% have more than 100 KB of CSS
Top 1000 retail sites:
50% aren't doing both keep-alive and compression
Ten fastest websites based on TTI
Time to Interact (TTI) is an interesting metric for user interaction.
TTI is defined as the point at which a page’s primary content has
been rendered and is ready for interaction.
CSS external and on top
● Move all inline CSS to external style sheets for both reuse and caching
● Put all style sheet references at the top
JS at the bottom
• Promotes better script design
• Push scripts as low as possible
• Scripts will block both downloading and rendering until parsed
• Remove duplicate scripts (IE has a habit of downloading them again)
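The placement rules above give a page skeleton like this (file names are hypothetical):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Stylesheets first: rendering can start as soon as the CSSOM is ready -->
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <p>Visible content renders without waiting for any script.</p>
  <!-- Scripts last (and deferred): they no longer block parsing and rendering -->
  <script src="app.js" defer></script>
</body>
</html>
```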
Tools to Grade performance of a website
10 User Engagement Metrics
• Time to Start Render
• Time to Display
• Time to Interact
• Time on Site
• Pages Per Visit
Yahoo’s YSlow is a plugin for Firebug that can test a website for many basic
optimizations. The tests are based on the best practices from Yahoo’s Performance
team. It grades performance – not response times, but how well a site has adopted
the suggested techniques.
Google Page Speed
Google Page Speed is also a Firebug plugin, based on performance rules. It
also includes minifying of HTML, CSS and JS files.
Premature Optimization is the root of all evil
Performance is not a checklist, it is a continuous process
Performance Tuning = Front End + Back End