NGINX, Inc. 2017
High Availability Content
Caching with NGINX
Kevin Jones
Technical Solutions Architect
Quick intro to…
• NGINX
Caching with NGINX
• How caching functionality works
• How to enable basic caching
Advanced caching with NGINX
• When and how to enable micro-caching
• How to architect for high availability
• Various configuration tips and tricks
2
Agenda
MORE INFORMATION AT NGINX.COM
Solves Complexity
Load Balancer · Reverse Proxy · Web Server · Content Cache · Streaming Media
[Slide figures: total sites running on NGINX; share of the Top 10,000 most visited websites; share of all instances on Amazon Web Services]
NGINX Configuration Overview
9
10
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
upstream api-backends {
server 10.0.1.11:8080;
server 10.0.1.12:8080;
}
server {
listen 10.0.1.10:80;
server_name example.com;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ^~ /api {
proxy_pass http://api-backends;
}
}
include /path/to/more/virtual_servers/*.conf;
}
nginx.org/en/docs/dirindex.html
Configuration contexts highlighted above: main context, events context, http context, upstream context, server context, location context (stream context not shown)
(The next slides repeat the configuration above, highlighting in turn its directives, their parameters, and the variables used to customize the access log; see nginx.org/en/docs/varindex.html for the variable index.)
14
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
map $http_user_agent $dynamic {
"~*Mobile" mobile.example.com;
default desktop.example.com;
}
server {
listen 10.0.1.10:80;
server_name example.com;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
location ^~ /api {
proxy_pass http://$dynamic;
}
}
include /path/to/more/virtual_servers/*.conf;
}
nginx.org/en/docs/varindex.html
used later
dynamic variables
15
The Basics of Content Caching
16
1) Client initiates a request (e.g. GET /file).
2) Proxy cache determines whether the response is already cached; if not, it fetches it from the origin server.
3) Origin server serves the response along with all cache control headers (e.g. Cache-Control, ETag, etc.).
4) Proxy cache caches the response and serves it to the client.
17
Cache Headers
• Cache-Control - used to specify directives for caching mechanisms in both requests and responses. (e.g. Cache-Control: max-age=600 or Cache-Control: no-cache)
• Expires - contains the date/time after which the response is considered stale. If the response has a Cache-Control header with the "max-age" or "s-maxage" directive, the Expires header is ignored. (e.g. Expires: Wed, 21 Oct 2015 07:28:00 GMT)
• Last-Modified - contains the date and time at which the origin server believes the resource was last modified. HTTP dates are always expressed in GMT, never in local time. Less accurate than the ETag header. (e.g. Last-Modified: Wed, 21 Oct 2015 07:28:00 GMT)
• ETag - an identifier (or fingerprint) for a specific version of a resource. (e.g. ETag: "58efdcd0-268")
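As a rough illustration of how a cache applies these headers, here is a minimal Python sketch (not NGINX's implementation) that decides whether a cached response is still fresh based on the Cache-Control max-age directive:

```python
import re

def is_fresh(cache_control: str, stored_at: float, now: float) -> bool:
    """Return True if a cached response is still fresh per Cache-Control.

    Only handles the max-age, no-cache and no-store directives; a real
    cache also considers Expires, s-maxage, validators (ETag), etc.
    """
    if "no-cache" in cache_control or "no-store" in cache_control:
        return False  # must revalidate / must not serve from cache
    match = re.search(r"max-age=(\d+)", cache_control)
    if match is None:
        return False  # no freshness lifetime given
    # Fresh while the cached copy's age is within max-age seconds
    return (now - stored_at) <= int(match.group(1))

# A response cached 300 seconds ago with max-age=600 is still fresh:
print(is_fresh("max-age=600", stored_at=0.0, now=300.0))  # True
print(is_fresh("max-age=600", stored_at=0.0, now=700.0))  # False
print(is_fresh("no-cache", stored_at=0.0, now=0.0))       # False
```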
18
Content Caching with NGINX is Simple
19
proxy_cache_path
proxy_cache_path path [levels=levels] [use_temp_path=on|off] keys_zone=name:size [inactive=time]
[max_size=size] [manager_files=number] [manager_sleep=time] [manager_threshold=time]
[loader_files=number] [loader_sleep=time] [loader_threshold=time] [purger=on|off] [purger_files=number]
[purger_sleep=time] [purger_threshold=time];
Syntax:
Default: -
Context: http
Documentation
http {
proxy_cache_path /tmp/nginx/micro_cache/ keys_zone=large_cache:10m
max_size=300g inactive=14d;
...
}
Definition: Sets the path and other parameters of a cache. Cache data are stored in files. The file name in a cache is
a result of applying the MD5 function to the cache key.
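The mapping from cache key to on-disk file name can be reproduced outside of NGINX. A small Python sketch, mirroring the echo -n ... | md5sum trick shown later in this deck:

```python
import hashlib

def cache_file_name(cache_key: str) -> str:
    """NGINX names a cache file after the MD5 hex digest of its cache key."""
    return hashlib.md5(cache_key.encode()).hexdigest()

# Default key form: $scheme$proxy_host$request_uri
print(cache_file_name("http://origin/images/hawaii.jpg"))
# 51b740d1ab03f287d46da45202c84945
```

With levels=1:2, this file is stored under a directory named after the last character of the hash, then the next two characters (here .../5/94/...), matching the tree output shown later.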
20
proxy_cache_key
Documentation
server {
proxy_cache_key $scheme$proxy_host$request_uri$cookie_userid;
...
}
proxy_cache_key string;Syntax:
Default: proxy_cache_key $scheme$proxy_host$request_uri;
Context: http, server, location
Definition: Defines a key for caching. Used in the proxy_cache_path directive.
21
proxy_cache
Documentation
location ^~ /video {
...
proxy_cache large_cache;
}
location ^~ /images {
...
proxy_cache small_cache;
}
proxy_cache zone | off;Syntax:
Default: proxy_cache off;
Context: http, server, location
Definition: Defines a shared memory zone used for caching. The same zone can be used in several places.
22
proxy_cache_valid
Documentation
location ~* \.(jpg|png|gif|ico)$ {
...
proxy_cache_valid any 1d;
}
proxy_cache_valid [code ...] time;Syntax:
Default: -
Context: http, server, location
Definition: Sets caching time for different response codes.
23
http {
proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=cache:10m
max_size=100g inactive=7d use_temp_path=off;
...
server {
...
location / {
...
proxy_pass http://backend.com;
}
location ^~ /images {
...
proxy_cache cache;
proxy_cache_valid 200 301 302 12h;
proxy_pass http://images.origin.com;
}
}
}
Basic Caching
24
Caching with NGINX (Client → NGINX cache memory zone → Origin Server)
1) HTTP Request: GET /images/hawaii.jpg
2) NGINX checks if the hash exists in memory. If it does not, the request is passed to the origin server.
3) Origin server responds.
4) NGINX caches the response to disk, places the hash in memory, and the response is served to the client.
Cache Key: http://origin/images/hawaii.jpg
md5 hash: 51b740d1ab03f287d46da45202c84945
25
NGINX Processes
# ps aux | grep nginx
root 14559 0.0 0.1 53308 3360 ? Ss Apr12 0:00 nginx: master process /usr/sbin/nginx
-c /etc/nginx/nginx.conf
nginx 27880 0.0 0.1 53692 2724 ? S 00:06 0:00 nginx: worker process
nginx 27881 0.0 0.1 53692 2724 ? S 00:06 0:00 nginx: worker process
nginx 27882 0.0 0.1 53472 2876 ? S 00:06 0:00 nginx: cache manager process
nginx 27883 0.0 0.1 53472 2552 ? S 00:06 0:00 nginx: cache loader process
• Cache Manager - activated periodically to check the state of the cache. If the cache size exceeds the limit set by the max_size parameter of the proxy_cache_path directive, the cache manager removes the least recently accessed data, as well as entries considered inactive.
• Cache Loader - runs only once, right after NGINX starts. It loads metadata about
previously cached data into the shared memory zone.
26
Caching is Not Just for HTTP
HTTP
FastCGI
UWSGI
SCGI
Tip: NGINX can also be used to cache other backends using their unique cache directives.
(e.g. fastcgi_cache, uwsgi_cache and scgi_cache)
Alternatively, NGINX can also be used to retrieve content directly from a memcached server.
Memcached
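As a sketch of the FastCGI variant (paths, zone name, and backend address are placeholders, not from the deck), enabling a FastCGI cache for a PHP backend looks very similar to the proxy cache:

```nginx
http {
    # fastcgi_cache_path takes the same style of parameters as proxy_cache_path
    fastcgi_cache_path /var/nginx/fcgi_cache levels=1:2 keys_zone=fcgi_cache:10m
                       max_size=1g inactive=60m;

    server {
        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;
            include fastcgi_params;
            # Same caching model, FastCGI-specific directives
            fastcgi_cache fcgi_cache;
            fastcgi_cache_key $scheme$request_method$host$request_uri;
            fastcgi_cache_valid 200 301 302 10m;
        }
    }
}
```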
27
Micro-Caching
28
Types of Content
Static Content (easy to cache):
• Images
• CSS
• Simple HTML
User Content (cannot cache):
• Shopping Cart
• Unique Data
• Account Data
Dynamic Content (micro-cacheable!):
• Blog Posts
• Status
• API Data (Maybe?)
Documentation
29
http {
upstream backend {
keepalive 20;
server 127.0.0.1:8080;
}
proxy_cache_path /var/nginx/micro_cache levels=1:2 keys_zone=micro_cache:10m
max_size=100m inactive=600s;
...
server {
listen 80;
...
proxy_cache micro_cache;
proxy_cache_valid any 1s;
location / {
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Accept-Encoding "";
proxy_pass http://backend;
}
}
}
• Enable keepalives on the upstream
• Set proxy_cache_valid to any status with a 1 second value
• Set the required HTTP version and pass HTTP headers for keepalives
• Set a short inactive parameter
30
Final Touches
31
proxy_cache_background_update
Documentation
location / {
...
proxy_cache_background_update on;
proxy_cache_lock on;
proxy_cache_use_stale updating;
}
proxy_cache_background_update on | off;Syntax:
Default: proxy_cache_background_update off;
Context: http, server, location
Definition: Allows starting a background subrequest to update an expired cache item, while a stale cached response
is returned to the client. Note that it is necessary to allow the usage of a stale cached response when it is
being updated.
32
proxy_cache_lock
Documentation
proxy_cache_lock on | off;Syntax:
Default: proxy_cache_lock off;
Context: http, server, location
Definition: When enabled, only one request at a time will be allowed to populate a new cache element identified
according to the proxy_cache_key directive by passing a request to a proxied server.
Other requests of the same cache element will either wait for a response to appear in the cache or the
cache lock for this element to be released, up to the time set by the proxy_cache_lock_timeout directive.
Related: See the following for tuning…
• proxy_cache_lock_age,
• proxy_cache_lock_timeout
33
proxy_cache_use_stale
Documentation
location /contact-us {
...
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
}
proxy_cache_use_stale error | timeout | invalid_header | updating | http_500 | http_502 | http_503 |
http_504 | http_403 | http_404 | http_429 | off ...;
Syntax:
Default: proxy_cache_use_stale off;
Context: http, server, location
Definition: Determines in which cases a stale cached response can be used during communication with the proxied
server.
34
http {
upstream backend {
keepalive 20;
server 127.0.0.1:8080;
}
proxy_cache_path /var/nginx/micro_cache levels=1:2 keys_zone=micro_cache:10m
max_size=100m inactive=600s;
...
server {
listen 80;
...
proxy_cache micro_cache;
proxy_cache_valid any 1s;
proxy_cache_background_update on;
proxy_cache_lock on;
proxy_cache_use_stale updating;
location / {
...
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Accept-Encoding "";
proxy_pass http://backend;
}
}
}
Final optimization
35
Further Tuning and Optimization
36
proxy_cache_revalidate
Documentation
proxy_cache_revalidate on | off;Syntax:
Default: proxy_cache_revalidate off;
Context: http, server, location
Definition: Enables revalidation of expired cache items using conditional GET requests with the “If-Modified-Since”
and “If-None-Match” header fields.
Proxy Cache [NGINX] sends to Origin Server:
If-Modified-Since: Wed, 21 Oct 2015 07:28:00 GMT
If-None-Match: "686897696a7c876b7e"
Origin Server previously responded with:
Last-Modified: Wed, 21 Oct 2015 07:28:00 GMT
ETag: "686897696a7c876b7e"
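A minimal sketch of enabling revalidation (zone name assumed):

```nginx
location / {
    proxy_cache cache;
    # Expired items are revalidated with conditional GETs; a 304 from the
    # origin refreshes the entry without transferring the body again
    proxy_cache_revalidate on;
    proxy_pass http://backend;
}
```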
37
proxy_cache_min_uses
Documentation
location ~* /legacy {
...
proxy_cache_min_uses 5;
}
proxy_cache_min_uses number;Syntax:
Default: proxy_cache_min_uses 1;
Context: http, server, location
Definition: Sets the number of requests after which the response will be cached. This helps with the disk utilization and hit ratio of your cache.
38
proxy_cache_methods
Documentation
location ~* /data {
...
proxy_cache_methods GET HEAD POST;
}
proxy_cache_methods GET | HEAD | POST …;Syntax:
Default: proxy_cache_methods GET HEAD;
Context: http, server, location
Definition: NGINX only caches GET and HEAD request methods by default. Using this directive you can add
additional methods.
If you plan to add additional methods consider updating the cache key to include the $request_method
variable if the response will be different depending on the request method.
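For instance (a hypothetical sketch, building on the example above), including $request_method in the key keeps GET and POST responses for the same URI from colliding:

```nginx
location ~* /data {
    proxy_cache cache;
    proxy_cache_methods GET HEAD POST;
    # Without $request_method, a cached GET body could be served for a POST
    proxy_cache_key $scheme$proxy_host$request_method$request_uri;
    proxy_pass http://backend;
}
```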
39
location ^~ /wordpress {
...
proxy_cache cache;
proxy_ignore_headers Cache-Control;
}
Override Cache-Control Headers
Tip: By default NGINX honors all Cache-Control headers from the origin server, and in turn will not cache responses whose Cache-Control is set to private, no-cache, or no-store, or that carry Set-Cookie in the response header.
Using proxy_ignore_headers you can disable processing of certain response header fields from the
proxied server.
40
location / {
...
proxy_cache cache;
proxy_cache_bypass $cookie_nocache $arg_nocache $http_nocache;
}
Can I Punch Through the Cache?
Tip: If you want to disregard the cache and go straight to the origin for a response, you can use the
proxy_cache_bypass directive.
41
proxy_cache_purge
Documentation
proxy_cache_purge string ...;Syntax:
Default: -
Context: http, server, location
Definition: Defines conditions under which the request will be considered a cache purge request. If at least one value
of the string parameters is not empty and is not equal to “0” then the cache entry with a corresponding
cache key is removed.
The result of successful operation is indicated by returning the 204 (No Content) response.
Note: NGINX Plus only feature
42
proxy_cache_path /tmp/cache keys_zone=mycache:10m levels=1:2 inactive=60s;
map $request_method $purge_method {
PURGE 1;
default 0;
}
server {
listen 80;
server_name www.example.com;
location / {
proxy_pass http://localhost:8002;
proxy_cache mycache;
proxy_cache_purge $purge_method;
}
}
Example Cache Purge Configuration
Tip: Using NGINX Plus, you can issue unique request methods to invalidate the cache
dynamically set a variable
used later in the configuration
43
Architecting for High Availability
44
Two Approaches
• Sharded (High Capacity)
• Shared (Replicated)
45
Shared Cache Clustering
Tip: If your primary goal is to achieve high availability while minimizing load on the origin servers, this scenario
provides a highly available shared cache. The HA cluster should use an Active/Passive configuration.
46
Shared Cache Clustering and Failover
Tip: In the event of a failover there is no loss of cache, and the origin does not receive unneeded proxy requests.
47
proxy_cache_path /tmp/mycache keys_zone=mycache:10m;
server {
listen 80;
proxy_cache mycache;
proxy_cache_valid 200 15s;
location / {
proxy_pass http://secondary;
}
}
upstream secondary {
server 192.168.56.11; # secondary
server 192.168.56.12 backup; # origin
}
Primary Cache Server
48
proxy_cache_path /tmp/mycache keys_zone=mycache:10m;
server {
listen 80;
proxy_cache mycache;
proxy_cache_valid 200 15s;
location / {
proxy_pass http://origin;
}
}
upstream origin {
server 192.168.56.12; # origin
}
Secondary Cache Server
49
Sharding Your Cache
Tip: If your primary goal is to create a very high-capacity cache, shard (partition) your cache across multiple
servers. This maximizes the resources you have while minimizing the impact on your origin servers,
depending on the number of cache servers in your cache tier.
50
upstream cache_servers {
hash $scheme$proxy_host$request_uri consistent;
server prod.cache1.host;
server prod.cache2.host;
server prod.cache3.host;
server prod.cache4.host;
}
Hash Load Balancing
Tip: Using the hash load balancing algorithm, we can specify the proxy cache key. This allows each resource to
be cached on only one backend server.
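To illustrate the idea (a simplified Python sketch, not NGINX's actual implementation; `hash ... consistent` uses a ketama-style consistent-hash ring), hashing the cache key to pick a server means each resource consistently lands on the same cache node:

```python
import hashlib

SERVERS = ["prod.cache1.host", "prod.cache2.host",
           "prod.cache3.host", "prod.cache4.host"]

def pick_server(cache_key: str, servers=SERVERS) -> str:
    """Map a cache key to exactly one cache server.

    Simplified modulo hashing; a consistent-hash ring differs in that
    adding or removing a server only remaps a fraction of the keys.
    """
    digest = hashlib.md5(cache_key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same key always selects the same server, so each resource
# is cached on only one node of the cache tier:
a = pick_server("http://origin/images/hawaii.jpg")
b = pick_server("http://origin/images/hawaii.jpg")
print(a == b)  # True
```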
51
Combined Load Balancer and Cache
Tip: Alternatively, It is possible to consolidate the load balancer and cache tier into one with the use of a
various NGINX directives and parameters.
52
Multi-Tier with “Hot Cache”
Tip: If needed, a “Hot Cache Tier” can be enabled on the load balancer layer which will give you the same high
capacity cache and provide a high availability of specific cached resources.
53
Initial… Tips and Tricks!
54
log_format main 'rid="$request_id" pck="$scheme://$proxy_host$request_uri" '
'ucs="$upstream_cache_status" '
'site="$server_name" server="$host" dest_port="$server_port" '
'dest_ip="$server_addr" src="$remote_addr" src_ip="$realip_remote_addr" '
'user="$remote_user" time_local="$time_local" protocol="$server_protocol" '
'status="$status" bytes_out="$bytes_sent" '
'bytes_in="$upstream_bytes_received" http_referer="$http_referer" '
'http_user_agent="$http_user_agent" nginx_version="$nginx_version" '
'http_x_forwarded_for="$http_x_forwarded_for" '
'http_x_header="$http_x_header" uri_query="$query_string" uri_path="$uri" '
'http_method="$request_method" response_time="$upstream_response_time" '
'cookie="$http_cookie" request_time="$request_time" ';
Logging is Your Friend
Tip: The more relevant information in your logs, the better. When troubleshooting you can easily add the proxy
cache key to the log_format for debugging. For a list of all variables see the "Alphabetical index of
variables" on nginx.org.
55
server {
...
# add HTTP response headers
add_header CC-X-Request-ID $request_id;
add_header X-Cache-Status $upstream_cache_status;
}
Add Response Headers
Tip: Using the add_header directive you can add useful HTTP response headers allowing you to debug
your NGINX deployment rather easily.
56
# curl -I 127.0.0.1/images/hawaii.jpg
HTTP/1.1 200 OK
Server: nginx/1.11.10
Date: Wed, 19 Apr 2017 22:20:53 GMT
Content-Type: image/jpeg
Content-Length: 21542868
Connection: keep-alive
Last-Modified: Thu, 13 Apr 2017 20:55:07 GMT
ETag: "58efe5ab-148b7d4"
OS-X-Request-ID: 1e7ae2cf83732e8859bc3e38df912ed1
CC-X-Request-ID: d4a5f7a8d25544b1409c351a22f42960
X-Cache-Status: HIT
Accept-Ranges: bytes
Using cURL to Debug…
Tip: Use cURL or Chrome developer tools to grab the request ID or other various headers useful for debugging.
57
# grep -ri d4a5f7a8d25544b1409c351a22f42960 /var/log/nginx/adv_access.log
rid="d4a5f7a8d25544b1409c351a22f42960" pck="http://origin/images/hawaii.jpg" site="webopsx.com"
server="localhost" dest_port="80" dest_ip="127.0.0.1" ...
# echo -n "http://origin/images/hawaii.jpg" | md5sum
51b740d1ab03f287d46da45202c84945 -
# tree /tmp/nginx/micro_cache/5/94/
/tmp/nginx/micro_cache/5/94/
└── 51b740d1ab03f287d46da45202c84945
0 directories, 1 file
Troubleshooting the Proxy Cache
Tip: A quick and easy way to determine the hash of your cache key is to pipe echo -n through md5sum.
58
# head -n 14 /tmp/nginx/micro_cache/5/94/51b740d1ab03f287d46da45202c84945
??X?X??Xb?!bv?"58efe5ab-148b7d4"
KEY: http://origin/images/hawaii.jpg
HTTP/1.1 200 OK
Server: nginx/1.11.10
Date: Wed, 19 Apr 2017 23:51:38 GMT
Content-Type: image/jpeg
Content-Length: 21542868
Last-Modified: Thu, 13 Apr 2017 20:55:07 GMT
Connection: keep-alive
ETag: "58efe5ab-148b7d4"
OS-X-Request-ID: 1e7ae2cf83732e8859bc3e38df912ed1
Accept-Ranges: bytes
?wExifII>(i?Nl?0230??HH??
Cache Contents
59
Questions?
Thank You
60
https://www.nginx.com/blog/author/kjones/
@webopsx
Kevin Jones
Technical Solutions Architect
NGINX Inc.
https://www.slideshare.net/KevinJones62
61
Want more experience with NGINX caching?
• Online Courses – university.nginx.com/instructor-led-training/nginx-plus-
advanced-caching
• NGINX Plus 30-Day Trial – nginx.com/free-trial-request

Automating Complex Setups with Puppet
 

Más de NGINX, Inc.

【NGINXセミナー】 Ingressを使ってマイクロサービスの運用を楽にする方法
【NGINXセミナー】 Ingressを使ってマイクロサービスの運用を楽にする方法【NGINXセミナー】 Ingressを使ってマイクロサービスの運用を楽にする方法
【NGINXセミナー】 Ingressを使ってマイクロサービスの運用を楽にする方法NGINX, Inc.
 
【NGINXセミナー】 NGINXのWAFとは?その使い方と設定方法 解説セミナー
【NGINXセミナー】 NGINXのWAFとは?その使い方と設定方法 解説セミナー【NGINXセミナー】 NGINXのWAFとは?その使い方と設定方法 解説セミナー
【NGINXセミナー】 NGINXのWAFとは?その使い方と設定方法 解説セミナーNGINX, Inc.
 
【NGINXセミナー】API ゲートウェイとしてのNGINX Plus活用方法
【NGINXセミナー】API ゲートウェイとしてのNGINX Plus活用方法【NGINXセミナー】API ゲートウェイとしてのNGINX Plus活用方法
【NGINXセミナー】API ゲートウェイとしてのNGINX Plus活用方法NGINX, Inc.
 
Get Hands-On with NGINX and QUIC+HTTP/3
Get Hands-On with NGINX and QUIC+HTTP/3Get Hands-On with NGINX and QUIC+HTTP/3
Get Hands-On with NGINX and QUIC+HTTP/3NGINX, Inc.
 
Managing Kubernetes Cost and Performance with NGINX & Kubecost
Managing Kubernetes Cost and Performance with NGINX & KubecostManaging Kubernetes Cost and Performance with NGINX & Kubecost
Managing Kubernetes Cost and Performance with NGINX & KubecostNGINX, Inc.
 
Manage Microservices Chaos and Complexity with Observability
Manage Microservices Chaos and Complexity with ObservabilityManage Microservices Chaos and Complexity with Observability
Manage Microservices Chaos and Complexity with ObservabilityNGINX, Inc.
 
Accelerate Microservices Deployments with Automation
Accelerate Microservices Deployments with AutomationAccelerate Microservices Deployments with Automation
Accelerate Microservices Deployments with AutomationNGINX, Inc.
 
Unit 2: Microservices Secrets Management 101
Unit 2: Microservices Secrets Management 101Unit 2: Microservices Secrets Management 101
Unit 2: Microservices Secrets Management 101NGINX, Inc.
 
Unit 1: Apply the Twelve-Factor App to Microservices Architectures
Unit 1: Apply the Twelve-Factor App to Microservices ArchitecturesUnit 1: Apply the Twelve-Factor App to Microservices Architectures
Unit 1: Apply the Twelve-Factor App to Microservices ArchitecturesNGINX, Inc.
 
NGINX基本セミナー(セキュリティ編)~NGINXでセキュアなプラットフォームを実現する方法!
NGINX基本セミナー(セキュリティ編)~NGINXでセキュアなプラットフォームを実現する方法!NGINX基本セミナー(セキュリティ編)~NGINXでセキュアなプラットフォームを実現する方法!
NGINX基本セミナー(セキュリティ編)~NGINXでセキュアなプラットフォームを実現する方法!NGINX, Inc.
 
Easily View, Manage, and Scale Your App Security with F5 NGINX
Easily View, Manage, and Scale Your App Security with F5 NGINXEasily View, Manage, and Scale Your App Security with F5 NGINX
Easily View, Manage, and Scale Your App Security with F5 NGINXNGINX, Inc.
 
NGINXセミナー(基本編)~いまさら聞けないNGINXコンフィグなど基本がわかる!
NGINXセミナー(基本編)~いまさら聞けないNGINXコンフィグなど基本がわかる!NGINXセミナー(基本編)~いまさら聞けないNGINXコンフィグなど基本がわかる!
NGINXセミナー(基本編)~いまさら聞けないNGINXコンフィグなど基本がわかる!NGINX, Inc.
 
Keep Ahead of Evolving Cyberattacks with OPSWAT and F5 NGINX
Keep Ahead of Evolving Cyberattacks with OPSWAT and F5 NGINXKeep Ahead of Evolving Cyberattacks with OPSWAT and F5 NGINX
Keep Ahead of Evolving Cyberattacks with OPSWAT and F5 NGINXNGINX, Inc.
 
Install and Configure NGINX Unit, the Universal Application, Web, and Proxy S...
Install and Configure NGINX Unit, the Universal Application, Web, and Proxy S...Install and Configure NGINX Unit, the Universal Application, Web, and Proxy S...
Install and Configure NGINX Unit, the Universal Application, Web, and Proxy S...NGINX, Inc.
 
Protecting Apps from Hacks in Kubernetes with NGINX
Protecting Apps from Hacks in Kubernetes with NGINXProtecting Apps from Hacks in Kubernetes with NGINX
Protecting Apps from Hacks in Kubernetes with NGINXNGINX, Inc.
 
NGINX Kubernetes API
NGINX Kubernetes APINGINX Kubernetes API
NGINX Kubernetes APINGINX, Inc.
 
Successfully Implement Your API Strategy with NGINX
Successfully Implement Your API Strategy with NGINXSuccessfully Implement Your API Strategy with NGINX
Successfully Implement Your API Strategy with NGINXNGINX, Inc.
 
Installing and Configuring NGINX Open Source
Installing and Configuring NGINX Open SourceInstalling and Configuring NGINX Open Source
Installing and Configuring NGINX Open SourceNGINX, Inc.
 
Shift Left for More Secure Apps with F5 NGINX
Shift Left for More Secure Apps with F5 NGINXShift Left for More Secure Apps with F5 NGINX
Shift Left for More Secure Apps with F5 NGINXNGINX, Inc.
 
How to Avoid the Top 5 NGINX Configuration Mistakes.pptx
How to Avoid the Top 5 NGINX Configuration Mistakes.pptxHow to Avoid the Top 5 NGINX Configuration Mistakes.pptx
How to Avoid the Top 5 NGINX Configuration Mistakes.pptxNGINX, Inc.
 

Más de NGINX, Inc. (20)

【NGINXセミナー】 Ingressを使ってマイクロサービスの運用を楽にする方法
【NGINXセミナー】 Ingressを使ってマイクロサービスの運用を楽にする方法【NGINXセミナー】 Ingressを使ってマイクロサービスの運用を楽にする方法
【NGINXセミナー】 Ingressを使ってマイクロサービスの運用を楽にする方法
 
【NGINXセミナー】 NGINXのWAFとは?その使い方と設定方法 解説セミナー
【NGINXセミナー】 NGINXのWAFとは?その使い方と設定方法 解説セミナー【NGINXセミナー】 NGINXのWAFとは?その使い方と設定方法 解説セミナー
【NGINXセミナー】 NGINXのWAFとは?その使い方と設定方法 解説セミナー
 
【NGINXセミナー】API ゲートウェイとしてのNGINX Plus活用方法
【NGINXセミナー】API ゲートウェイとしてのNGINX Plus活用方法【NGINXセミナー】API ゲートウェイとしてのNGINX Plus活用方法
【NGINXセミナー】API ゲートウェイとしてのNGINX Plus活用方法
 
Get Hands-On with NGINX and QUIC+HTTP/3
Get Hands-On with NGINX and QUIC+HTTP/3Get Hands-On with NGINX and QUIC+HTTP/3
Get Hands-On with NGINX and QUIC+HTTP/3
 
Managing Kubernetes Cost and Performance with NGINX & Kubecost
Managing Kubernetes Cost and Performance with NGINX & KubecostManaging Kubernetes Cost and Performance with NGINX & Kubecost
Managing Kubernetes Cost and Performance with NGINX & Kubecost
 
Manage Microservices Chaos and Complexity with Observability
Manage Microservices Chaos and Complexity with ObservabilityManage Microservices Chaos and Complexity with Observability
Manage Microservices Chaos and Complexity with Observability
 
Accelerate Microservices Deployments with Automation
Accelerate Microservices Deployments with AutomationAccelerate Microservices Deployments with Automation
Accelerate Microservices Deployments with Automation
 
Unit 2: Microservices Secrets Management 101
Unit 2: Microservices Secrets Management 101Unit 2: Microservices Secrets Management 101
Unit 2: Microservices Secrets Management 101
 
Unit 1: Apply the Twelve-Factor App to Microservices Architectures
Unit 1: Apply the Twelve-Factor App to Microservices ArchitecturesUnit 1: Apply the Twelve-Factor App to Microservices Architectures
Unit 1: Apply the Twelve-Factor App to Microservices Architectures
 
NGINX基本セミナー(セキュリティ編)~NGINXでセキュアなプラットフォームを実現する方法!
NGINX基本セミナー(セキュリティ編)~NGINXでセキュアなプラットフォームを実現する方法!NGINX基本セミナー(セキュリティ編)~NGINXでセキュアなプラットフォームを実現する方法!
NGINX基本セミナー(セキュリティ編)~NGINXでセキュアなプラットフォームを実現する方法!
 
Easily View, Manage, and Scale Your App Security with F5 NGINX
Easily View, Manage, and Scale Your App Security with F5 NGINXEasily View, Manage, and Scale Your App Security with F5 NGINX
Easily View, Manage, and Scale Your App Security with F5 NGINX
 
NGINXセミナー(基本編)~いまさら聞けないNGINXコンフィグなど基本がわかる!
NGINXセミナー(基本編)~いまさら聞けないNGINXコンフィグなど基本がわかる!NGINXセミナー(基本編)~いまさら聞けないNGINXコンフィグなど基本がわかる!
NGINXセミナー(基本編)~いまさら聞けないNGINXコンフィグなど基本がわかる!
 
Keep Ahead of Evolving Cyberattacks with OPSWAT and F5 NGINX
Keep Ahead of Evolving Cyberattacks with OPSWAT and F5 NGINXKeep Ahead of Evolving Cyberattacks with OPSWAT and F5 NGINX
Keep Ahead of Evolving Cyberattacks with OPSWAT and F5 NGINX
 
Install and Configure NGINX Unit, the Universal Application, Web, and Proxy S...
Install and Configure NGINX Unit, the Universal Application, Web, and Proxy S...Install and Configure NGINX Unit, the Universal Application, Web, and Proxy S...
Install and Configure NGINX Unit, the Universal Application, Web, and Proxy S...
 
Protecting Apps from Hacks in Kubernetes with NGINX
Protecting Apps from Hacks in Kubernetes with NGINXProtecting Apps from Hacks in Kubernetes with NGINX
Protecting Apps from Hacks in Kubernetes with NGINX
 
NGINX Kubernetes API
NGINX Kubernetes APINGINX Kubernetes API
NGINX Kubernetes API
 
Successfully Implement Your API Strategy with NGINX
Successfully Implement Your API Strategy with NGINXSuccessfully Implement Your API Strategy with NGINX
Successfully Implement Your API Strategy with NGINX
 
Installing and Configuring NGINX Open Source
Installing and Configuring NGINX Open SourceInstalling and Configuring NGINX Open Source
Installing and Configuring NGINX Open Source
 
Shift Left for More Secure Apps with F5 NGINX
Shift Left for More Secure Apps with F5 NGINXShift Left for More Secure Apps with F5 NGINX
Shift Left for More Secure Apps with F5 NGINX
 
How to Avoid the Top 5 NGINX Configuration Mistakes.pptx
How to Avoid the Top 5 NGINX Configuration Mistakes.pptxHow to Avoid the Top 5 NGINX Configuration Mistakes.pptx
How to Avoid the Top 5 NGINX Configuration Mistakes.pptx
 

Último

Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfkalichargn70th171
 
HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comFatema Valibhai
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providermohitmore19
 
How To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsHow To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsAndolasoft Inc
 
10 Trends Likely to Shape Enterprise Technology in 2024
10 Trends Likely to Shape Enterprise Technology in 202410 Trends Likely to Shape Enterprise Technology in 2024
10 Trends Likely to Shape Enterprise Technology in 2024Mind IT Systems
 
8257 interfacing 2 in microprocessor for btech students
8257 interfacing 2 in microprocessor for btech students8257 interfacing 2 in microprocessor for btech students
8257 interfacing 2 in microprocessor for btech studentsHimanshiGarg82
 
Diamond Application Development Crafting Solutions with Precision
Diamond Application Development Crafting Solutions with PrecisionDiamond Application Development Crafting Solutions with Precision
Diamond Application Development Crafting Solutions with PrecisionSolGuruz
 
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Steffen Staab
 
introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdf
introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdfintroduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdf
introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdfVishalKumarJha10
 
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdfThe Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdfkalichargn70th171
 
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...ICS
 
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️Delhi Call girls
 
A Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docxA Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docxComplianceQuest1
 
AI & Machine Learning Presentation Template
AI & Machine Learning Presentation TemplateAI & Machine Learning Presentation Template
AI & Machine Learning Presentation TemplatePresentation.STUDIO
 
Define the academic and professional writing..pdf
Define the academic and professional writing..pdfDefine the academic and professional writing..pdf
Define the academic and professional writing..pdfPearlKirahMaeRagusta1
 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...harshavardhanraghave
 
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...kalichargn70th171
 
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...panagenda
 

Último (20)

CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
 
HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.com
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service provider
 
How To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsHow To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.js
 
10 Trends Likely to Shape Enterprise Technology in 2024
10 Trends Likely to Shape Enterprise Technology in 202410 Trends Likely to Shape Enterprise Technology in 2024
10 Trends Likely to Shape Enterprise Technology in 2024
 
8257 interfacing 2 in microprocessor for btech students
8257 interfacing 2 in microprocessor for btech students8257 interfacing 2 in microprocessor for btech students
8257 interfacing 2 in microprocessor for btech students
 
Diamond Application Development Crafting Solutions with Precision
Diamond Application Development Crafting Solutions with PrecisionDiamond Application Development Crafting Solutions with Precision
Diamond Application Development Crafting Solutions with Precision
 
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
 
introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdf
introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdfintroduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdf
introduction-to-automotive Andoid os-csimmonds-ndctechtown-2021.pdf
 
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdfThe Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
 
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
 
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
 
A Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docxA Secure and Reliable Document Management System is Essential.docx
A Secure and Reliable Document Management System is Essential.docx
 
AI & Machine Learning Presentation Template
AI & Machine Learning Presentation TemplateAI & Machine Learning Presentation Template
AI & Machine Learning Presentation Template
 
Define the academic and professional writing..pdf
Define the academic and professional writing..pdfDefine the academic and professional writing..pdf
Define the academic and professional writing..pdf
 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
 
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
 
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
W01_panagenda_Navigating-the-Future-with-The-Hitchhikers-Guide-to-Notes-and-D...
 
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS LiveVip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
 

High Availability Content Caching with NGINX

  • 1. NGINX, Inc. 2017 High Availability Content Caching with NGINX Kevin Jones Technical Solutions Architect
  • 2. Quick intro to… • NGINX Caching with NGINX • How caching functionality works • How to enable basic caching Advanced caching with NGINX • When and how to enable micro-caching • How to architect for high availability • Various configuration tips and tricks 2 Agenda
  • 3.
  • 4. MORE INFORMATION AT NGINX.COM Solves Complexity: Load Balancer, Reverse Proxy, Web Server, Content Cache, Streaming Media
  • 5. total sites and counting… running on NGINX
  • 6. of the Top 10,000 most visited websites
  • 7. of all instances on Amazon Web Services
  • 8.
  • 10. 10 user nginx; worker_processes auto; error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; upstream api-backends { server 10.0.1.11:8080; server 10.0.1.12:8080; } server { listen 10.0.1.10:80; server_name example.com; location / { root /usr/share/nginx/html; index index.html index.htm; } location ^~ /api { proxy_pass http://api-backends; } } include /path/to/more/virtual_servers/*.conf; } nginx.org/en/docs/dirindex.html http context server context events context main context stream context (not shown) upstream context location context
  • 11. 11 user nginx; worker_processes auto; error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; upstream api-backends { server 10.0.1.11:8080; server 10.0.1.12:8080; } server { listen 10.0.1.10:80; server_name example.com; location / { root /usr/share/nginx/html; index index.html index.htm; } location ^~ /api { proxy_pass http://api-backends; } } include /path/to/more/virtual_servers/*.conf; } server directive location directive upstream directive events directive main directive nginx.org/en/docs/dirindex.html
  • 12. 12 user nginx; worker_processes auto; error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; upstream api-backends { server 10.0.1.11:8080; server 10.0.1.12:8080; } server { listen 10.0.1.10:80; server_name example.com; location / { root /usr/share/nginx/html; index index.html index.htm; } location ^~ /api { proxy_pass http://api-backends; } } include /path/to/more/virtual_servers/*.conf; } nginx.org/en/docs/dirindex.html parameter parameter parameter parameter
  • 13. 13 user nginx; worker_processes auto; error_log /var/log/nginx/error.log notice; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; upstream api-backends { server 10.0.1.11:8080; server 10.0.1.12:8080; } server { listen 10.0.1.10:80; server_name example.com; location / { root /usr/share/nginx/html; index index.html index.htm; } location ^~ /api { proxy_pass http://api-backends; } } include /path/to/more/virtual_servers/*.conf; } nginx.org/en/docs/varindex.html customize access log
  • 14. 14 http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; map $http_user_agent $dynamic { "~*Mobile" mobile.example.com; default desktop.example.com; } server { listen 10.0.1.10:80; server_name example.com; location / { root /usr/share/nginx/html; index index.html index.htm; } location ^~ /api { proxy_pass http://$dynamic; } } include /path/to/more/virtual_servers/*.conf; } nginx.org/en/docs/varindex.html used later dynamic variables
  • 15. 15 The Basics of Content Caching
  • 16. 16 Client initiates request (e.g. GET /file) Proxy Cache determines if response is already cached if not proxy cache will fetch from the origin server Origin Server serves response along with all cache control headers (e.g. Cache-Control, Etag, etc..) Proxy Cache caches the response and serves it to the client
  • 17. 17 Cache Headers • Cache-Control - used to specify directives for caching mechanisms in both requests and responses. (e.g. Cache-Control: max-age=600 or Cache-Control: no-cache) • Expires - contains the date/time after which the response is considered stale. If there is a Cache-Control header with the "max-age" or "s-maxage" directive in the response, the Expires header is ignored. (e.g. Expires: Wed, 21 Oct 2015 07:28:00 GMT) • Last-Modified - contains the date and time at which the origin server believes the resource was last modified. HTTP dates are always expressed in GMT, never in local time. Less accurate than the ETag header. (e.g. Last-Modified: Wed, 21 Oct 2015 07:28:00 GMT) • ETag - is an identifier (or fingerprint) for a specific version of a resource. (e.g. ETag: "58efdcd0-268")
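By default NGINX's proxy cache honors the origin's Cache-Control, Expires and X-Accel-Expires headers when deciding how long to keep a response. A minimal sketch of overriding that behavior with proxy_ignore_headers (the location path, zone name and validity time here are assumptions, not part of the deck):

```nginx
location /reports {
    proxy_cache cache;
    # Ignore the origin's caching headers and rely on
    # proxy_cache_valid instead:
    proxy_ignore_headers Cache-Control Expires;
    proxy_cache_valid 200 10m;
    proxy_pass http://backend;
}
```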
  • 18. 18 Content Caching with NGINX is Simple
  • 19. 19 proxy_cache_path proxy_cache_path path [levels=levels] [use_temp_path=on|off] keys_zone=name:size [inactive=time] [max_size=size] [manager_files=number] [manager_sleep=time] [manager_threshold=time] [loader_files=number] [loader_sleep=time] [loader_threshold=time] [purger=on|off] [purger_files=number] [purger_sleep=time] [purger_threshold=time]; Syntax: Default: - Context: http Documentation http { proxy_cache_path /tmp/nginx/micro_cache/ keys_zone=large_cache:10m max_size=300g inactive=14d; ... } Definition: Sets the path and other parameters of a cache. Cache data are stored in files. The file name in a cache is a result of applying the MD5 function to the cache key.
  • 20. 20 proxy_cache_key Documentation server { proxy_cache_key $scheme$proxy_host$request_uri$cookie_userid; ... } proxy_cache_key string;Syntax: Default: proxy_cache_key $scheme$proxy_host$request_uri; Context: http, server, location Definition: Defines a key for caching. Used in the proxy_cache_path directive.
  • 21. 21 proxy_cache Documentation location ^~ /video { ... proxy_cache large_cache; } location ^~ /images { ... proxy_cache small_cache; } proxy_cache zone | off;Syntax: Default: proxy_cache off; Context: http, server, location Definition: Defines a shared memory zone used for caching. The same zone can be used in several places.
  • 22. 22 proxy_cache_valid Documentation location ~* \.(jpg|png|gif|ico)$ { ... proxy_cache_valid any 1d; } Syntax: proxy_cache_valid [code ...] time; Default: - Context: http, server, location Definition: Sets caching time for different response codes.
  • 23. 23 http { proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=cache:10m max_size=100g inactive=7d use_temp_path=off; ... server { ... location / { ... proxy_pass http://backend.com; } location ^~ /images { ... proxy_cache cache; proxy_cache_valid 200 301 302 12h; proxy_pass http://images.origin.com; } } } Basic Caching
  • 24. 24 Client Caching with NGINX Origin Server Cache Memory Zone 1) HTTP Request: GET /images/hawaii.jpg 2) NGINX checks if hash exists in memory. If it does not the request is passed to the origin server. 3) Origin server responds 4) NGINX caches the response to disk, places the hash in memory and response is served to client Cache Key: http://origin/images/hawaii.jpg md5 hash: 51b740d1ab03f287d46da45202c84945
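The request flow above can be observed in practice through the $upstream_cache_status variable, which records whether NGINX served the response from cache. An illustrative debugging snippet (location and zone names are placeholders):

```nginx
location /images {
    proxy_cache cache;
    # Expose the cache decision (MISS, HIT, EXPIRED, STALE,
    # UPDATING, REVALIDATED or BYPASS) in a response header:
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://images.origin.com;
}
```

Inspecting this header with curl is a quick way to verify that the cache key and validity settings behave as intended.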
  • 25. 25 NGINX Processes # ps aux | grep nginx root 14559 0.0 0.1 53308 3360 ? Ss Apr12 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf nginx 27880 0.0 0.1 53692 2724 ? S 00:06 0:00 nginx: worker process nginx 27881 0.0 0.1 53692 2724 ? S 00:06 0:00 nginx: worker process nginx 27882 0.0 0.1 53472 2876 ? S 00:06 0:00 nginx: cache manager process nginx 27883 0.0 0.1 53472 2552 ? S 00:06 0:00 nginx: cache loader process • Cache Manager - activated periodically to check the state of the cache. If the cache size exceeds the limit set by the max_size parameter to the proxy_cache_path directive, the cache manager removes the data that was accessed least recently, as well as the cache considered inactive. • Cache Loader - runs only once, right after NGINX starts. It loads metadata about previously cached data into the shared memory zone.
  • 26. 26 Caching is Not Just for HTTP HTTP FastCGI UWSGI SCGI Tip: NGINX can also be used to cache other backends using their unique cache directives. (e.g. fastcgi_cache, uwsgi_cache and scgi_cache) Alternatively, NGINX can also be used to retrieve content directly from a memcached server. Memcached
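As a sketch of the FastCGI equivalents of the proxy_cache directives shown earlier, caching PHP-FPM responses might look like the following (the socket path, zone name and validity times are assumptions; note fastcgi_cache_key has no default and must be set explicitly):

```nginx
http {
    fastcgi_cache_path /var/nginx/fcgi_cache levels=1:2
                       keys_zone=fcgi_cache:10m max_size=1g inactive=60m;

    server {
        location ~ \.php$ {
            fastcgi_cache fcgi_cache;
            fastcgi_cache_key $scheme$request_method$host$request_uri;
            fastcgi_cache_valid 200 10m;
            fastcgi_pass unix:/run/php-fpm.sock;
            include fastcgi_params;
        }
    }
}
```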
  • 28. 28 Types of Content: Static Content (Images, CSS, Simple HTML) is easy to cache. User Content (Shopping Cart, Unique Data, Account Data) cannot be cached. Dynamic Content (Blog Posts, Status, API Data (Maybe?)) is micro-cacheable! Documentation
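One hedged way to act on this split is to cache by default but skip the cache for per-user content, keying off a session cookie. The cookie name "sessionid" is an assumption for illustration:

```nginx
location / {
    proxy_cache cache;
    proxy_cache_valid 200 1m;
    # Serve logged-in users straight from the origin...
    proxy_cache_bypass $cookie_sessionid;
    # ...and never store their per-user responses:
    proxy_no_cache $cookie_sessionid;
    proxy_pass http://backend;
}
```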
  • 29. 29 http { upstream backend { keepalive 20; server 127.0.0.1:8080; } proxy_cache_path /var/nginx/micro_cache levels=1:2 keys_zone=micro_cache:10m max_size=100m inactive=600s; ... server { listen 80; ... proxy_cache micro_cache; proxy_cache_valid any 1s; location / { proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header Accept-Encoding ""; proxy_pass http://backend; } } } Enable keepalives on upstream Set proxy_cache_valid to any status with a 1 second value Set required HTTP version and pass HTTP headers for keepalives Set short inactive parameter
  • 31. 31 proxy_cache_background_update Documentation location / { ... proxy_cache_background_update on; proxy_cache_lock on; proxy_cache_use_stale updating; } proxy_cache_background_update on | off;Syntax: Default: proxy_cache_background_update off; Context: http, server, location Definition: Allows starting a background subrequest to update an expired cache item, while a stale cached response is returned to the client. Note that it is necessary to allow the usage of a stale cached response when it is being updated.
  • 32. 32 proxy_cache_lock Documentation proxy_cache_lock on | off;Syntax: Default: proxy_cache_lock off; Context: http, server, location Definition: When enabled, only one request at a time will be allowed to populate a new cache element identified according to the proxy_cache_key directive by passing a request to a proxied server. Other requests of the same cache element will either wait for a response to appear in the cache or the cache lock for this element to be released, up to the time set by the proxy_cache_lock_timeout directive. Related: See the following for tuning… • proxy_cache_lock_age, • proxy_cache_lock_timeout
  • 33. 33 proxy_cache_use_stale Documentation location /contact-us { ... proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; } proxy_cache_use_stale error | timeout | invalid_header | updating | http_500 | http_502 | http_503 | http_504 | http_403 | http_404 | http_429 | off ...; Syntax: Default: proxy_cache_use_stale off; Context: http, server, location Definition: Determines in which cases a stale cached response can be used during communication with the proxied server.
  • 34. 34 http { upstream backend { keepalive 20; server 127.0.0.1:8080; } proxy_cache_path /var/nginx/micro_cache levels=1:2 keys_zone=micro_cache:10m max_size=100m inactive=600s; ... server { listen 80; ... proxy_cache micro_cache; proxy_cache_valid any 1s; proxy_cache_background_update on; proxy_cache_lock on; proxy_cache_use_stale updating; location / { ... proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header Accept-Encoding ""; proxy_pass http://backend; } } } Final optimization
  • 35. 35 Further Tuning and Optimization
  • 36. 36 proxy_cache_revalidate Documentation proxy_cache_revalidate on | off;Syntax: Default: proxy_cache_revalidate off; Context: http, server, location Definition: Enables revalidation of expired cache items using conditional GET requests with the “If-Modified-Since” and “If-None-Match” header fields. Last-Modified: Wed, 21 Oct 2015 07:28:00 GMTIf-Modified-Since: Wed, 21 Oct 2015 07:28:00 GMT ETag: “686897696a7c876b7e”If-None-Match: “686897696a7c876b7e" Proxy Cache [NGINX] Origin Server
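In use, the directive might look like the sketch below; the /static location and "backend" upstream are illustrative, and the origin must send Last-Modified and/or ETag headers for revalidation to apply:

```nginx
location /static {
    proxy_cache cache;

    # on expiry, send a conditional GET (If-Modified-Since /
    # If-None-Match); a 304 from the origin refreshes the cached
    # item without re-downloading the response body
    proxy_cache_revalidate on;

    proxy_pass http://backend;
}
```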
  • 37. 37 proxy_cache_min_uses Documentation location ~* /legacy { ... proxy_cache_min_uses 5; } proxy_cache_min_uses number;Syntax: Default: proxy_cache_min_uses 1; Context: http, server, location Definition: Sets the number of requests after which the response will be cached. This will help with disk utilization and hit ratio of your cache.
  • 38. 38 proxy_cache_methods Documentation location ~* /data { ... proxy_cache_methods GET HEAD POST; } proxy_cache_methods GET | HEAD | POST …;Syntax: Default: proxy_cache_methods GET HEAD; Context: http, server, location Definition: NGINX only caches GET and HEAD request methods by default. Using this directive you can add additional methods. If you plan to add additional methods consider updating the cache key to include the $request_method variable if the response will be different depending on the request method.
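A sketch of the tip about $request_method: if POST responses differ from GET responses for the same URI, extend the cache key so they are stored separately (the location and "backend" upstream names are illustrative):

```nginx
location ~* /data {
    proxy_cache cache;

    # cache POST in addition to the default GET and HEAD
    proxy_cache_methods GET HEAD POST;

    # the default key plus the method, so GET /data and POST /data
    # become distinct cache elements
    proxy_cache_key $scheme$proxy_host$request_uri$request_method;

    proxy_pass http://backend;
}
```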
  • 39. 39 location ^~ /wordpress { ... proxy_cache cache; proxy_ignore_headers Cache-Control; } Override Cache-Control Headers Tip: By default NGINX will honor all Cache-Control headers from the origin server, in turn not caching responses with Cache-Control set to Private, No-Cache, No-Store or with Set-Cookie in the response header. Using proxy_ignore_headers you can disable processing of certain response header fields from the proxied server.
  • 40. 40 location / { ... proxy_cache cache; proxy_cache_bypass $cookie_nocache $arg_nocache $http_nocache; } Can I Punch Through the Cache? Tip: If you want to disregard the cache and go straight to the origin for a response, you can use the proxy_cache_bypass directive.
  • 41. 41 proxy_cache_purge Documentation proxy_cache_purge string ...;Syntax: Default: - Context: http, server, location Definition: Defines conditions under which the request will be considered a cache purge request. If at least one value of the string parameters is not empty and is not equal to “0” then the cache entry with a corresponding cache key is removed. The result of successful operation is indicated by returning the 204 (No Content) response. Note: NGINX Plus only feature
  • 42. 42 proxy_cache_path /tmp/cache keys_zone=mycache:10m levels=1:2 inactive=60s; map $request_method $purge_method { PURGE 1; default 0; } server { listen 80; server_name www.example.com; location / { proxy_pass http://localhost:8002; proxy_cache mycache; proxy_cache_purge $purge_method; } } Example Cache Purge Configuration Tip: Using NGINX Plus, you can issue unique request methods to invalidate the cache. The map block dynamically sets a variable that is used later in the configuration.
  • 43. 43 Architecting for High Availability
  • 44. 44 Two Approaches • Sharded (High Capacity) • Shared (Replicated)
  • 45. 45 Shared Cache Clustering Tip: If your primary goal is to achieve high availability while minimizing load on the origin servers, this scenario provides a highly available shared cache. The HA cluster should use an active/passive configuration.
  • 46. 46 and Failover Tip: In the event of a failover there is no loss in cache and the origin does not suffer unneeded proxy requests.
  • 47. 47 proxy_cache_path /tmp/mycache keys_zone=mycache:10m; server { listen 80; proxy_cache mycache; proxy_cache_valid 200 15s; location / { proxy_pass http://secondary; } } upstream secondary { server 192.168.56.11; # secondary server 192.168.56.12 backup; # origin } Primary Cache Server
  • 48. 48 proxy_cache_path /tmp/mycache keys_zone=mycache:10m; server { listen 80; proxy_cache mycache; proxy_cache_valid 200 15s; location / { proxy_pass http://origin; } } upstream origin { server 192.168.56.12; # origin } Secondary Cache Server
  • 49. 49 Sharding Your Cache Tip: If your primary goal is to create a very high-capacity cache, shard (partition) your cache across multiple servers. This in turn maximizes the resources you have while minimizing impact on your origin servers, depending on the number of cache servers in your cache tier.
  • 50. 50 upstream cache_servers { hash $scheme$proxy_host$request_uri consistent; server prod.cache1.host; server prod.cache2.host; server prod.cache3.host; server prod.cache4.host; } Hash Load Balancing Tip: Using the hash load balancing algorithm, we can specify the proxy cache key. This allows each resource to be cached on only one backend server.
  • 51. 51 Combined Load Balancer and Cache Tip: Alternatively, it is possible to consolidate the load balancer and cache tier into one with the use of various NGINX directives and parameters.
  • 52. 52 Multi-Tier with "Hot Cache" Tip: If needed, a "Hot Cache Tier" can be enabled on the load balancer layer, which will give you the same high-capacity cache and provide high availability for specific cached resources.
  • 54. 54 log_format main 'rid="$request_id" pck="$scheme://$proxy_host$request_uri" ' 'ucs="$upstream_cache_status" ' 'site="$server_name" server="$host" dest_port="$server_port" ' 'dest_ip="$server_addr" src="$remote_addr" src_ip="$realip_remote_addr" ' 'user="$remote_user" time_local="$time_local" protocol="$server_protocol" ' 'status="$status" bytes_out="$bytes_sent" ' 'bytes_in="$upstream_bytes_received" http_referer="$http_referer" ' 'http_user_agent="$http_user_agent" nginx_version="$nginx_version" ' 'http_x_forwarded_for="$http_x_forwarded_for" ' 'http_x_header="$http_x_header" uri_query="$query_string" uri_path="$uri" ' 'http_method="$request_method" response_time="$upstream_response_time" ' 'cookie="$http_cookie" request_time="$request_time" '; Logging is Your Friend Tip: The more relevant information in your log the better. When troubleshooting you can easily add the proxy cache KEY to the log_format for debugging. For a list of all variables see the "Alphabetical index of variables" on nginx.org.
  • 55. 55 server { ... # add HTTP response headers add_header CC-X-Request-ID $request_id; add_header X-Cache-Status $upstream_cache_status; } Add Response Headers Tip: Using the add_header directive you can add useful HTTP response headers allowing you to debug your NGINX deployment rather easily.
  • 56. 56 # curl -I 127.0.0.1/images/hawaii.jpg HTTP/1.1 200 OK Server: nginx/1.11.10 Date: Wed, 19 Apr 2017 22:20:53 GMT Content-Type: image/jpeg Content-Length: 21542868 Connection: keep-alive Last-Modified: Thu, 13 Apr 2017 20:55:07 GMT ETag: "58efe5ab-148b7d4" OS-X-Request-ID: 1e7ae2cf83732e8859bc3e38df912ed1 CC-X-Request-ID: d4a5f7a8d25544b1409c351a22f42960 X-Cache-Status: HIT Accept-Ranges: bytes Using cURL to Debug… Tip: Use cURL or Chrome developer tools to grab the request ID or other various headers useful for debugging.
  • 57. 57 # grep -ri d4a5f7a8d25544b1409c351a22f42960 /var/log/nginx/adv_access.log rid="d4a5f7a8d25544b1409c351a22f42960" pck="http://origin/images/hawaii.jpg" site="webopsx.com" server="localhost" dest_port="80" dest_ip="127.0.0.1" ... # echo -n "http://origin/images/hawaii.jpg" | md5sum 51b740d1ab03f287d46da45202c84945 - # tree /tmp/nginx/micro_cache/5/94/ /tmp/nginx/micro_cache/5/94/ └── 51b740d1ab03f287d46da45202c84945 0 directories, 1 file Troubleshooting the Proxy Cache Tip: A quick and easy way to determine the hash of your cache key can be accomplished using echo, pipe and md5sum.
  • 58. 58 # head -n 14 /tmp/nginx/micro_cache/5/94/51b740d1ab03f287d46da45202c84945 ??X?X??Xb?!bv?"58efe5ab-148b7d4" KEY: http://origin/images/hawaii.jpg HTTP/1.1 200 OK Server: nginx/1.11.10 Date: Wed, 19 Apr 2017 23:51:38 GMT Content-Type: image/jpeg Content-Length: 21542868 Last-Modified: Thu, 13 Apr 2017 20:55:07 GMT Connection: keep-alive ETag: "58efe5ab-148b7d4" OS-X-Request-ID: 1e7ae2cf83732e8859bc3e38df912ed1 Accept-Ranges: bytes ?wExifII>(i?Nl?0230??HH?? Cache Contents
  • 60. Thank You 60 https://www.nginx.com/blog/author/kjones/ @webopsx Kevin Jones Technical Solutions Architect NGINX Inc. https://www.slideshare.net/KevinJones62
  • 61. 61 Want more experience with NGINX caching? • Online Courses – university.nginx.com/instructor-led-training/nginx-plus- advanced-caching • NGINX Plus 30-Day Trial – nginx.com/free-trial-request

Editor's Notes

  1. Hello and thank you everyone for coming! I am very excited today to be speaking about High Availability Content Caching with NGINX.
  2. In today's presentation I will give a brief introduction to NGINX and also review the structure of its configuration files. I will showcase how caching with NGINX works and explain what additional processes it uses to manage the cache. I will then show how easy it is to enable basic content caching and give various use case examples throughout the webinar. I will dive into the concept of micro caching and explain how NGINX and micro caching can be combined to drastically speed up the performance of your web applications. We will then take a look at various NGINX architectures that can be used to increase both the availability and the size of your cache. Lastly we will show various tips and tricks that will help you set up NGINX so that you can quickly and easily troubleshoot caching within your own infrastructure.
  3. Since its public launch in 2004, NGINX has focused on high performance, high concurrency and low memory usage. Getting its start as a reverse proxy and static web server, it quickly proved itself as a powerful and effective open source tool.
  4. Today, NGINX’s wide variety of features and functionality make it an ideal tool to solve complexity within your infrastructure. NGINX is now not only a web server, but also can function as a versatile and dynamic reverse proxy, a high performing and efficient load balancer for HTTP, TCP and UDP traffic, a content cache server and a streaming media server for HLS, HDS, RTMP and other popular streaming protocols. NGINX’s architecture and modular design has made the software so effective for improving performance, reliability, and scale, that it has gained massive adoption on the web.
  5. Today that we know of there are 245 million sites running NGINX … and that number is growing.
  6. NGINX powers more than half of the top 10,000 busiest sites on the web, and we are the industry leader for application delivery among the busiest applications and sites in the world. We are also now the number one chosen web server among the top 100,000 busiest web sites.
  7. NGINX also powers over 40% of all AWS instances…
  8. And because the versatile and lightweight nature of NGINX we are currently one of the most popular and widely used repositories on Docker Hub with over 6.2 thousand stars and 10 million plus pulls!
  9. The configuration files for NGINX are very straightforward and easy to read, but I do want to take a few moments to review these for anyone new to NGINX or anyone who may just need a quick refresher.
  10. NGINX consists of modules which are controlled by directives specified within configuration files. These directives are divided into two types, simple directives and block directives. A block directive ends with a set of additional instructions surrounded by “Curly” brackets. If a block directive can have other directives inside brackets, it is called a context. As shown here we can see multiple types of contexts all the way from the main context to the location context.
  11. Within those contexts are the simple directives. In some cases simple directives can be used in various types of contexts so always be sure to check our documentation on NGINX.org to determine their compatibility.
  12. Additionally, these simple directives have parameters which define its configuration and in turn define how NGINX should behave when that directive is being used.
  13. One of the most useful features of NGINX is the ability to embed variables within its configuration files. There are a number of useful ways that variables can be used within the configuration to drastically enhance its functionality. A common example of using variables within the configuration is the NGINX log_format directive which allows you to write a specific list of variables into a specific access log. We will get into that a little further in the webinar.
  14. Also with NGINX you have the power to create variables on the fly using such directives as map or split_clients and then in some cases use those variables values as parameters later within the configuration. This gives NGINX a large amount of power and allows parts of its configuration to be dynamic in nature.
  15. Now lets take a look at how caching works and showcase how it can easily be enabled with only a few NGINX configurations.
  16. The idea of content caching is rather simple… A client makes a request for a resource… along the way the request passes through a proxy cache server, the proxy cache determines if the requested resource is already in its cache and if needed will reach out to the origin server to fetch the requested resource for the client. If the origin server sends a response to the proxy cache, the proxy will determine if it should cache the response based on both its configuration and any cache control headers that come back from the origin server. It will then serve the response to the client while also caching the resource and in turn any further requests for that file will be available in its cache for the set period of time defined within the configuration.
  17. It's important to know that by default, NGINX will comply with all cache control headers that are sent in the origin server's response. These cache headers tell NGINX various information about the resource such as… - how to cache the response and for how long - or perhaps a date and timestamp defining when the resource should expire - it could provide info on when it was last modified - or perhaps contain the ETag header which provides an identifier so that you can see if the version has changed since a previous cached request. In most scenarios these are set by the developer or application owner for a specific reason, but if needed they can be ignored within the NGINX configuration and later we will show how that can be done.
  18. Now that we understand the basic concepts of caching, enabling caching with NGINX is actually quite simple and only requires a handful of configurations to get started.
  19. First you will need to define the proxy_cache_path; this directive does exactly that: it defines the path on disk that NGINX will use to store its cached responses. There are a number of fine-tunable parameters that can be enabled for this directive, but for basic caching we can simply specify the path and the key memory zone name and size. You should also define a maximum disk space to allot for caching and set a global inactive time for unused resources as shown in the example so that NGINX can be efficient and clean up resources as needed.
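The parameters described here, written out as one illustrative line (sizes and timers are examples, not recommendations); per the NGINX documentation, a one-megabyte keys_zone holds roughly eight thousand keys:

```nginx
# path on disk, a two-level subdirectory layout, a 10 MB key/metadata
# zone, a 10 GB cap enforced by the cache manager, and removal of
# items not accessed for 7 days
proxy_cache_path /var/nginx/cache levels=1:2
                 keys_zone=my_cache:10m
                 max_size=10g inactive=7d;
```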
  20. The next directive, which configures the proxy cache key, has a default setting so it actually doesn't need to be defined; however, it is important to know how it is used from the NGINX perspective, and in some cases it makes sense to change this based on your application. The key by default is configured to use a combined string of variables, starting with the scheme variable (such as HTTP or HTTPS), the proxy_host variable which is the hostname or upstream name on the proxy_pass directive, and the request_uri variable which is the full request URI which also contains any URL arguments. When NGINX creates a key for a resource, this combined string of variables is hashed using an md5sum function; that key, along with other metadata about the resource, is placed in an NGINX shared memory zone so that the cached element can be located and served from disk quickly. In some cases you may find you want to modify this string of variables. As an example shown on this slide, you may want to add a unique cookie variable that relates to a specific user. Or perhaps you want to cache responses differently based on a specific user agent or other HTTP header contained in the request. There are many reasons why you might change this and they could be unique to each application you are caching. It is also important to know that this cache key can be defined in multiple parts of the NGINX configuration, therefore it is completely feasible to have unique keys for specific servers and/or specific locations. It all depends on what type of behavior you want to get from caching.
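The per-user and per-user-agent variations mentioned here might look like the following; the location names, cookie name and "backend" upstream are hypothetical:

```nginx
# cache account pages separately per session cookie
location /account {
    proxy_cache cache;
    proxy_cache_key $scheme$proxy_host$request_uri$cookie_sessionid;
    proxy_pass http://backend;
}

# cache responses separately per User-Agent header
location /mobile {
    proxy_cache cache;
    proxy_cache_key $scheme$proxy_host$request_uri$http_user_agent;
    proxy_pass http://backend;
}
```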
  21. The proxy_cache configuration is rather simple. It defines the specific proxy cache shared memory zone that should be used to cache the resource. You can specify a different memory zone within different servers and or locations. In the example we show that you can define a separate cache for video and a separate cache for images.
  22. Lastly, we should tell NGINX what kind of responses we want to cache and for how long. In the example we cache responses of any type for a period of one day. It's very customizable. The any parameter can be replaced with specific HTTP status codes, and the time defined can be seconds, days, weeks, months or even years.
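A sketch of how granular the lifetimes can get; the values are illustrative:

```nginx
proxy_cache_valid 200 12h;        # successful responses for 12 hours
proxy_cache_valid 301 302 10m;    # redirects for 10 minutes
proxy_cache_valid 404 1m;         # not-found responses for 1 minute
proxy_cache_valid any 1s;         # everything else for 1 second
```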
  23. That's it! Pretty straightforward so far: we have a basic caching configuration with only three additional settings… In the example, we have a 100 gigabyte cache that has an inactive timer of 7 days, we are caching all requests that begin with URI /images and have a response of 200, 301, or 302, and we will cache these responses for a max duration of 12 hours.
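That description, reconstructed as a single sketch (the paths and the "origin" upstream name are illustrative):

```nginx
# 100 GB cache; items evicted after 7 days without a hit
proxy_cache_path /var/nginx/cache levels=1:2 keys_zone=image_cache:10m
                 max_size=100g inactive=7d;

server {
    listen 80;

    # cache 200/301/302 responses under /images for up to 12 hours
    location /images {
        proxy_cache image_cache;
        proxy_cache_valid 200 301 302 12h;
        proxy_pass http://origin;
    }
}
```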
  24. So to recap the process from an NGINX perspective, a client requests a resource. When the request comes through NGINX, the proxy_cache_key is hashed and NGINX determines whether that key already exists within the defined memory zone. If it does not, it will send the request to the origin server. The origin server will then send its response back to NGINX. Then, based on NGINX's configuration in combination with the origin server's response headers, it will create the cache key if needed and cache the response to disk while also serving that request to the client. *slight pause…..*
  25. Once caching is enabled you will notice there are two additional processes which you may not be familiar with. The first is the cache manager process, which is activated periodically to check the state of the cache. If the cache exceeds the size or is considered inactive the resources will be removed from the cache. The second process, is the cache loader process which runs only once after NGINX starts to load the metadata about previously cached resources into the shared memory zones. The larger the amount of cache items, the longer this will run.
  26. Something also commonly overlooked is the fact that caching with NGINX is not only restricted to HTTP; NGINX also has proxy cache directives that support FastCGI, UWSGI and SCGI. If required, NGINX can also be configured to pull resources directly from memcached.
  27. So far we have covered the basics of content caching. In this next part of the webinar we will talk about the concepts of micro caching and showcase how it can be quickly and easily set up with NGINX.
  28. There are different types of content that are commonly proxied through NGINX… there is static content that is easily cacheable, such as Images, CSS, Javascript, or simple HTML. Then there is user-specific content on the far right of this diagram that tends to be less cacheable, such as shopping cart data, account data or data that is unique and specific to a user. But then there is this content that kind of sits in the middle… it might be dynamic in nature, such as blog posts, metrics or possibly API calls; in most cases these resources can't be cached for long, however if you are receiving a high amount of load then NGINX can be used to cache the responses for a very short period and drastically increase the performance of your application. This is where micro caching can really make a difference.
  29. Setting up micro caching is very easy and only involves a handful of settings… the first thing you want to do is enable keepalives between NGINX and the upstream server. This will increase speed and cut down on latency by removing the need to repeatedly open and close a new connection to your origin server. Next we need to set a very small cache size and short inactive timer on the proxy_cache_path directive. Since the items in cache will only need to be there for a short period of time we can set this very low. Then we can simply enable caching for 1s on all responses for the entire virtual server by placing it outside of a location and inside of a server block. It is also important to know that you will need to set the proxy_http_version to 1.1 if you plan on using keepalives. You will also need to clear the Connection header as shown in the example so that NGINX will keep the connection open between NGINX and the origin server. That's it, you now have basic micro caching enabled.
  30. However, there are still some final touches we can add…
  31. Another new and extremely useful directive is proxy cache background update. This directive will tell NGINX to start a background subrequest to update an expired cache item, while a stale cache response is returned to the client. Additionally you should set proxy cache lock to on… which we will explain next, and also set the proxy cache use stale directive to "updating" so that any requests for that resource during the subrequest update will automatically receive that stale version and not result in duplicate requests to the origin server.
  32. The proxy_cache_lock directive mentioned in the previous slide tells NGINX to only allow one request for a given cache element to be populated at any given time and If for some reason another request does come in for the same element, NGINX will either wait for the response to appear in the cache or for the element to be released and it will try to be repopulated. If you would like to fine tune the timeout and lock age you can do so using proxy cache lock age and proxy cache lock timeout. By default these are both set to 5 seconds.
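With the tuning knobs written out (the 5-second values shown are the documented defaults; the location and "backend" upstream are illustrative):

```nginx
location / {
    proxy_cache cache;

    # only one request at a time may populate a given cache element
    proxy_cache_lock on;

    # if populating hasn't finished after 5s, let one more
    # request through to try to populate the element
    proxy_cache_lock_age 5s;

    # requests that have waited this long go to the origin directly,
    # but their responses are not cached
    proxy_cache_lock_timeout 5s;

    proxy_pass http://backend;
}
```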
  33. Also previously mentioned is the proxy cache use stale directive which tells NGINX to serve a stale response under certain conditions. For example you could set it to serve stale if for any reason NGINX is throwing a general error or a specific error. Or perhaps the upstream server is timing out or NGINX is in the process of updating its resource. This gives you the ability to make your cache data highly available in the case of a server catastrophe.
  34. With those three final directives, we now have the final touches on our micro caching configuration.
  35. Now that you have a grasp on how to enable basic caching and micro caching, I want to take a closer look at some of the other cache directives that allow you to fine-tune your cache even more.
  36. There might be situations where you want NGINX to revalidate the version that it already has in its cache. This can be particularly useful if you want to save on bandwidth to your origin server and also on disk writes to your cache server. In order to tell NGINX to revalidate the cached resource you can simply turn proxy cache revalidate to “On” and NGINX will make a conditional GET request with either the if-modified-since or the if-none-match headers to determine if the version it already has in cache has been modified. The if-modified-since header that NGINX sends contains the actual date and time stamp to compare with the origin servers last-modified header. … and the if-none-match header that NGINX sends contains an unique identifier which will be compared to the origin servers ETag header to validate if the version of the file has changed.
  37. In some cases you may find that you are caching files that are not accessed often enough to merit a cache. Caching such resources adds unneeded disk latency and takes up space. To help with this NGINX has the proxy cache min uses directive. Setting this to any number above the default value of "1" will tell NGINX to only cache the resource once its request counter reaches that number. In the example any requests containing the string /legacy will require 5 requests before populating a cached element.
  38. It's also important to know that by default only GET and HEAD requests are cached by NGINX; if for some reason you do want to cache other methods you will need to add them using the proxy_cache_methods directive. Keep in mind, if the response is different for various methods you also may want to consider adding the "$request_method" variable to the proxy cache key so that NGINX will treat the requests as separate cached elements.
  39. As mentioned previously in this webinar, NGINX by default will honor all Cache-Control headers sent in the origin server's response, therefore NGINX will not cache responses with Cache-Control set to Private, No-Cache, No-Store or with Set-Cookie in the response header. In some cases the origin server's response headers may be out of your control, therefore if needed you can use the proxy_ignore_headers directive to completely ignore any headers with a specific name.
  40. Another powerful feature of NGINX is the ability to bypass the cache on a case-by-case basis. If you have a client that wants to connect through the proxy layer but doesn't want to use the content cache for a specific resource, you can configure the proxy_cache_bypass directive. Once configured, if at least one value of the parameters is not empty and is not equal to "0" then the response will not be taken from the cache and will instead be fetched directly from the origin server. In the example given NGINX will check for any requests containing either a cookie, URL argument or HTTP request header named "nocache" which is not empty and is not equal to "0". If found NGINX will bypass the cache and fetch the response directly from the origin server.
  41. NGINX Plus extends the content caching capabilities of NGINX by adding support for cache purging, which is useful if you have a requirement to invalidate the cache in real time. The configuration checks to see if a string exists that is not empty and is not equal to "0". If found during the request, NGINX will completely remove that cache key from its memory zone. When a new request comes through for that resource it will be repopulated.
  42. Setting up the cache purge API is rather easy… We use the map directive, which allows us to inspect the request method variable during a request and check if the value is equal to the string "PURGE"; if it does match, we set a new variable with the name "$purge_method" with a value of "1". Then later in the configuration we can use that variable dynamically to determine if we need to purge the cache.
  43. Now that we have covered how to enable basic caching and the concepts and methods for micro caching, we can now take a look at high availability caching with NGINX. High availability is a very critical component when architecting a content cache layer with NGINX. In most cases the whole purpose of creating a cache layer is to save on hits to your origin servers and maintain availability; this can make or break your application uptime and reliability.
  44. There are two approaches to architecting high availability with NGINX. One is creating a sharded cache that provides your cache layer with high capacity. This is commonly important when dealing with large cached elements or an extremely high number of them. The second approach is shared, which should be used when you need to maintain absolute 100% uptime and cannot risk any downtime to your caching layer.
  45. In a shared cache cluster each NGINX instance has a duplicate of each cache element. The idea is quite simple… The client makes a call to the cache layer… which already has some sort of networking-layer high availability. The HA cluster should be using an active/passive configuration. This can be accomplished using either VRRP or some other L4 load balancer between the client and the NGINX cluster. If you are an NGINX Plus customer we have an NGINX-HA-keepalived package available that can be used to easily set up an HA solution using VRRP.
  46. Once configured if one particular NGINX instance goes down they both are guaranteed to have a copy of the cache element. The failover is all handled within the NGINX configuration. Lets take a look at both the primary and the secondary NGINX cache server configuration to see how this is accomplished.
  47. The primary server is configured to always attempt to reach the secondary NGINX instance first… if the secondary NGINX server is down for any reason it will fail over automatically to the origin server, which is set as a backup.
  48. Then the secondary server is configured much like a typical proxy cache server reaching directly to the origin.
  49. There may be cases where you need to have a larger sharded cache; the goal here is capacity combined with high availability. In this configuration you need two NGINX layers, an LB tier and a cache tier. The LB tier is configured to use the hash load balancing algorithm. The hash load balancing parameter is configured to use the proxy cache key. This way each server on the cache tier has a unique cache element, and if one of the NGINX cache servers goes down you only lose a percentage of the content cache. In a 4-server cache tier, 1 instance going down could potentially cause a 25% loss in cache depending on your hit ratio. The higher the number of NGINX cache servers, the larger your cache becomes and the more highly available it is.
  50. As mentioned before we use the entire proxy cache key string as the hash load balancing parameter. This allows requests for specific cache keys to be unique on each proxy cache server.
  51. It is possible to combine the load balancer and cache tier; however, it does make the configuration a little more confusing and can be a little more difficult to manage. In this scenario, requests coming into the cluster are distributed using either Layer 4 load balancing, RR DNS, or possibly VRRP. Then from there the traffic is internally load balanced and cached. We won't get into the examples for this configuration here but it is entirely possible to accomplish with NGINX.
  52. Lastly we can create an even more highly available cache configuration, similar to the sharded cache; however, in this scenario we can also enable caching on the load balancer tier. This way you can create a special cache for resources that are accessed more often or for responses that need to be less latent.
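One hedged way to configure that hot cache on the load-balancer tier (zone sizes, lifetimes and host names are illustrative); the sharded cache tier behind it is addressed exactly as in the earlier hash load balancing example:

```nginx
# small, short-lived cache local to the load balancer
proxy_cache_path /var/nginx/hot_cache levels=1:2
                 keys_zone=hot_cache:10m max_size=1g inactive=60s;

# consistent-hash shards, as on the earlier slide
upstream cache_servers {
    hash $scheme$proxy_host$request_uri consistent;
    server prod.cache1.host;
    server prod.cache2.host;
}

server {
    listen 80;
    location / {
        # hot items are answered here for up to 10 seconds before
        # falling through to the sharded cache tier
        proxy_cache hot_cache;
        proxy_cache_valid 200 10s;
        proxy_pass http://cache_servers;
    }
}
```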
  53. Now lets review some tips and tricks that will help you get started on the right foot when setting up and troubleshooting caching with NGINX.
  54. The first thing I always find useful is good logging. Logging is your friend: if you are trying to troubleshoot the cache, or want to help clients or developers troubleshoot it, it is good practice to customize the log format to contain the information you need. I always recommend adding the request ID variable to the log format; it is a unique identifier that NGINX generates for every single request. We can also expose this variable later as an HTTP response header, so we can quickly search the logs when troubleshooting the cache. I also always recommend logging the proxy cache key. There are scenarios where you will have multiple proxy cache keys, so be sure to name each key differently if you have more than one; in the example here we set the key PCK to the string of the default proxy cache key. Lastly, I always log the upstream cache status variable, which tells me precisely how NGINX behaved when it served the request.
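A sketch of such a log format (placed in the http context); the format name and the rid/pck/ucs field labels are my own, but $request_id, the default cache key variables, and $upstream_cache_status are standard NGINX variables:

```nginx
log_format cache '$remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent '
                 'rid=$request_id '
                 'pck="$scheme$proxy_host$request_uri" '
                 'ucs=$upstream_cache_status';

access_log /var/log/nginx/access.log cache;
```

Note that $request_id requires NGINX 1.11.0 or later.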
  55. I also always add two HTTP response headers to my responses. First, I add the request ID variable I mentioned earlier, so that if clients come to me later they can report the exact request ID, and in turn I can quickly locate the request in the NGINX log file. Second, I always add the upstream cache status, so that I can see NGINX's cache behavior in Chrome developer tools or from cURL without needing to dig up the access logs. This is very useful when testing, and it also gives your clients or the application's developers insight into NGINX's behavior.
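The two headers might be added like this; the X-Request-ID and X-Cache-Status header names are my own choice, not mandated by NGINX:

```nginx
location / {
    proxy_cache my_cache;          # zone name is an assumption
    proxy_pass http://api-backends;

    # "always" emits the headers on error responses too
    add_header X-Request-ID   $request_id             always;
    add_header X-Cache-Status $upstream_cache_status  always;
}
```

With this in place, `curl -I` against any cached resource will show both headers in the response.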
  56. In this example, after configuring the add_header directives, I can use curl with -I (dash and capital "I"), which shows only the HTTP response headers and lets me quickly identify exactly how NGINX is behaving for a particular resource.
  57. If you know the request ID and have properly configured the log format, you can locate the request in the logs easily using grep. Once you have the full log line, you can extract the string that was used for the proxy cache key and hash it yourself using the md5sum program. Once you have the hash, you can easily locate the cached element sitting on disk: the MD5 hash of the key string is identical to the file name.
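A sketch of that workflow from the shell; the request ID value, the rid=/pck= log fields, and the example key string are illustrative assumptions matching the log format sketched earlier:

```shell
#!/bin/sh
# Step 1 (illustrative, not run here): find the log line by request ID:
#   grep 'rid=444535f9378a3dfa1b8604bc9e05a303' /var/log/nginx/access.log

# Step 2: hash the cache key string from that line. The default key is
# $scheme$proxy_host$request_uri concatenated, e.g. for
# http://example.com/image.jpg the string is:
key='httpexample.com/image.jpg'

# md5sum prints "<hash>  -" on stdin input; keep only the hash field
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
echo "$hash"

# Step 3: the cached file on disk is named exactly this hash; with
# levels=1:2 it lives at:
#   <cache_path>/<last hash char>/<next two chars>/<hash>
```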
  58. You can also take a closer look at the actual cached resource on disk; you will notice it has the cache key string placed at the very top of the file, so you can use grep to locate the cached resource as well.
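A runnable sketch of searching the cache by key string. Real cache files live under your proxy_cache_path directory (e.g. /var/cache/nginx); here a temporary directory simulates one cached file so the commands work anywhere:

```shell
#!/bin/sh
# Simulate one cached file; in a real cache the levels=1:2 subdirectories
# and file name come from the MD5 hash of the key
cache=$(mktemp -d)
mkdir -p "$cache/9f/6a"
printf 'KEY: httpexample.com/image.jpg\n' > "$cache/9f/6a/cachedfile"

# The cache key sits at the top of each cached file, so a recursive grep
# for the key string lists the matching file(s):
grep -r -l 'KEY: httpexample.com/image.jpg' "$cache"

rm -rf "$cache"
```

Against a live cache you would simply point the same grep at the real cache directory.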
  59. At this time I would love to take any questions you might have. If you do have questions, you can submit them using the Q&A panel.
  60. Thank you everyone for coming!