Nginx, pronounced "Engine X", is an open-source, high-performance web server and reverse proxy that supports protocols such as HTTP, HTTPS, SMTP, and IMAP. It can also be used for load balancing and HTTP caching.
2. Agenda
● What is Nginx
● Why use Nginx
● Installing/running Nginx
● Nginx process model
● High availability
● Understanding configurations
3. What is Nginx
● Pronounced as “Engine X”
● Open source web and reverse proxy server
● High-performance HTTP, HTTPS, SMTP, IMAP, POP3 server
● Load balancing and HTTP caching
● Asynchronous event-driven architecture
5. Why use Nginx
● Lightweight with small memory footprint
● Uses predictable memory under load
● Provides high level of concurrency
● Serves static content quickly
● Handles connections asynchronously
● Uses single-threaded worker processes
7. Starting/restarting Nginx
● Check that Nginx is running
sudo service nginx status
● Starting, stopping and restarting Nginx
sudo service nginx start
sudo service nginx stop
sudo service nginx restart
11. Child process
● Worker is single threaded
● One worker process per CPU core
# directive
worker_processes auto;
● Communicate with each other using shared memory
● Handles multiple connections asynchronously
● Polls for events on listen & connection sockets
12. Child process
● Events on listen sockets start new connections
● Events on connection sockets handle subsequent requests
● Connections are submitted to a state machine
HTTP
Stream
Mail (SMTP, IMAP and POP3)
Web server
Created by Igor Sysoev in 2002 to address the C10K problem
Draw comparisons with the thread-per-request model
Take a scenario in which a server transmits 100 KB of information almost immediately. On the client side the connection is relatively slow, say 10 KB/s, so it takes roughly 10 seconds for the client to receive the 100 KB, and during that time the connection stays alive. Now imagine each connection uses 1 MB of memory on the web server: for 1,000 clients that is 1,000 MB (almost 1 GB) of memory, which is a lot! A web server should therefore be able to scale to a growing number of simultaneous connections without its memory use growing linearly.
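The arithmetic in this scenario can be checked with a quick sketch (the numbers are the illustrative figures from the note above, not measurements):

```python
# Illustrative figures from the scenario (not measured values)
response_size_kb = 100      # server sends 100 KB per response
client_speed_kbps = 10      # client downloads at 10 KB/s
mem_per_conn_mb = 1         # assumed memory cost per open connection
clients = 1000

# How long each slow client keeps its connection alive
transfer_time_s = response_size_kb / client_speed_kbps

# Total server memory if every connection costs 1 MB
total_memory_mb = clients * mem_per_conn_mb

print(transfer_time_s)   # 10.0 seconds
print(total_memory_mb)   # 1000 MB, almost 1 GB
```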
PPA → Personal Package Archive
Master process initializes the worker processes with the nginx configuration
ps -ef --forest | grep nginx
ps (PROCESS STATUS)
-e displays all processes
-f adds full details
--forest displays hierarchy
New incoming connections trigger events
Each new connection creates a new file descriptor and uses small amount of memory
A state machine is a set of instructions that tells NGINX how to process a request
Connections are processed in a highly efficient run-loop inside a limited number of single-threaded processes called workers. Within each worker, nginx can handle many thousands of concurrent connections and requests per second.
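The run-loop described above can be sketched in miniature with Python's selectors module, which uses epoll on Linux. This is a toy single-threaded echo loop to illustrate the listen-socket/connection-socket event split, not nginx's actual implementation:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux

# Listening socket: an event here means "new incoming connection"
lsock = socket.socket()
lsock.bind(("127.0.0.1", 0))  # any free port
lsock.listen()
lsock.setblocking(False)
sel.register(lsock, selectors.EVENT_READ, data="listen")

# A client, driven from the same single thread for the demo
client = socket.create_connection(lsock.getsockname())
client.sendall(b"hello")

echoed = None
while echoed is None:
    for key, _ in sel.select(timeout=1):
        if key.data == "listen":
            # Event on the listen socket starts a new connection
            conn, _ = key.fileobj.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, data="conn")
        else:
            # Event on a connection socket handles the request
            msg = key.fileobj.recv(1024)
            key.fileobj.sendall(msg)  # echo it back
            sel.unregister(key.fileobj)
            key.fileobj.close()
            echoed = client.recv(1024)

client.close()
lsock.close()
sel.close()
print(echoed)  # b'hello'
```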
Areas within {} are called contexts
Directives can only be used in the contexts for which they are designed
The events context contains directives that define how worker processes handle connections. The connection processing method is selected automatically based on the most efficient one available for the platform: epoll on Linux
worker_connections sets the maximum number of simultaneous connections that can be opened by a worker process. Note that this should not exceed the maximum open-files limit
sendfile copies data directly from one file descriptor to another, removing the need to copy data through a userspace buffer; this helps when serving static files
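The syscall behind this directive can be demonstrated with Python's os.sendfile (a Linux-oriented sketch; nginx calls the syscall from C, and the socketpair here merely stands in for a client connection):

```python
import os
import socket
import tempfile

data = b"static file contents " * 50

with tempfile.NamedTemporaryFile() as src:
    src.write(data)
    src.flush()

    # A connected socket pair stands in for a real client connection
    server_side, client_side = socket.socketpair()

    sent = 0
    while sent < len(data):
        # Kernel copies file -> socket directly; no userspace buffer
        sent += os.sendfile(server_side.fileno(), src.fileno(),
                            sent, len(data) - sent)
    server_side.close()

    received = b""
    while chunk := client_side.recv(4096):
        received += chunk
    client_side.close()

print(received == data)  # True
```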
tcp_nodelay disables Nagle's algorithm. Nginx uses tcp_nodelay on keepalive connections
tcp_nopush, which activates TCP_CORK on Linux, blocks output until a packet reaches a minimum size before sending
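The directives covered in these notes might appear together in an nginx.conf roughly like this (the values are illustrative, not tuned recommendations):

```nginx
worker_processes auto;          # one worker process per CPU core

events {
    worker_connections 1024;    # per worker; keep below the open-files limit
    # the processing method (epoll on Linux) is auto-selected
}

http {
    sendfile    on;             # kernel-level file -> socket copies
    tcp_nopush  on;             # TCP_CORK: wait for full packets before sending
    tcp_nodelay on;             # disable Nagle's algorithm on keepalives
}
```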