2. me
scribbling and doodling on *IX operating systems for 10+ years
working as sysadmin/DevOps with a hint of networking
tools of the trade
nginx apache tomcat (ugh!)
python php java (ugh!)
postgresql mysql
memcached redis mongo
not a developer, sorry :)
3. before we begin
know your environment
server specs (cores, memory, bandwidth)
virtualenv or global
know your application
database pooling agents and latencies
python version requirements (!)
Linux kernel tunables
limits.conf (file descriptors)
net.ipv4.tcp_tw_reuse (not tcp_tw_recycle!)
kernel.shmmax
net.ipv4.ip_local_port_range
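A sketch of the tunables above as they might appear in /etc/sysctl.conf and /etc/security/limits.conf; all values and the user name are illustrative, not recommendations - size them to your workload.

```
# /etc/sysctl.conf -- illustrative values only
net.ipv4.tcp_tw_reuse = 1               # reuse TIME_WAIT sockets (never tcp_tw_recycle)
net.ipv4.ip_local_port_range = 1024 65535
kernel.shmmax = 68719476736             # max shared memory segment size, in bytes

# /etc/security/limits.conf -- raise the file descriptor limit for your app user
www-data  soft  nofile  65536
www-data  hard  nofile  65536
```

Apply the sysctl changes with `sysctl -p`; the limits take effect on the next login/session of that user.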
4. what it is
full stack for building hosting services
pluggable architecture
mostly used as application server for Python apps using WSGI
“WSGI is the Web Server Gateway Interface. It is a specification that
describes how a web server communicates with web applications, and how
web applications can be chained together to process one request.”
rules
versatility
performance
low-resource usage
reliability
developed by @unbit (Italy)
awesome docs
5. installation
PyPI
most recent versions, maintained by devs
pip install uwsgi
use virtualenv when possible
watch out for dependencies - check what you actually need; the
compile errors/warnings are very descriptive
distro-packaged versions are *always* outdated
“from source” - please avoid :)
6. configuration
choice of xml, ini, command-line arguments etc.
read the docs carefully
don’t reinvent the wheel
"Please, turn on your brain and try to adapt shown configs to your
needs, or invent new ones."
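A minimal ini config as a starting point (the talk's demo uses xmlconfig; ini is equivalent). All paths, the module name, and the process count below are placeholders to adapt, per the quote above.

```ini
[uwsgi]
; illustrative skeleton -- adapt every value to your project
chdir        = /home/user/myapp         ; hypothetical project directory
module       = myapp.wsgi:application   ; hypothetical WSGI callable
home         = /home/user/venv          ; virtualenv to activate
master       = true
processes    = 4
socket       = /home/user/uwsgi/uwsgi.sock
chmod-socket = 660
vacuum       = true                     ; remove the socket on exit
```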
8. start it up
activate the virtualenv
uwsgi --xmlconfig /home/user/uwsgi/uwsgi.xml
configure and start nginx
location / {
    include uwsgi_params;
    uwsgi_pass unix:/home/wmaster/uwsgi/uwsgi.sock;
}
ta-daa!
tip: a Unix socket-based pass doesn’t consume TCP ports = less
SYSCPU time on your server; you can use TCP, but check your kernel
tunables first! (tcp_tw_reuse, ip_local_port_range)
tip2: you can also use mod_uwsgi and apache
9. cool conf stuff (I)
harakiri (timeout)
every request is timestamped
if the master process finds a worker whose request has been running
longer than the specified timeout, it kills that worker
logs: “F*CK !!! i must kill myself (pid: 9984 app_id: 0)”
reload-on-rss
gracefully reload a worker after the memory consumption of the worker
goes above this threshold
you’re alive even if you have a nasty memory leak
max-requests - gracefully reload a process after n requests
max-worker-lifetime – gracefully reload a process after n seconds
logs: “...The work of process PID is done. Seeya!"
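The self-healing options above, combined into one ini fragment; every threshold below is an example, not a recommendation - tune them against your app's real behavior.

```ini
[uwsgi]
; example thresholds -- measure your app before trusting any of these
harakiri            = 30    ; kill a worker stuck on one request for > 30s
reload-on-rss       = 256   ; gracefully reload a worker above 256 MB RSS
max-requests        = 5000  ; gracefully recycle a worker after 5000 requests
max-worker-lifetime = 3600  ; gracefully recycle a worker after one hour
```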
10. cool conf stuff (II)
reload-mercy
wait for n seconds for worker to die during reload/shutdown
touch-reload
reload your app by touching a single file (your main project file)
attach-daemon
attach-daemon = memcached -p 11911 -u user
when uWSGI is stopped or reloaded, memcached is destroyed
smart-attach-daemon
smart-attach-daemon = /tmp/memcached.pid memcached -p 11911 -d -P
/tmp/memcached.pid -u user
memcached survives
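The reload and daemon options from this slide sketched together in ini form; the touch file path, pid file, and user are placeholders.

```ini
[uwsgi]
reload-mercy  = 8                           ; give workers 8s to die on reload/shutdown
touch-reload  = /home/user/myapp/reload.me  ; touch this file to reload the app
; dies together with uWSGI:
attach-daemon = memcached -p 11911 -u user
; survives uWSGI reloads (tracked via its pidfile):
smart-attach-daemon = /tmp/memcached.pid memcached -p 11911 -d -P /tmp/memcached.pid -u user
```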
11. production settings
processes
“There is no magic rule for setting the number of processes or threads to use.
It is very much application and system dependent. Simple math like
processes = 2 * cpucores will not be enough. You need to experiment with
various setups and be prepared to constantly monitor your apps.”
threads
“If you need threads, remember to enable them with enable-threads.”
start with: processes = n, threads = 1
multi-threading should be thoroughly tested! (debugging is very hard)
rule of thumb for avoiding OOM when using threads = 1:
available memory >= number of processes * reload-on-rss value
suspect everything, test everything, log everything
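As a worked example of the rule of thumb above (all numbers hypothetical): with 8 processes and reload-on-rss = 512 MB, budget at least 8 * 512 MB = 4 GB of memory for uWSGI workers alone.

```ini
[uwsgi]
master         = true
processes      = 8      ; a starting point -- experiment and monitor
threads        = 1      ; single-threaded workers are easiest to reason about
enable-threads = true   ; required if the app itself spawns threads
reload-on-rss  = 512    ; 8 workers x 512 MB = plan for ~4 GB RSS
```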
12. performance testing
collect the URLs your application delivers
run siege / apachebench while keeping an eye on your
monitoring/logging
hardware is cheap
buy more memory
get a faster CPU
scaling up is always easier than optimizing your app
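A sketch of load-test invocations with apachebench and siege; the URL, concurrency levels, and duration are placeholders to adapt.

```
# apachebench: 10,000 requests, 100 concurrent, against one URL
ab -n 10000 -c 100 http://example.com/some/url/

# siege: 50 concurrent users for 2 minutes, URLs taken from a file
siege -c 50 -t 2M -f urls.txt
```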
13. production deployment
keep a single project file
touch it when updating
art of graceful reloading
https://github.com/unbit/uwsgi-docs/blob/master/articles/TheArtOfGracefulReloading.rst
“Some applications or frameworks (like Django) may load the vast
majority of their code only at the first request.” – causes timeouts if
you have high requests/second rate
keep multiple application servers with reverse-proxy (or LB in front)
update one server at a time, warm it up with siege/ab
14. monitor all the things!
New Relic
uwsgitop
Sentry
Cacti
dstat, iostat, htop etc.