Docker is the latest hotness in the deployment automation space, and opens a whole new world of opportunities in how we bundle, deploy and manage our running apps.
16. Origins
• Google circa 2007
• Linux cgroups (control groups) (resource limits)
• Linux namespaces (resource isolation)
• Docker circa 2013
• Layered virtual filesystem
• One-stop shop encapsulating many Linux kernel features
#DV14 #Docker4Fun @cquinn
20. Universal Deployable Artifact
• Complete: Everything the app needs is in the artifact.
• Small: The artifact is small enough to be easily managed.
• Immutable: The contents of the artifact can’t change.
• Universal: The artifact can run on any Linux host.
• Deployable: The artifact can actually be run directly, without being unpacked or installed.
21. Image Sharing
• Universal Images are Easy to Share
• https://hub.docker.com/
25. Docker Environment on Mac
• boot2docker
• and/or: brew install docker
• Installs VirtualBox with a tiny Linux VM that runs Docker
• Docker cmdline client runs on Mac
26. Docker Environment on Windows
• boot2docker
• Installs VirtualBox with a tiny Linux VM that runs the Docker daemon
• May have to shell into the VM to work
• (I have no direct experience)
29. Client / Daemon Communication
• Clear vs TLS
• Boot2docker now defaults to TLS
• Can switch to clear
• /var/lib/boot2docker/profile : DOCKER_TLS=no
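For reference, the profile setting above might look like this inside the boot2docker VM (a sketch; the file path and variable are as cited on the slide, the comment is an assumption):

```
# /var/lib/boot2docker/profile (edit inside the VM, e.g. via boot2docker ssh)
DOCKER_TLS=no
```

After editing, the Docker daemon inside the VM needs to be restarted for the change to take effect.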
36. The Docker Daemon
• Uses the same binary as the cmdline client
• Runs on init or as needed
• Does all the work
37. The Docker Daemon
• Uses libcontainer to talk to Linux kernel
• Starts process group for container
• Creates namespaces for process group
• Creates cgroups for resource quotas
• Controls network access, port mapping
• Controls volume mounting
39. Docker Daemon REST API
• The Docker daemon exposes a REST API (JSON over HTTP)
• See: https://docs.docker.com/reference/api/docker_remote_api/
• Version 1.15
• Normally this is over a local unix socket, but can go over tcp as well.
40. Talk to the Docker Daemon
http http://192.168.59.103:2375/v1/_ping
http http://192.168.59.103:2375/v1/version
http http://192.168.59.103:2375/v1/info
http http://192.168.59.103:2375/images/json?all=0
http is HTTPie, a fancy curl
https://github.com/jakubroztocil/httpie
42. Images, Registries and Containers
• Image is the package of bits (you might think of this as the container, but that’s not exactly right)
• repository (think git repo)
• tag
• ID
• Registry is the repository of images
• Container is a running self-contained process group
• Dockerfile is the Makefile for Docker images
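As a sketch of that Makefile analogy, a minimal Dockerfile might look like this (hypothetical app; the binary name and port are assumptions):

```dockerfile
# Base OS image, referenced by repository:tag
FROM ubuntu:14.04

# Bake the app binary into the image's layered filesystem
COPY webhello /usr/local/bin/webhello

# Port the app listens on inside the container
EXPOSE 8080

# Process to start when the container runs
CMD ["/usr/local/bin/webhello"]
```

Building it with `docker build` produces an image identified by a repository, tag, and ID, which can then be pushed to a registry.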
76. Extra Credit
• Can also hook up InfluxDB + Grafana
• http://influxdb.com/
• http://grafana.org/
• Or use Heapster across a cluster
• https://github.com/GoogleCloudPlatform/heapster
78. Clustering with Docker
• Docker containers are black boxes
• Config goes into args & env.
• Functional I/O is on network ports.
• The system needs to solve:
• configuration delivery
• dynamic service addressing
88. fleet cloud-init
coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/b6efb8e37cfaafbabaeeca4392d74909
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
95. Kubernetes
• Google’s next-generation “lmctfy” for Docker
• https://github.com/GoogleCloudPlatform/kubernetes
• Available on GCE
101. Learn what Docker is all about and how to get started working with it.
During this university, you will learn how to get Docker installed and get started using it
to build and run your own containers. We'll take Docker apart and see how it works
under the hood. Then we'll zoom out and experiment with Fleet and Mesos –
interesting technologies built upon Docker for deploying containers to clusters of
machines. All the while, we'll talk about how this new technology is poised to radically
change how we think about deployment.
Editor’s notes
Self-contained complete app, or mini-machine ready to run.
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications.
Encapsulation like a VM image
Lean and mean like a tar ball
There are many different kinds of apps, tools, and libraries for writing apps, and these come in many flavors, each with different deployment needs: which system libraries they need, which language runtimes, and so on. And often these libraries can have conflicting versions and other interactions that can be extremely difficult to deal with.
At the same time, there are different machine and infrastructure types. Development machines, VMs, internal datacenter or partner / customer datacenter, not to mention all the kinds of public, private and hybrid clouds.
If you lay out the software variables on one axis, and the hardware ones on the other, you get a huge nasty matrix. And there’s a lot of work to do every time you add one thing to either axis.
And there isn’t always a clear separation of responsibility for what the app developers and the devops folks are responsible for.
Back in the day, the cargo industry had the same situation. Every kind of cargo was packed differently and had unique handling requirements. And each leg of the journey for the cargo had its own unique characteristics and cargo equipment.
They too had a huge matrix problem. Moving goods from one place to another required knowing the exact route that the goods would take, and a negotiation with every provider on the way.
That was eventually solved by the introduction of the intermodal shipping container. Really very simple: it was just a big metal box that could be loaded up with anything. The sizes were standardized, as was the way they stacked and how they were picked up by cranes. Someone shipping goods from point A to point B now didn’t have to know what modes of transportation were used, and in fact they could change based on needs and prices.
Side note: the shipping container was not a big-bang invention, and not a committee-developed standard. It was an ad-hoc adaptation of a de-facto standard, mostly introduced and pushed by one iconoclast entrepreneur, Malcom McLean, who wanted to offer an end-to-end shipping solution.
Book: http://www.amazon.com/The-Box-Shipping-Container-Smaller/dp/0691136408
Now in the software world we have the same solution. A container that can hold all kinds of our software cargo, and that can be deployed on any kind of hardware.
And now the matrix looks a whole lot easier. Instead of an N*M problem, it is just N+M.
Virtualization
Sits on top of a machine abstraction of some kind
Usually have an entire OS in an image
Since the machine is big, often multiple apps are still deployed to each
Containerization
Sits on top of a Linux host OS and shares the kernel. Very fast.
This host can be very minimal.
Container has its own copy of all the libs and other files that the deployed app will need.
As apps are built and rebuilt, many shared components can be packaged separately and only the actual app bits updated incrementally.
Docker doesn’t invent any single major thing, but it does package some existing tech in a very easy to use bundle that sets a de-facto standard. Just like the real shipping container.
Certainly this sounds cool, but what’s the big deal?
We already have tools that can deal with (most of) this deployment already.
And a lot of this tech is not new.
Developers can package complete artifacts for their apps.
Deployment systems now have a standard artifact to deploy.
How many in the audience develop on Mac? Windows? Linux?
Here’s a picture of what this boot2docker inception looks like
[TODO: redo this picture to show Docker client]
Images are actually built in layers. Like ogres. Or parfaits.
Best practices in keeping these layers clean and small:
base image
install updates
install common packages
install app
Also, note the difference between the Host OS that is hosting Docker, and the Base OS that is the base image in the container.
This example is in cquinn/ticktock
Run the ticktock image.
Show ‘docker help run’, some interesting flags:
Use -it to run it in foreground. ctrl-c to get out.
Use -d to have it daemonize. Kill it later with ‘docker kill’
Use ‘docker ps’ to see it running.
Use ‘docker logs -f’ to watch it go.
Use ‘docker pause’, show logs, then ‘docker unpause’
Use ‘docker stop’ to stop it, ‘docker ps -a’ to see it stopped, then ‘docker rm’ to delete it.
Show ‘docker help run’, some interesting flags:
Use -p or -P to map ports.
Use ‘docker ps’ to see port mapping, 9090->8080
Curl port to see webhellogo output.
Use browser to see webhellogo page.
Pause and unpause, or stop and start container: Counter continues.
Stop and rm the container: Now notice that the counter state lived in the container and is lost.
Mount some storage:
Use -v to mount a host volume to save state
Stop, rm, start container
Use browser to see webhellogo page, counters still going!
Let’s build something like this, but with Python and Redis.
Use fig to build and run figgy: fig up
Uses 'docker run --link'
Talk about how the port mapping works
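The figgy setup above might be expressed in a fig.yml roughly like this (a sketch; the service names and ports are assumptions):

```yaml
web:
  build: .          # Dockerfile for the Python app
  ports:
    - "5000:5000"   # host:container port mapping
  links:
    - redis         # wires the redis container in, like 'docker run --link'
redis:
  image: redis
```

Running `fig up` would then build and start both containers, with the link giving the web container a way to address Redis.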
Docker containers are nice immutable deployable artifacts, and just need a few things plugged into them when deployed. These are the configuration, and a system for dynamic service discovery.
Fire up CoreOS cluster in AWS with the above cloud-init as user data
Use local docker cli to talk to multiple CoreOS hosts using -H in dall.sh
Use dall.sh to list images, ps containers, etc.
Start a container on the multiple hosts.
Use dall.sh to see the containers running.
Unit part tells systemd about how this service unit fits in.
Service part tells systemd how to start and stop the service.
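Putting the [Unit] and [Service] parts together, a minimal fleet unit for a Docker container might look like this (hypothetical names; the docker run flags are assumptions):

```ini
[Unit]
Description=TickTock demo container
After=docker.service
Requires=docker.service

[Service]
# '-' prefix: ignore failure if no old container exists
ExecStartPre=-/usr/bin/docker rm -f ticktock
ExecStart=/usr/bin/docker run --name ticktock cquinn/ticktock
ExecStop=/usr/bin/docker stop ticktock
```

fleet schedules the unit onto a cluster machine, where systemd uses the [Service] commands to manage the container's lifecycle.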
Show cluster machines: ./fc.sh list-machines
Show cluster units: ./fc.sh list-unit-files
Show cluster unit states: ./fc.sh list-units
fleetctl service control pairs:
submit / destroy
load / unload
start / stop
Mesos has a master / slave architecture.
Single master, but with standbys for HA.
Zookeeper is used for leader election, but also exposed to frameworks.
Schedulers can be added to the master.
Executors can be added to the slaves.
Bundled together, Schedulers and Executors are Frameworks.
Docker support is just one kind of executor