Turn it into a solid stack of reusable components
Keep iterating until you end up with a mosaic of micro-services
The container:
A lightweight VM?
A chrooted process?
An application packaging technology?
Containers kick ass despite limitations
● Great for dev on a single node.
● Ideal for CI.
● It gets tricky in multi-node setups.
● A lot of hacking is required to roll back, scale, and monitor.
CoreOS: a lightweight Linux distro for clustered deployments that uses containers to manage your services at a higher level of abstraction, instead of installing packages via yum or apt.
etcd:
● A distributed key-value store that provides a reliable way to store data across a cluster of machines.
● Values can be watched, to trigger app reconfigurations when they change.
● Odd-sized clusters are preferred: they tolerate the most failures for their size.
● JSON/REST API.
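Why odd cluster sizes? A Raft-based store like etcd needs a majority of members (a quorum) to accept writes, and adding an even member raises the quorum without raising fault tolerance. A minimal sketch of the arithmetic (illustrative, not etcd code):

```python
# Illustrative sketch of consensus quorum math: a cluster of n members
# needs a majority to agree, so it survives the loss of n - quorum(n) nodes.

def quorum(members: int) -> int:
    """Smallest majority of a cluster with `members` nodes."""
    return members // 2 + 1

def fault_tolerance(members: int) -> int:
    """How many members can fail while the cluster keeps accepting writes."""
    return members - quorum(members)

for n in (1, 2, 3, 4, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failures")
# A 4-node cluster tolerates no more failures than a 3-node one,
# so the extra even member only adds risk, not resilience.
```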
flannel:
● An etcd-backed network fabric for containers.
● A virtual network that gives a subnet to each host for use with container runtimes.
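The subnet-per-host idea can be sketched with Python's `ipaddress` module: carve one lease per host out of a cluster-wide overlay network. The `10.1.0.0/16` overlay, the `/24` lease size, and the host names are invented for the example, not a claim about any particular deployment.

```python
# Sketch of flannel's core idea: each host leases its own subnet from a
# cluster-wide overlay network (in flannel the leases live in etcd).
# Overlay range, lease size and host names are made up for illustration.
import ipaddress

overlay = ipaddress.ip_network("10.1.0.0/16")
per_host = overlay.subnets(new_prefix=24)  # iterator over /24 subnets

hosts = ["core-1", "core-2", "core-3"]
leases = {host: next(per_host) for host in hosts}

# Each host now hands out addresses from its own /24 to local containers,
# so a container IP alone identifies the host it lives on.
for host, subnet in leases.items():
    print(host, subnet)
```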
fleet:
● An etcd-backed, distributed init system.
● Treat the CoreOS cluster as if it shared a single init system.
● Graceful updates of
CoreOS across the cluster.
● Handles machine failures.
rkt:
● A container runtime by CoreOS.
● An implementation of the App Container Spec.
● Native support for fetching and running Docker container images.
Kubernetes is an open source
orchestration system for containers.
Pods:
● A colocated group of containers with shared volumes, always executed on the same node.
● The smallest deployable units.
● Correspond to a colocated group of applications running with a shared context.
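What "colocated containers with shared volumes" looks like in practice can be shown with a hypothetical pod manifest, written here as a Python dict; the container, image and volume names are invented for the example:

```python
# A hypothetical pod manifest as a Python dict: two colocated containers
# sharing one volume. This only works because the pod is the unit of
# scheduling, so both containers always land on the same node.
# All names and images below are made up for illustration.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        "volumes": [{"name": "shared-logs", "emptyDir": {}}],
        "containers": [
            {"name": "uwsgi",
             "image": "example/uwsgi",
             "volumeMounts": [{"name": "shared-logs",
                               "mountPath": "/var/log/app"}]},
            {"name": "log-shipper",
             "image": "example/shipper",
             "volumeMounts": [{"name": "shared-logs",
                               "mountPath": "/logs"}]},
        ],
    },
}

# Both containers mount the same volume by name.
shared = {m["name"]
          for c in pod["spec"]["containers"]
          for m in c["volumeMounts"]}
print(shared)
```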
Replication controllers:
● Ensure that a specific number of pod replicas are running at any one time.
● Replace pods that are
deleted or terminated.
● Get rid of excess pods.
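The behavior in these bullets, replacing missing pods and trimming excess ones, is a reconciliation loop. A toy sketch (purely illustrative, not the real controller code):

```python
# Toy reconcile step in the spirit of a replication controller: compare
# the desired replica count with the pods actually running, then decide
# what to create or delete. Not the real Kubernetes implementation.

def reconcile(desired: int, running: list) -> dict:
    if len(running) < desired:
        # Some pods were deleted or terminated: replace them.
        return {"create": desired - len(running), "delete": []}
    # Too many (or exactly enough) pods: trim the excess.
    return {"create": 0, "delete": running[desired:]}

print(reconcile(3, ["pod-a"]))                             # two replicas missing
print(reconcile(3, ["pod-a", "pod-b", "pod-c", "pod-d"]))  # one excess pod
```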
Labels:
● Key-value pairs attached to pods and other resources.
● Specify identifying
properties of resources.
● Sets of objects can be
identified by label selectors
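Equality-based label selection is simple enough to sketch in a few lines; the pod names and labels below are invented for the example:

```python
# Minimal sketch of equality-based label selection: a selector matches a
# resource when every selector key/value pair appears in its labels.
# Pod names and labels are made up for illustration.

def matches(selector: dict, labels: dict) -> bool:
    return all(labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db",  "tier": "backend"}},
]

frontend = [p["name"] for p in pods
            if matches({"tier": "frontend"}, p["labels"])]
print(frontend)
```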
Services:
● An abstraction that uses a selector to map an incoming port to a set of pods.
● Needed to keep front-ends stable, since pods are mortal and each pod gets its own IP.
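Why this keeps front-ends stable: pods die and come back with new IPs, but the service's selector keeps re-resolving the current backend set, so clients only ever talk to the stable service address. A sketch (the pod IPs are made up):

```python
# Sketch of service endpoint resolution: the selector, not a fixed IP
# list, defines the backends. Pod IPs below are invented for the example.

def endpoints(selector: dict, pods: list) -> list:
    return [p["ip"] for p in pods
            if all(p["labels"].get(k) == v for k, v in selector.items())]

pods = [{"labels": {"app": "web"}, "ip": "10.2.0.4"},
        {"labels": {"app": "web"}, "ip": "10.2.1.7"}]
print(endpoints({"app": "web"}, pods))

# A pod dies and is rescheduled with a new IP; the mapping just follows,
# and nothing behind the stable service address needs reconfiguring.
pods[0] = {"labels": {"app": "web"}, "ip": "10.2.3.9"}
print(endpoints({"app": "web"}, pods))
```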
● The user declares the
target state e.g. “I need 5
uwsgi & 10 celery servers
active at all times”.
● Kubernetes will restart, replicate & reschedule containers to ensure that this target is met.
● By increasing or decreasing the replication factor of each pod, the respective services scale up or down.
● Auto-scaling of services depending on pod CPU utilization.
● New nodes can be added to
increase cluster capacity.
High availability of Kubernetes can
be achieved with CoreOS (e.g. fleet),
but not without some serious effort...
Used to be an issue; promised to be resolved in Kubernetes v1.1.1:
“included option to use native IP tables offering an 80% reduction in tail latency, an almost complete elimination of CPU overhead”
Stateful services and Kubernetes do not fit well together. There are some “exotic” ways to solve the problem, but they are either still in beta or under heavy development (e.g. Flocker).
Kubernetes is configured to work out of the box only for GCE and EC2. In any other case, manual configuration of load-balancers and external DNS services is to be expected.
Kubernetes on top of CoreOS is a completely new way of doing things... DevOps operation workflows have to be heavily adjusted to it…
You could end up building your own
tools around Kubernetes...
Goals for the development process:
● One click deployment!
● Replicate as much of the production setup as possible
● Everything pre-configured for the developer (e.g. add-ons)
Our experience so far...
Ended up building our own tools...
everything is ctl nowadays…
does anyone remember tail -f ???
“Works locally but not in prod”???
Not the case anymore... at least most of the time.
● Higher demands on the developer's machine.
● Allows us to get rid of distro-specific dependencies.
● Adds new dependencies (e.g. Vagrant).
● Local dev environment is very close to production.
CI workflow explanation:
1. Developer opens a PR against the staging branch on GitHub, which triggers a Jenkins job.
2. Jenkins sets up the env, runs the tests and posts the results back to the PR.
3. Reviewer merges to the staging branch after manual code review.
4. Jenkins builds pre-production containers and pushes them to the registry.
5. Jenkins triggers a deploy on pre-production.
6. Jenkins runs stress tests against pre-production.
7. Reviewer compares stress test results with those of previous runs.
● Locally using cAdvisor, Heapster, InfluxDB & Grafana.
● Externally using 3rd-party services.
● Enhance Mist.io to monitor
Kubernetes clusters and to
trigger actions based on rules.
● For the cluster services, through fleet: multiple instances.
● For our own services, especially the stateful ones (e.g. MongoDB).
● Deploy a Kubernetes cluster on another provider or region.
● Deploy our apps on the new cluster.
● Restore data from the latest backup or perform a live migration, depending on the type of disaster.