2. What is continuous delivery?
• Software development practice
• Code changes are automatically
  • built
  • tested
  • and prepared for a release to production
• Extends continuous integration
• Developers approve the update to production
• Different from continuous deployment
• Beyond just unit tests
8. Docker and Docker Toolbox
• Docker (Linux kernel > 3.10)
• Docker Toolbox (for OS X and Windows)
• Define app environment with Dockerfile
9. Dockerfile
FROM ruby:2.2.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev
RUN mkdir -p /opt/web
WORKDIR /tmp
ADD Gemfile /tmp/
ADD Gemfile.lock /tmp/
RUN bundle install
ADD . /opt/web
WORKDIR /opt/web
10. Docker Compose
Define and run multi-container applications:
1. Define app environment with Dockerfile
2. Define services that make up your app in docker-compose.yml
3. Run docker-compose up to start and run entire app
20. Running tests inside a container
Usual Docker commands are available within your test environment
Run the container with the commands necessary to execute your tests, e.g.:
docker run web bundle exec rake test
21. Running tests against a container
Start a container running in detached mode with an exposed port serving your app
Run browser tests or other black-box tests against the container, e.g. headless browser tests
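The black-box pattern above can be sketched as a shell script. Since a real app container isn't available here, a stand-in HTTP server plays the role of the detached container; in the real workflow the first step would be something like `docker run -d -p 3000:3000 web`:

```shell
# Stand-in for the detached app container (hypothetical substitute:
# in the real workflow this would be `docker run -d -p 3000:3000 web`)
python3 -m http.server 8000 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# Black-box check against the exposed endpoint, as a headless browser
# test would do: request the root path and capture the HTTP status code
STATUS=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8000/)
echo "$STATUS"

# Tear the service down once the tests are done
kill "$SERVER_PID"
```

The key point is that the tests only talk to the exposed port; they neither know nor care what runs inside the container.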
24. Amazon EC2 Container Service
• Highly scalable container management service
• Easily manage clusters for any scale
• Flexible container placement
• Integrated with other AWS services
• Extensible
• Amazon ECS concepts
• Cluster and container instances
• Task definition and task
25. AWS Elastic Beanstalk
• Deploy and manage applications without worrying about
the infrastructure
• AWS Elastic Beanstalk manages your database, Elastic
Load Balancing (ELB), Amazon ECS cluster, monitoring
and logging
• Docker support
• Single container (on Amazon EC2)
• Multi container (on Amazon ECS)
26. Amazon ECS CLI
• Easily create Amazon ECS clusters & supporting
resources such as EC2 instances
• Run Docker Compose configuration files on Amazon
ECS
• Available today – http://amzn.to/1jBf45a
27. Configuring the ECS CLI
# Configure the CLI using environment variables
> export AWS_ACCESS_KEY_ID=<my_access_key>
> export AWS_SECRET_ACCESS_KEY=<my_secret_key>
> ecs-cli configure --region us-east-1 --access-key $AWS_ACCESS_KEY_ID \
    --secret-key $AWS_SECRET_ACCESS_KEY --cluster ecs-cli-demo
# Configure the CLI using an existing AWS CLI profile
> ecs-cli configure --region us-west-2 --profile ecs-profile --cluster ecs-cli-demo
28. Deploy and scale Compose app with ECS CLI
# Deploy a Compose app as a Task or as a Service
> ecs-cli compose up
> ecs-cli compose ps
> ecs-cli compose service create
> ecs-cli compose service start
# Scale a Compose app deployed as a Task or as a Service
> ecs-cli compose scale n
> ecs-cli compose service scale n
30. Continuous delivery to ECS with Jenkins
1. Code push triggers build
2. Build image from sources
3. Run test on image
4. Push image to Docker registry
5. Update Service
6. Pull image
31. Continuous delivery to ECS with Jenkins
Easy Deployment
Developers – Merge into master, done!
Jenkins Build Steps
Trigger via Webhooks, Monitoring, Lambda
Build Docker image via Build and Publish plugin
Push Docker image into Registry
Register Updated Job with ECS API
32. Continuous delivery to ECS with CodePipeline
1. Code push triggers pipeline
2. Lambda function creates EC2 instance
3. Image is built and pushed to ECR
4. Lambda function terminates EC2 instance
5. Lambda function deploys new task revision to ECS
33. Continuous delivery to ECS with CodePipeline
• Lambda custom actions
• Create and terminate EC2 instance
• Update ECS service
• EC2 instance uses user data to build an image and push
it to Amazon ECR
Continuous delivery is a software development practice where code changes are automatically built, tested, and prepared for a release to production.
It expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage.
When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has passed through a standardized test process.
With continuous delivery, every code change is built, tested, and then pushed to a non-production testing or staging environment.
There can be multiple, parallel test stages before a production deployment. In the last step, the developer approves the update to production when they are ready. This is different from continuous deployment, where the push to production happens automatically without explicit approval.
Continuous delivery lets developers automate testing beyond just unit tests so they can verify application updates across multiple dimensions before deploying to customers. These tests may include UI testing, load testing, integration testing, API reliability testing, etc. This helps developers more thoroughly validate updates and pre-emptively discover issues. With the cloud, it is easy and cost-effective to automate the creation and replication of multiple environments for testing, which was previously difficult to do on-premises.
I believe you are all familiar with the benefits of using containers, but here’s a quick refresher.
Containers are similar to hardware virtualization (like EC2); however, instead of partitioning a machine, containers isolate the processes running on a single operating system.
Containers are portable, a container image is consistent and immutable -- no matter where I run it, or when I start it, it’s the same.
Containers start quickly because the operating system is already running, and they also improve the speed of the dev process.
Finally, containers are efficient. You can allocate exactly the resources you want: specific CPU, RAM, disk, and network. Since containers share the same OS kernel and libraries, they use fewer resources than running the same processes on different virtual machines (a different way to get isolation).
That’s great, but how can containers actually help for CD?
Continuous delivery is all about reducing risk and delivering value faster by producing reliable software in short iterations.
That means that your software is deployable throughout its lifecycle,
it means that you can get fast and automated feedback on the production readiness of your software whenever you make changes, and it means that you can perform push-button deployments of any version of the software to any environment.
Containers reduce the risk of introducing errors as they provide a consistent and predictable environment throughout the software lifecycle and given they are lightweight they can increase speed and agility.
This is what the dev/deployment workflow would typically look like:
Devs write code on their machines and push changes to a code repository.
The push triggers a build, and artifacts are built.
Tests are run; if all green…
the new version is deployed to prod.
An orchestration tool is the brain: it knows how to move the code/build from one stage to the next.
…
We’ll now dive deep into each stage and explore where and how containers can be used.
The first step of a development process is the source code.
This would be your local development machine.
You write some code, test it locally, make some more changes.
Once you’re happy with your changes you will push them to a code repository.
This can be a distributed system, so multiple devs on the same team can work on the same project.
What tools do we need to achieve this?
When we talk about containers, we refer more and more often to Docker containers. Docker is available for different Linux distros with a recent kernel, and on Mac and Windows through Docker Toolbox.
With Docker we can define the environment our application will be executed in and specify any additional dependency using a Dockerfile.
In this example, we start from a Ruby base image
and install some additional packages using the OS package manager.
We then specify our app specific dependencies using a Gemfile
and finally we copy our source code.
This Dockerfile can now be used to build an image we can use to run our containers,
and we can use the same image throughout the different lifecycle stages.
One of the interesting things about Docker, it’s its growing tools ecosystem.
One of them, Docker Compose, allows you to run complex applications that can include different components.
You simply have to define each component's environment with a Dockerfile, specify how the components make up your application in a docker-compose yaml file,
and finally, with a simple command, docker-compose up, you'll be able to run all the services included in your app.
Here we have a sample docker-compose yaml file with two services: a proxy and a web app.
The proxy service is built from the Dockerfile in the proxy directory; it exposes port 80 on the container to port 80 on the host, and it's linked to the web service
(this allows us to refer to the web service container as 'web' from the proxy service container).
The web container is also built from a Dockerfile, in the web directory; it's a Rails app, so we specify the command we want executed, and it exposes port 3000 to any linked services, not to the host machine.
Now that we made some changes to our code, let’s have a look at the setup we have to build the new artifacts.
At this stage, containers will be used in two ways….
…to provide an execution environment for the build jobs
and as an output of the build process itself.
We’ll see how we can run our builds on an ECS cluster,
but also how to produce container images that can then be used throughout the rest of the workflow.
As we start building more apps, the time taken to execute the builds will become large enough that we'll want to distribute their execution across many machines.
ECS can help to distribute build jobs across a cluster.
For example, if you’re using Jenkins, with the Cloudbees Jenkins ECS plugin, you’re able to run your build jobs on an Amazon ECS cluster.
This plugin will simply connect to your ECS cluster, create a new task definition for your job, start a new task, and tear everything down when the job completes.
Containers are also the output of the build stage.
The latest code changes are packaged in a container image and pushed to a repository.
For example, if you’re using Jenkins, the CloudBees Docker Build and Publish plugin is what you could use to build your container images and push them to a Docker Registry.
You simply have to specify the repository name you want to push the image to, a tag for it – in this case we tag it using the Jenkins build number –
and the registry we want to use. In this example we are using…
… Amazon EC2 Container Registry.
Amazon ECR is a fully-managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon EC2 Container Service (ECS), simplifying your development to production workflow. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications. Integration with AWS Identity and Access Management (IAM) provides resource-level control of each repository.
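Pushing to ECR means tagging the image with the full repository URI, which embeds the account ID and region. A minimal sketch of building that tag from the Jenkins build number (the account ID, region, and repo name below are placeholder values):

```shell
# Hypothetical values: substitute your own account ID, region, and repo name
AWS_ACCOUNT_ID=123456789012
AWS_REGION=us-east-1
REPO=web
BUILD_NUMBER=42   # e.g. the Jenkins build number

# ECR repository URIs follow <account>.dkr.ecr.<region>.amazonaws.com/<repo>
IMAGE_TAG="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${REPO}:build-${BUILD_NUMBER}"
echo "${IMAGE_TAG}"

# The build step would then tag and push the image with:
#   docker tag web "${IMAGE_TAG}" && docker push "${IMAGE_TAG}"
```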
Once our build is complete, we are ready to run some tests
You can run tests inside of or against a Docker container.
If you have a lot of unit tests that take a long time to execute, you may want to run them outside of the container and only run certain integration tests against the built Docker image.
We’ll get back to our demo in a short while.
Now that we have the build and test stages covered, all that is left to do is actually deploy our new version to our production environment. On AWS, we have different options to run Docker containers.
The first one I want to mention is Amazon EC2 Container Service. Amazon ECS is a scalable container management service: whether you want to run tens or thousands of containers, Amazon ECS will seamlessly scale and provide consistent performance. ECS provides a set of schedulers that can be used to place containers on the cluster, but it also exposes the cluster state through a set of APIs that allow you to create your own scheduler. ECS is also highly integrated with other AWS services, e.g. ELB and CloudWatch. Just a quick reminder of some core ECS concepts: a cluster is a set of resources; container instances are EC2 instances running the ECS agent; a task definition defines which containers run and what resources they use; and a task is an instance of a task definition.
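To make those concepts concrete, a minimal task definition for a web app might look like the following sketch; the family name, image URI, ports, and resource values are illustrative assumptions, not taken from the talk:

```json
{
  "family": "web",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:build-42",
      "cpu": 256,
      "memory": 512,
      "portMappings": [
        { "containerPort": 3000, "hostPort": 3000 }
      ],
      "essential": true
    }
  ]
}
```

A task launched from this definition is one running copy of this container set; the service scheduler keeps the desired number of such tasks running on the cluster.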
An easy way to deploy Docker containers within a pipeline is using AWS Elastic Beanstalk. Beanstalk supports single-container deployment directly on an EC2 instance and multi-container deployment on ECS. The benefit of Beanstalk is that it can manage your resources (your DB, ELB, and ECS cluster) and it also provides monitoring and logging for your app. It's also easy to set up multiple environments within one application, so you can have an integration stack that is similar to your production stack.
Elastic Beanstalk is ideal if you want to leverage the benefits of containers but just want the simplicity of deploying applications from development to production by uploading a container image. You can work with Amazon ECS directly if you want more fine-grained control for custom application architectures.
If you are already using Compose, you'll be glad to hear that we have a tool that allows you to run your application both locally and on an ECS cluster using the same docker-compose yaml file: the Amazon ECS CLI. With the ECS CLI you can run the same Docker Compose commands (up, start, stop, and ps) both in your local environment and on Amazon ECS. The Amazon ECS CLI is available today for you to download, and it's open source, so we'd love to see you get involved.
The Continuous Deployment reference architecture (diagram) shows how to use AWS CodePipeline and custom actions with AWS Lambda to create a flexible and scalable deployment pipeline to Amazon EC2 Container Service (Amazon ECS). The deployment pipeline is composed of five stages:
* Source. In this first stage, the latest version of your code is fetched from a repository. This stage has a single action with one output artifact, MyApp.
* LaunchInstance. In this stage, an Amazon EC2 instance is launched. This stage is composed of two actions: the first action, LaunchBuildInstance, uses a Lambda function to launch an EC2 instance and outputs the instance ID. The second action, LaunchNotify, sends a notification to an Amazon SQS queue after the instance is launched.
* BuildAndPush. In this stage, the previously launched EC2 instance is used to build a Docker image and push it to a repository on Amazon EC2 Container Registry (Amazon ECR). This stage is composed of two actions: the first action, BuildAndPush, waits for the EC2 instance user data script to complete. The second action, NotifyBuild, sends a notification to an SQS queue after the image is built and pushed to the ECR repository.
* TerminateInstance. After the Docker image has been built and pushed, the EC2 instance is terminated to avoid further charges. This stage is composed of two actions: the first action, TerminateInstance, uses a Lambda function to terminate the previously launched EC2 instance. The second action, TerminateNotify, sends a notification to an SQS queue once the instance is terminated.
* Deploy. In the last stage of the pipeline, the Docker image is used to roll out an update to an Amazon ECS service. This stage has a single action that uses a Lambda function to update an ECS service with the new container image.
Some of our partners have created integrated CD solutions with Amazon ECS
Shippable is a hosted cloud platform that provides continuous integration, deployment, and testing for GitHub and Bitbucket repositories. You can create and run Amazon ECS services and tasks on your ECS clusters from within the Shippable Formations module. You can also pull and push Docker images from Amazon ECR as part of your Shippable CI builds and deploy them with Shippable Formations across multiple clusters in Amazon ECS, without ever having to manually update a Task Definition yourself. Shippable automatically updates your ECS task definitions with the latest image information based on your CI builds and either deploys automatically or with a single click when you're ready.
Thank you very much for joining us, hopefully you enjoyed this session and you’ll now go home and start enhancing your CD workflow with containers.