This document provides an overview of Concourse, a continuous integration and delivery tool. It discusses Concourse's three core concepts, resources, tasks, and jobs, which connect to define delivery pipelines. It also covers scaling complex pipelines through techniques like using an artifact store, custom caches, extending with additional resources, and implementing custom resources. The document includes an agenda, challenges with traditional CI systems that Concourse addresses, and a demo of Concourse in action.
3. Continuous Integration
● Integrate code in a shared repository several times a day
● Automate building and testing
● Run builds in a controlled environment
Continuous Delivery
● Deliver functionality to customers several times a day
● Short feedback cycles reduce risk, cost and time
● Reduce “software inventory”
Continuous Deployment
● Deploy several times a day
● Automate the deployment of software
● Infrastructure and Configuration as Code reduce risk
4. Challenges of “Traditional” CI Systems
● Builds use an ever-increasing amount of resources
○ git repos, docker images, credentials, …
● Cloud Native: Build Agents as Pets vs. Cattle
○ Repeatable builds?
○ Clear build step inputs and outputs?
● (Ab?)using CI to orchestrate deployment
○ Modelling “deployment destinations”
● Configuration as Code vs. manually built Pipelines
5. Concourse: Solutions
● Builds use an ever-increasing amount of resources
○ git repos, docker images, credentials, …
● Cloud Native: Build Agents as Pets vs. Cattle
○ Repeatable builds?
○ Clear build step inputs and outputs?
● (Ab?)using CI to orchestrate deployment
○ Modelling “deployment destinations”
● Configuration as Code vs. manually built Pipelines
Resources
Jobs and Pipelines
Tasks and Workers
6. The Three Core Concepts of Concourse
● Resources
○ any entity that can be checked for new versions
○ pull at a specific version
○ push to idempotently create new versions
● Tasks
○ execution of a script in a container
○ well-defined inputs and outputs via volume mounts
● Jobs
○ connect tasks and resources and can depend on other Jobs
○ Jobs define the shape of your delivery pipeline.
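The three concepts compose in a single pipeline config. A minimal sketch (the repository URL, image, and test script are placeholders):

```yaml
resources:
- name: source-code          # a git resource: checked for new commits
  type: git
  source:
    uri: https://github.com/example/app.git   # placeholder repo
    branch: master

jobs:
- name: unit-test
  plan:
  - get: source-code         # pull the resource at a specific version
    trigger: true            # run the job whenever a new version appears
  - task: run-tests          # execute a script in a container
    config:
      platform: linux
      image_resource:
        type: docker-image
        source: {repository: alpine}
      inputs:
      - name: source-code    # mounted as a volume into the container
      run:
        path: sh
        args: ["-c", "cd source-code && ./test.sh"]
```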
9. Scaling to complex Pipelines
Use an Artifacts Store:
- e.g. S3, Swift, HTTP
- FTP, rsync, …
Use custom Caches:
- e.g. for npm, yarn, mvn, bundler, …
- with custom versioning (e.g. last git ref that changed a yarn.lock file)
- https://www.meshcloud.io/en/2017/05/25/caching-directories-in-concourse-ci-pipelines/
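Concourse task configs also support a `caches` key that persists a directory between runs of the task on the same worker. A sketch of a task config using it for yarn (the image and paths are illustrative; for custom versioning across workers, the resource-based approach from the linked post applies):

```yaml
# task config (referenced from a job's plan)
platform: linux
image_resource:
  type: docker-image
  source: {repository: node}
inputs:
- name: source-code
caches:
- path: source-code/node_modules   # kept between runs on the same worker
run:
  path: sh
  args: ["-c", "cd source-code && yarn install && yarn test"]
```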
10. Scaling to complex Pipelines
Extend your Pipeline with Resources
There’s likely already a Resource for almost anything:
● SCM: git, hg, perforce, …
● Notifications: Slack, Twitter, Email, HipChat, …
● Deploy: Cloud Foundry, Kubernetes, BOSH
11. Scaling to complex Pipelines
Use Resources to update your Team about Build Status
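For example, a community Slack notification resource can report job outcomes via a job's `on_failure`/`on_success` hooks. A sketch (the resource image is the community `cfcommunity/slack-notification-resource`; the webhook URL is a placeholder credential):

```yaml
resource_types:
- name: slack-notification
  type: docker-image
  source:
    repository: cfcommunity/slack-notification-resource

resources:
- name: slack
  type: slack-notification
  source:
    url: ((slack-webhook-url))   # placeholder, injected from credentials

jobs:
- name: unit-test
  plan:
  - get: source-code
    trigger: true
  on_failure:
    put: slack                   # push a new "version": a posted message
    params:
      text: "unit-test failed!"
```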
15. Scaling to complex Pipelines
Implement custom Resources
● Sounds scary, but it’s easy. All you need is
○ a docker image…
○ which provides three executables that consume/return simple JSON
■ check: check if new versions of a resource are available
■ in: fetch a specific version of a resource
■ out: idempotently push a specific version
● A resource may implement any of these operations as a no-op
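A minimal sketch of the `check` executable (by convention the three scripts live at `/opt/resource/{check,in,out}` inside the image; the version scheme and refs here are hypothetical, and a real resource would query its upstream system instead of echoing a fixed list):

```shell
# Write the check script; in a real resource image this would be
# baked in at /opt/resource/check.
cat > /tmp/check <<'EOF'
#!/bin/sh
# Concourse pipes the resource's source config and the last known
# version to stdin as JSON, e.g.:
#   {"source": {...}, "version": {"ref": "abc123"}}
payload=$(cat)
# A real resource would inspect $payload and query upstream here.
# check must print a JSON array of versions since the given one,
# oldest first; this sketch returns a fixed list.
echo '[{"ref": "abc123"}, {"ref": "def456"}]'
EOF
chmod +x /tmp/check

# Simulate Concourse invoking the script:
echo '{"source": {}, "version": {"ref": "abc123"}}' | /tmp/check
```

`in` and `out` follow the same pattern: JSON in on stdin, JSON out on stdout, with fetched/pushed files exchanged via a directory passed as the first argument.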