How do you keep shipping 50 times a day while you're constantly hiring more engineers? How can you continue when, every day, you write more tests that must run on every commit? This talk covers how to scale up Continuous Integration and Continuous Deployment infrastructure, for teams as small as a handful of engineers and as large as hundreds of engineers.
2. “Continuous deployment involves deploying early and often, so as to avoid the pitfalls of "deployment hell". The practice aims to reduce timely rework and thus reduce cost and development time.”
8. How do you make tests fast?
• Tests can exercise large amounts of code without being slow
• Minimize system calls (no I/O, no disk)
• Minimize test data size
• Make sure all systems are cheap to instantiate/tear down
• No external state makes tests more reliable
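The bullets above can be sketched as one concrete test. This is an illustrative example, not code from the talk: `FakeUserStore` is a hypothetical in-memory stand-in for a database-backed store, so the test touches no disk, no network, and no external state, and setup/teardown cost is microseconds.

```python
# Sketch: a unit test that exercises real logic with no I/O and no
# external state. FakeUserStore is an illustrative in-memory stand-in
# for a database table.

class FakeUserStore:
    """Cheap to instantiate and discard; holds state only in memory."""
    def __init__(self):
        self.users = {}

    def add(self, name):
        self.users[name] = {"name": name, "active": True}

    def deactivate(self, name):
        self.users[name]["active"] = False

def active_users(store):
    # The "production" logic under test: pure computation over the store.
    return sorted(n for n, u in store.users.items() if u["active"])

def test_active_users():
    store = FakeUserStore()   # no database spin-up, no fixtures on disk
    store.add("alice")
    store.add("bob")
    store.deactivate("bob")
    assert active_users(store) == ["alice"]

test_active_users()
```

Because nothing here leaves process memory, thousands of tests in this style run in seconds.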
9. Run Tests in Parallel
• Multiprocess
• Multimachine
• Multi-VM
• Instant multi-VM: http://circleci.com
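A minimal sketch of the fan-out/fan-in shape that all of these share (multiprocess, multimachine, multi-VM). Real runners such as pytest-xdist or Buildbot build steps do this for you; `run_test` here is a stand-in for invoking one real test, and `ThreadPoolExecutor` is used only for brevity — CPU-bound suites want processes or separate machines.

```python
# Sketch: shard a test suite across parallel workers and collect results.
from concurrent.futures import ThreadPoolExecutor

def run_test(name):
    # Stand-in for actually running one test; returns (name, passed).
    return (name, True)

def run_suite(tests, workers=4):
    # Fan out across workers, fan results back into one dict.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_test, tests))

results = run_suite([f"test_{i}" for i in range(8)])
assert all(results.values())
```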
10. Hardware Scale
• CI Cluster will get huge
– Function of cumulative engineering man-months
– Rule of thumb: 10% of your cluster size
• You will need a CI/CD DevOps person
– CI cluster monitoring / alerting
– Configuration Management critical
11. Scale testing infrastructure recap
• Write the right kind of tests
• Make those tests as fast as possible
• Run those tests in parallel
12. People / Roles
• Sheriff
– Designated reverter / problem troubleshooter
– Common pattern (IMVU, Chromium, Firefox)
• CD “Product Owner”
– Held accountable for SLA / Performance
– Manage infrastructure backlog
13. Single trunk
• Do this until it doesn’t work for you
• Gets painful in the 16 – 32 developer range
• Faster commit->deploy reduces the pain
– But eventually the effort becomes prohibitive
14. “Try” pipeline
• Conceptually, a second tree that “doesn’t matter” but still gets tested for feedback
• Buildbot implements a patch-pushing version
• Takes a significant amount of pressure off of trunk builds
15. CI Server takes active role
• Server automatically reverts red commits
• Server merges green commits to trunk
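A sketch of that server-side loop. The git subcommands are real; everything else (the shape of the results feed, the assumption that a red commit sits on a pending branch) is illustrative — a real implementation would check out trunk and handle merge conflicts before acting.

```python
# Sketch: the CI server promotes green commits and reverts red ones.
import subprocess

def run_git(*args):
    subprocess.run(["git", *args], check=True)

def plan_actions(results):
    """results: list of (sha, passed) in commit order.
    Returns the git commands the server should run, in order."""
    actions = []
    for sha, passed in results:
        if passed:
            # Green: promote the pending commit onto trunk.
            actions.append(("merge", "--ff-only", sha))
        else:
            # Red: revert immediately so trunk stays deployable.
            actions.append(("revert", "--no-edit", sha))
    return actions

def process_build_results(results, run=run_git):
    for action in plan_actions(results):
        run(*action)
```

Separating the plan from the execution (`plan_actions` vs `run`) keeps the revert/merge policy trivially testable without a real repository.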
16. Feature branches
• All incremental development happens on branches; branches land when the feature is “ready”
• If “feature” is kept small, can be 2–3 per engineer per week on average
• Less continuous, but scales much better
– Feature branches tested before merge
17. Merge tree
• Tree per team / feature
• Trees merged into trunk daily (if green)
• Scale up via tree of trees (of trees…)
• Again, less continuous
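The merge-tree policy can be sketched as a small recursion; the tree shape and field names below are illustrative assumptions, not from the talk. Each team tree merges up into its parent only on days it is green, and the scheme nests to any depth (trees of trees of trees).

```python
# Sketch: which merges happen today in a merge tree.
def daily_merges(tree, parent=None):
    """tree: {"name": str, "green": bool, "children": [subtree, ...]}
    Returns (child, parent) merge pairs performed today, bottom-up.
    A tree merges into its parent only when its own build is green."""
    merges = []
    for child in tree.get("children", []):
        merges += daily_merges(child, tree["name"])
    if parent is not None and tree["green"]:
        merges.append((tree["name"], parent))
    return merges

example = {"name": "trunk", "green": True, "children": [
    {"name": "team-a", "green": True, "children": []},
    {"name": "team-b", "green": False, "children": [
        {"name": "feature-x", "green": True, "children": []},
    ]},
]}
# team-a merges to trunk; feature-x merges into team-b; team-b is red,
# so it skips today and trunk never sees its breakage.
assert daily_merges(example) == [("team-a", "trunk"),
                                 ("feature-x", "team-b")]
```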
18. Federation
• Each team gets their own deploy pipeline
• Requires SOA / component architecture
• Each team can set their own CD pace
• “Enterprise Ready”
19. Recap
• Single trunk + Try pipeline / Autorevert
• Feature Branches
• Merge Tree
• Federation
About me: IMVU, Canvas, Continuous Deployment. Ground rules: I don’t demand your attention; please tweet / follow links while I’m talking. If you have questions, shoot up a hand. If I don’t see you, yell at me.
Continuous Deployment vs Continuous Delivery (next slide)
Commit to deploy:
< 5 minutes: stay in flow
5–15 minutes: can keep working on the feature
> 15 minutes: failures are surprises and require expensive rewinding
Local dev loop:
< 2s: stay in tight flow
2s–10s: tab away from terminal, looser flow
10s–1min: start thinking or coding on the next thing; failures require rewinding
1min–5min: significant rewinding, high distraction, painful testing (rolling chair jousting)
Example: all metrics green, everything looks great, but you got to those metrics by shaming anyone who breaks the build. A culture of tip-toeing through the build system leads to reduced happiness (and reduced throughput!). Sidenote: measure throughput! Deploys per engineer (avg/median/extremes): is it scaling up with the org, or are you getting fewer deploys?
GUI tests, e.g. Selenium
Integration tests, e.g. uses the database
Unit tests: small, fast tests with no external state
e.g. don’t reload your country-code or ipgeo tables, even into memory from disk, on every test
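One way to follow that advice is to load expensive, read-only reference data once per test process. This is an illustrative sketch (the loader and its contents are made up): `functools.lru_cache` guarantees the parse happens at most once, and every later test hits the cached table.

```python
# Sketch: load a big read-only reference table once per process,
# not once per test. The body of country_codes() stands in for
# parsing a large file from disk.
from functools import lru_cache

@lru_cache(maxsize=1)
def country_codes():
    # Pretend this is an expensive parse; it runs at most once.
    return {"US": "United States", "DE": "Germany"}

def test_lookup_us():
    assert country_codes()["US"] == "United States"

def test_lookup_de():
    # Second test reuses the cached table: no re-parse, no disk access.
    assert country_codes()["DE"] == "Germany"

test_lookup_us()
test_lookup_de()
assert country_codes.cache_info().misses == 1  # loaded exactly once
```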
Highly recommend Buildbot when you hit scale. It’s proven at huge scale (500+ nodes) and growing, and allows way better pipeline customization at that scale
Test time is O(cumulative man-months): doubling your staff means you must keep halving test time, in spite of the ever-marching-on increase in tests.
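That relationship turns into a back-of-the-envelope cluster-sizing formula. Every constant below is an illustrative assumption (tests written per man-month, seconds per test, target wall-clock time); plug in your own measurements.

```python
# Sketch: CI cluster sizing from cumulative engineering effort.
import math

def nodes_needed(cumulative_man_months, tests_per_man_month=50,
                 seconds_per_test=0.5, target_wall_clock_s=300):
    """Test count grows with cumulative man-months; to hold commit-to-green
    time constant, parallelism must grow with it too."""
    total_tests = cumulative_man_months * tests_per_man_month
    total_seconds = total_tests * seconds_per_test
    # Each node runs its shard serially, so divide and round up.
    return math.ceil(total_seconds / target_wall_clock_s)

# 20 engineers for 2 years = 480 man-months -> 24,000 tests,
# 12,000s of serial test time, 40 nodes for a 5-minute wall clock.
assert nodes_needed(480) == 40
```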
GitHub model. Most FOSS is moving to, or already using, this model.
Lots of people/process overhead. The classical way of scaling up to extremely large teams.
Fortune 500-proof.
Remember: don’t overinvest up front. Better to do something simple until it doesn’t work than to overbuild. Don’t underbuild either. Use