Adopting new development approaches such as containerization is a big change for traditional enterprise environments. Ancestry, the global leader in family history and consumer genomics, was a big data company long before the term existed, with billions of historical records and millions of family trees, much of which ran in a traditional IT environment. With a new flood of genomic data from its AncestryDNA test and a desire to keep increasing the speed of innovation, Ancestry adopted containerization and microservices using Kubernetes orchestration APIs. This session will describe Ancestry's journey to containerization and how a coherent, consistent API set such as Kubernetes can aid companies looking to make a similar transition. Paul MacKay, one of Ancestry's Software Architects, will discuss what the company has learned during the past few years of development, from both a technical and a cultural-change perspective.
2017 Microservices Practitioner Virtual Summit: Ancestry's Journey towards Microservices, Containerization, and Kubernetes - Paul MacKay, Ancestry
4. We’re a science and technology company with a very human mission.
5. Data drives our business
• 20 billion historical records
• 90 million family trees
• 10 billion profiles
• 175 million shareable photos, documents and written stories
• 9 petabytes of data
• 4 million members in the AncestryDNA® network
• 37 million 3rd cousin or closer matches
6. Technologies
Microsoft Windows®
C#/.NET®
SQL Server®, IIS, MSMQ, TFS, etc.
Java, Node.js, Python running on Linux
Private data center
1,000s of servers/VMs running 100s of services
REST-based services ranging from macro to micro in size
8. How Our Journey Started to Change
Began experimenting with Docker.
Docker Compose
Created a “Docker agent” for remote deployment.
Demonstrated how easy it is to deploy and scale up services.
Deployment times drastically reduced from current methods.
Easier to deploy services of any size (macro to micro)
Showed greater density using current computing resources.
Created and deployed our own microservices using Docker.
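Early Docker Compose experiments like those above can be captured in a short compose file; this is a minimal sketch, with hypothetical service and image names rather than Ancestry's actual configuration:

```yaml
# docker-compose.yml -- minimal sketch of a small multi-container setup.
# Service and image names are illustrative only.
version: "3"
services:
  hints-api:                       # a small REST service
    image: example/hints-api:1.0
    ports:
      - "8080:8080"                # host:container port mapping
    environment:
      - CACHE_HOST=cache           # reach the cache by its service name
    depends_on:
      - cache
  cache:
    image: redis:6-alpine          # small, container-friendly base image
```

Running `docker-compose up` then starts both containers with a shared network, which is the kind of one-command deployment that made the early experiments compelling.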
10. Adopting New Technologies is Hard
Developers are comfortable with how things are done today.
We think it is faster doing it the “old” way.
It is hard to see the advantages of changing to something new.
Change has real cost.
Change takes time away from developing new features.
Change is disruptive to schedules.
11. Early Discoveries
Many opinions about the appropriate size of a service.
Standard Linux distros are just too big.
Not built specifically for Docker.
Too large a footprint.
Too many packages to keep updated.
Docker is best supported on newer Linux kernels.
Need to train Windows developers in Linux concepts/tools.
The size of a service cannot be dictated.
Container orchestration is hard to do right.
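The footprint problem above is commonly addressed with slim base images and multi-stage builds; a sketch under assumed names (the service "record-search" and the Go toolchain are illustrative, not from the talk):

```dockerfile
# Multi-stage build: compile in a full-featured image,
# ship a minimal one. "record-search" is a hypothetical service.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /record-search .

# The final image carries only the static binary plus a tiny
# base -- far fewer packages to keep patched than a normal distro.
FROM alpine:3.19
COPY --from=build /record-search /record-search
ENTRYPOINT ["/record-search"]
```

The resulting image is tens of megabytes instead of hundreds, which directly addresses the "too many packages to keep updated" discovery.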
12. Adopting New Technologies or Paradigms
Understand current technologies, processes and paradigms.
Need a “patron”.
Own something “to be real”.
Create a partnership with pilot teams and be agile.
13. Determining the Size of a Service
Be pragmatic; do not break up a service just to break up a service.
Remember the cost of managing many services.
Network latencies
Many things to worry about (e.g. monitoring, coordinated deployments, scaling)
Ask, “will this really be used independently by other services?”
Does it make sense for the service to exist by itself?
Be pragmatic, not dogmatic.
14. Linux Built for Containers
Running containers is a first-class capability
Updates are holistic
Can be automatically pushed to machines.
Can easily revert to the previous version.
Less is more
Fewer packages means fewer vulnerabilities
Infrequent need for direct access to the machines
15. Kubernetes to the Rescue
Created a small “sandbox” cluster.
Gathered “committed” pilot teams.
Daily standups
Address problems/concerns early
Provided Docker and Kubernetes training
Developed templates and scripts
17. Conventions/Standards
Developed deployment standards
Namespace for each service
Naming conventions (functionalGroup-serviceName)
One container per pod
Start with wide privileges and narrow as needed
Allow deployment all the way to production
Secrets are controlled by operations/security
Separate clusters for each environment (dev, stage, prod)
Use intra-cluster DNS for microservices to reduce network latencies
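Conventions like the ones above map directly onto Kubernetes resources; here is a hedged sketch (the "search-hints" name, registry, and ports are assumptions for illustration, not Ancestry's):

```yaml
# One namespace per service, functionalGroup-serviceName naming,
# one container per pod. All names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: search-hints            # functionalGroup-serviceName
  namespace: search-hints       # namespace dedicated to this service
spec:
  replicas: 3
  selector:
    matchLabels: { app: search-hints }
  template:
    metadata:
      labels: { app: search-hints }
    spec:
      containers:               # exactly one container in this pod
      - name: search-hints
        image: registry.example.com/search/hints:1.4.2
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: search-hints
  namespace: search-hints
spec:
  selector: { app: search-hints }
  ports:
  - port: 80
    targetPort: 8080
# Other services in the same cluster reach it via intra-cluster DNS:
#   http://search-hints.search-hints.svc.cluster.local
```

Keeping calls on the cluster DNS name avoids a round trip through an external load balancer, which is the latency reduction the convention is after.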
19. Quick Start Tools
Created tool to help teams quickly deploy
Works across all cluster environments
Provide “best practices” and conventions
Transparent – can generate standard resource files
Created scripts to insert secrets into namespaces
Labels are used to version secrets
Cluster backup/restore scripts
Scripts to easily create clusters in various environments
Allow easy deployment of any size of service
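Versioning secrets with labels, as described above, might look like the following manifest; the secret name, namespace, and label scheme are assumptions for illustration:

```yaml
# A namespaced secret whose version is tracked with a label,
# so scripts can roll forward/back by label. Names are illustrative.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials-v2
  namespace: search-hints
  labels:
    version: v2                 # label used to version the secret
type: Opaque
stringData:
  username: svc_user
  password: example-only        # placeholder -- never commit real secrets
```

Because operations/security control the scripts that apply these, developers can deploy services without ever handling the secret material directly.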
20. Our Journey So Far
Several clusters
Private data center and in the cloud
Hundreds of namespaces and services
Hundreds of pods
Macro to micro size services
Live production traffic
e.g. “We’re Related App”
Made up of 14 microservices
Easiest deployment path for developers
21. The Power of Kubernetes
Programmers have REPL (Read-Eval-Print-Loop)
Kubernetes now gives us CDEL (Compile-Deploy-Execute-Loop)