Containers are an increasingly important way for developers to package and deploy their applications, and AWS offers multiple container products to help you deploy, manage, and scale containers in production. In this session we dive deep into Amazon Elastic Container Service for Kubernetes (Amazon EKS), a new managed service for running Kubernetes on AWS. Learn how Amazon EKS works, from provisioning nodes and launching pods to integrating with AWS services such as Elastic Load Balancing and Auto Scaling.
Learn more about containers here: https://aws.amazon.com/containers/
We’ve learned that customers love containers. Why?
Packaging – simple to think about, easy to model applications at the component level, and eases the journey to running microservices or 12-factor apps
Distribution – the container image, which encapsulates everything you need to run your application, is small and lightweight and can be run on nearly any machine in a repeatable way
Immutable Infrastructure – with the packaging and distribution comes a simple way to run immutable infrastructure where you can scale up or down based on requirements
Again, our mission, as mentioned at the outset, is to make AWS the best place to run ANY containerized application. Applications, not infrastructure!
The first thing we need to do is introduce Kubernetes. Kubernetes has been around for a few years now, but it has absolutely taken the world by storm, especially over the past 12 months. It’s also rapidly gained traction amongst AWS customers.
So, strip away the hype, and at its core, Kubernetes is an open-source container management platform. It’s built to help you run your containers at scale and comes equipped with features and functions to build proper distributed applications using the 12-factor app pattern.
If you’re not a developer yourself and you know about Kubernetes already, chances are you learned about Kubernetes from a developer. This is generally not software someone sells you, but rather it’s something your developers pick up because it helps them solve problems. So, what is it that’s interesting here?
Top 5 on GitHub
The repository has 35k stars, almost 65k commits, and 1,600 contributors.
Kubernetes can be run anywhere, on premises or in the cloud. Many customers that use Kubernetes today like it precisely for that reason. They can make investments on premises now, moving legacy applications into containers and building new apps in a cloud-native way, run all of this in Kubernetes, and then move these applications to the cloud when they are ready; or they can run the same orchestration framework across multiple environments.
And last but not least, the Kubernetes API can be thought of as a single extensible API that can be used to abstract resources both within AWS and on premises. When using Kubernetes on AWS, you can take advantage of the scale, performance, and breadth of features of the AWS platform via Kubernetes cloud integrations, and use the same familiar Kubernetes API when deploying containers on premises.
At the end of the day, though, all of the functionality packaged together here forms the building blocks for microservices. Kubernetes was designed to allow you to build cloud-native applications.
And the quality of the underlying cloud platform – the speed, stability, scalability, and the integrations with the platform – impacts the quality of the applications you build, how much work you have to do yourself, and, ultimately, how happy your customers are. Your customers perceive the performance of your application: how quickly new features are introduced, and whether your app is down when they need it most.
Customers generally run 3 Kubernetes masters across three availability zones to provide a highly available Kubernetes control plane. Each Kubernetes master runs a copy of the same components.
Some customers run single-AZ control planes as well.
The Kubernetes masters run several components: the API server, which is fairly self-explanatory; the controller manager, which runs various system processes for the cluster; and the scheduler, which assigns work to nodes. These are the components that allow you to interact with the Kubernetes system.
This is also where add-ons like KubeDNS and the dashboard can run.
In addition to the Kubernetes masters, you also need to run etcd, the core persistence layer for Kubernetes. Etcd is a distributed key-value store; this is where the critical data for the cluster is stored. You can optionally co-locate the masters and etcd on the same instances, so you only need to run three instances instead of six to support the control plane. This involves tradeoffs in operational burden when upgrading your cluster, though. It is one of the many complexities you will encounter when standing up your own Kubernetes infrastructure.
You then need to run the actual worker nodes; this is where your applications run. Worker nodes are generally deployed in Auto Scaling groups.
Our customers told us, “Hey, running Kubernetes isn’t trivial work, and we think we can better spend our cycles focusing on our applications. If we had things our way, we wouldn’t have to think about the nuances of Kubernetes deployments or configuration, and we wouldn’t have to worry about managing etcd or the masters.”
They also want the freedom to choose top-notch AWS integrations,
but to continue using the open source tooling they’re using today.
We listened, and that’s why we’ve built Amazon Elastic Container Service for Kubernetes, or EKS.
We know how important a well-functioning service is to our customers. So we didn't build Amazon EKS haphazardly. There are a core set of tenets that we followed which guided our decision-making for how Amazon EKS should work.
Let’s talk about the tenets that anchor our design decisions for EKS. Tenet 1: EKS is a platform for enterprises to run production-grade workloads. EKS aims to provide features and management capabilities that allow enterprises to run real workloads at real scale. Reliability, visibility, scalability, and ease of management are our priorities.
One of the areas where we are putting in a lot of effort is availability. By default, EKS is multi-master: we run masters across multiple Availability Zones and we manage your persistence layer for you.
Tenet 2: EKS provides a native and upstream Kubernetes experience. Any modifications or improvements that we make in our service must be transparent to the Kubernetes end user.
This means that your existing Kubernetes experience and know-how applies directly to EKS. Your existing applications and investments in Kubernetes work right out of the box with EKS.
Tenet 3: EKS customers are not forced to use additional AWS services, but if they want to, the integrations are seamless and eliminate undifferentiated heavy lifting.
We are focused on making contributions to projects that allow customers to use the AWS components they currently know and love with their applications in Kubernetes.
The other thing our customers care about is integration into the rest of AWS and this is another area where we plan to focus and contribute upstream.
Tenet 4: The EKS team actively contributes to the Kubernetes project to improve the Kubernetes experience for all AWS customers.
These are the things necessary to run Kubernetes on AWS well: stability, backups, etc…
Now, with EKS, standing up your own Kubernetes control plane is greatly simplified. Instead of running the Kubernetes control plane in your account, you connect to a managed Kubernetes endpoint in the AWS cloud. This endpoint abstracts the complexity of the Kubernetes control plane: your worker nodes can check into a cluster, and you can interact with your Kubernetes cluster through the tooling you already know and love.
Sizing is hard to get right, and what happens when the cluster grows?
So we monitor the control plane…
https://github.com/heptiolabs/kubernetes-aws-authenticator - this is the project I’ll be talking about
Because we’re hosting Kubernetes as a service, we need to provide authentication on the API endpoint with IAM. IAM isn’t currently supported as a built-in authentication mechanism, so let’s dig into how this works.
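As a rough sketch of the client side (the cluster name, endpoint, and certificate data below are placeholders, and the exec command is taken from the authenticator project linked above, so treat the exact arguments as illustrative), a kubeconfig can use an exec credential plugin to generate an IAM-signed token for each request:

apiVersion: v1
kind: Config
clusters:
- name: my-eks-cluster                        # placeholder cluster name
  cluster:
    server: https://<managed-endpoint>        # placeholder managed API endpoint
    certificate-authority-data: <base64-ca>   # placeholder CA bundle
contexts:
- name: my-eks-cluster
  context:
    cluster: my-eks-cluster
    user: aws-iam-user
current-context: my-eks-cluster
users:
- name: aws-iam-user
  user:
    exec:
      # The exec plugin shells out to the authenticator, which generates a
      # bearer token from the caller's local IAM credentials.
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args: ["token", "-i", "my-eks-cluster"]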
When setting up a Kubernetes cluster, a cluster admin must make access and authorization design decisions:
Specifically, which mechanisms to use to authenticate the HTTP requests made by users or groups against the API server.
And, once TLS is established and the request is authenticated via the chosen mechanism, whether to authorize the action requested by the user as allowed by the policy associated with the user or group.
The webhook is an authentication hook for verifying bearer tokens: when a client attempts to authenticate with the API server using a bearer token, the API server POSTs a JSON-serialized authentication.k8s.io/v1beta1 TokenReview object containing the token to the remote service.
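To make the exchange concrete, here is a hedged sketch of the TokenReview the API server sends and the response the webhook returns when the token maps to a valid IAM identity (shown as YAML for readability; it is JSON on the wire, and the username, UID, and groups are made-up examples):

# Request the API server POSTs to the webhook
apiVersion: authentication.k8s.io/v1beta1
kind: TokenReview
spec:
  token: "<opaque bearer token produced by the client-side authenticator>"

# Response from the webhook on success
apiVersion: authentication.k8s.io/v1beta1
kind: TokenReview
status:
  authenticated: true
  user:
    username: "arn:aws:iam::123456789012:user/alice"   # illustrative IAM mapping
    uid: "123456789012/alice"                          # illustrative
    groups: ["system:masters"]                         # illustrative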
The VPC will span all AZs in the region.
KubeDNS watches Services and Endpoints, and maintains in-memory lookup structures to serve DNS requests.
DNS caching to improve performance
--cluster-dns=<dns-service-ip>
--cluster-domain=<default-local-domain>
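Those two flags are kubelet flags: --cluster-dns points pods at the cluster DNS service, and --cluster-domain sets the search domain (cluster.local by default). As a quick illustration with made-up names, a ClusterIP Service like the one below becomes resolvable inside the cluster as my-service.default.svc.cluster.local:

apiVersion: v1
kind: Service
metadata:
  name: my-service        # example name; yields my-service.default.svc.cluster.local
  namespace: default
spec:
  selector:
    app: my-app           # routes to pods labeled app=my-app
  ports:
  - port: 80              # port clients use to reach the service
    targetPort: 8080      # container port behind the service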
A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.
NetworkPolicy resources use labels to select pods and define rules which specify what traffic is allowed to the selected pods.
Calico optionally extends the K8s API with more policy capabilities and host protection (protecting the K8s infrastructure, not just pods, and standalone instances not running K8s).
Yes, Calico and amazon-vpc-cni-k8s will work together to enforce the ingress/egress rules of K8s network policies. You can think of security groups as providing the underlying cluster (VM-oriented) security and Kubernetes network policy as providing the fine-grained microservice (container-oriented) security.
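As an illustrative example (the names and labels are made up), a NetworkPolicy that only admits traffic from pods labeled role: frontend to pods labeled app: api on TCP port 8080 might look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend     # example name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api                 # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend       # only these pods may connect
    ports:
    - protocol: TCP
      port: 8080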
CNI Plugin networking
HPA – Horizontal Pod Autoscaler, Cluster Autoscaler
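For example (a minimal sketch with made-up names and thresholds), a Horizontal Pod Autoscaler that keeps a Deployment between 2 and 10 replicas based on average CPU utilization looks like this; the Cluster Autoscaler then adds or removes worker nodes when pods can no longer be scheduled:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                          # example name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                            # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70     # add replicas when average CPU exceeds 70%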
We are super excited about the ecosystem of services we now have, enabling AWS to be the best place to run containers securely, at scale, and for production workloads.
We want to keep making it easier for you to run your applications. This means work on ECS, work on EKS, and work with the open source community to make sure that common patterns are easy to run on AWS.
Load Balancer Health Check Initialization [December 2017]
ECS and ECR Available in Asia Pacific (Mumbai) Region [December 2017]
ECS and ECR Available in South America (Sao Paulo) Region [December 2017]
ECS and ECR Available in EU (France) Region [December 2017]
Docker 17.09 Support [December 2017]
Service Discovery for Amazon ECS [January 2018]
ECS and ECR Available in GovCloud (US) Region [January 2018]
AWS Fargate for ECS Available in US East (Ohio) Region [January 2018]
AWS Fargate for ECS Available in EU (Dublin) Region [February 2018]
Cost Allocation for Amazon ECS (including AWS Fargate tasks) and Amazon ECR [February 2018]
AWS Fargate for ECS Available in US West (Oregon) Region [March 2018]
Custom Domains for Amazon ECR [March 2018]
Secret Management for Amazon ECS [March 2018]
Blox Daemon Scheduler [March 2018]
Thank you very much for listening to our session, and we look forward to continued feedback from our customers in the coming weeks and months.