3. From traditional app to modern app
Existing application → modern app, along four tracks:
• Modern microservices: Add new services or start peeling off services from the monolithic code.
• Modern methodologies: Implement CI/CD and automation.
• Modern infrastructure: Move to the cloud as VMs or containers, or refresh hardware.
• Containerize applications: Re-architect apps for scale with containers.
10. How Kubernetes works
1. Kubernetes users communicate with the API server and apply the desired state.
2. Master nodes actively enforce the desired state on worker nodes.
3. Worker nodes support communication between containers.
4. Worker nodes support communication from the Internet.
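Step 1 above, applying desired state, typically means submitting a manifest to the API server. A minimal sketch of such a manifest (the names, image, and replica count are illustrative, not from the original deck):

```yaml
# Desired state: three replicas of an nginx web server.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Running `kubectl apply -f deployment.yaml` sends this to the API server; the control plane then works to make the cluster match the declared state.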
[Diagram] The master node runs the Kubernetes control plane: the API server, etcd, the scheduler, and the controller-manager (replication, namespace, serviceaccounts, etc.). Each worker node runs kubelet, kube-proxy, and Docker, hosting pods that in turn run containers, and handles traffic to and from the Internet.
11. Manage and operate Kubernetes with ease
• Build on an enterprise-grade, secure platform
• Accelerate containerized app development
• Run any workload anywhere
Kubernetes on Azure: portable, extensible, self-healing.
Simplify the deployment, management, and operations of Kubernetes.
12. Manage Kubernetes with ease
[Diagram] With DIY Kubernetes, you run self-managed master node(s) yourself. With AKS, Azure manages the control plane (API server, controller manager, scheduler, etcd store, and cloud controller): the user submits an app/workload definition to the Kubernetes API endpoint, and pods are scheduled over a private tunnel onto customer VMs running Docker.
Focus on your containers and code, not the plumbing of them.
Responsibilities, DIY with Kubernetes vs. managed Kubernetes on Azure:
• Containerization: customer in both
• Application iteration, debugging: customer in both
• CI/CD: customer in both
• Provisioning, upgrades, patches: customer (DIY); Microsoft (AKS)
• Reliability, availability: customer (DIY); Microsoft (AKS)
• Scaling: customer (DIY); Microsoft (AKS)
• Monitoring and logging: customer (DIY); Microsoft (AKS)
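One visible consequence of the managed control plane: once you connect to an AKS cluster, only the agent nodes show up as nodes. A short sketch (the resource group and cluster names are illustrative):

```shell
# Fetch kubeconfig credentials for an existing AKS cluster
# (resource group and cluster name are placeholders).
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Lists only the agent (worker) nodes; the control plane is
# managed by Azure and never appears as cluster nodes.
kubectl get nodes
```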
13. Kubernetes tasks: the old way vs. with Azure
Task: Create a cluster
  The old way: Provision network and VMs; install dozens of system components, including etcd; create and install certificates; register agent nodes with the control plane.
  With Azure: az aks create
Task: Upgrade a cluster
  The old way: Upgrade your master nodes; cordon/drain and upgrade worker nodes individually.
  With Azure: az aks upgrade
Task: Scale a cluster
  The old way: Provision new VMs; install system components; register nodes with the API server.
  With Azure: az aks scale
Azure makes Kubernetes easier.
Manage and operate Kubernetes with ease.
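The three `az aks` commands above take only a handful of arguments each. A sketch of typical invocations (the resource group, cluster name, node counts, and version number are illustrative):

```shell
# Create a managed cluster with three agent nodes.
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 3

# Upgrade the cluster to a newer Kubernetes version.
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.29.2

# Scale the agent pool out to five nodes.
az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 5
```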
14. Azure Kubernetes Service (AKS) support for Windows Server Containers
• Lift and shift Windows applications to run on AKS
• Seamlessly manage Windows and Linux applications through a single unified API
• Mix Windows and Linux applications in the same Kubernetes cluster, with a consistent monitoring experience and deployment pipelines
Now you can get the best of managed Kubernetes for all your workloads, whether they run on Windows, Linux, or both.
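In practice, Windows workloads run on a Windows Server node pool added to an existing AKS cluster. A minimal sketch (the resource group, cluster, and pool names are illustrative; Windows pools also require the cluster to use Azure CNI networking):

```shell
# Add a Windows Server node pool to an existing AKS cluster.
az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name npwin --os-type Windows --node-count 1
```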
az container create --name helloworld -g cs612aci --image microsoft/aci-helloworld --ip-address public
Kubernetes is open-source orchestration software for deploying, managing, and scaling containers. It is portable, extensible, and self-healing.
The fully managed Azure Kubernetes Service (AKS) makes deploying and managing containerized applications easy. It offers serverless Kubernetes, an integrated continuous integration and continuous delivery (CI/CD) experience, and enterprise-grade security and governance. Combined with DevOps practices, AKS helps unite your development and operations teams on a single platform to rapidly build, deliver, and scale applications with confidence.
Running managed Kubernetes on Azure has the following benefits:
Manage Kubernetes with ease: Minimize infrastructure maintenance by leveraging a managed control plane, automated upgrades and repair, and built-in monitoring. Achieve higher availability and protect applications from datacenter failures with redundancy across availability zones.
Accelerate containerized development: A faster end-to-end development experience, with integration with Visual Studio Code, Azure Pipelines, and Azure Monitor.
Build on an enterprise-grade, secure foundation: Advanced identity and access management using Azure Active Directory, and dynamic rule enforcement across multiple clusters with Azure Policy.
Run any workload anywhere: From Windows to Linux containers, from the public cloud to IoT Edge, use Kubernetes to orchestrate anything, running anywhere.
A Kubernetes cluster is typically made up of:
Master nodes, for system components like the API server, etcd store, and scheduler
Agent nodes, for user container workloads
Managing the cluster involves:
Monitoring the API server
Ensuring HA/DR for the etcd store
Safely managing upgrades across Kubernetes versions
Safely scaling the cluster in and out
Patching master and agent VM nodes
And on and on…
This is complex, error-prone, and expensive.
A managed service like AKS moves those tasks to the cloud provider.
When we look at continuous workflows for containers, we see that containers and registries are key concepts.
We start with what we call the inner loop: everything you do before you commit code.
From the beginning of your development cycle, you’re building and running your code in containers.
We pull base images from a container registry, either Docker Hub or perhaps our private corporate registry.
Once we're happy with our code, we commit it to a source code repository.
The build system takes our code and a Dockerfile that describes the build, and produces the collection of images we need for deployment.
The images are pushed to our private registry, with the environment configuration extracted from the image.
When deployment happens, we pull the images, add the environment information, and push them out to the various environments.
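The build-and-push step described above can be sketched with the Docker and Azure CLIs; the registry name and image tag here are illustrative placeholders, not from the original deck:

```shell
# Build the image from the Dockerfile in the current directory.
docker build -t myregistry.azurecr.io/myapp:v1 .

# Log in to the private Azure Container Registry and push the image.
az acr login --name myregistry
docker push myregistry.azurecr.io/myapp:v1
```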
In Azure, we have many different container hosting offerings:
Azure Container Service, which hosts best-of-breed open-source orchestrators;
Service Fabric, which can host guest containers;
and Azure Batch and App Service, for single-container workloads that can scale.
Azure continues to expand its container hosts as containers become the unit of deployment.