The document discusses using Kubernetes as an orchestrator for the A10 Lightning Controller. Some key points:
1) Kubernetes provides automatic recovery of pods on failure, easy rolling upgrades of code, and automated scaling of microservices.
2) With Kubernetes, the controller can be deployed on-premise and scaled across multiple VMs, with launching and scaling automated. Installation is also now independent of the underlying infrastructure.
3) The journey moved from a fully manual deployment to a Kubernetes deployment, which simplified overlay networking, passing environment variables to pods, and adding or replacing nodes.
4. Why we thought of Kubernetes
• On failure, K8s brings up the pod automatically
• Rolling upgrade of code can be done easily
• Scaling policy can be set up to scale each micro service as needed
• Pod health can be monitored easily and acted upon
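The failure-recovery, rolling-upgrade and health-monitoring points above map onto fields of a K8s Deployment. A minimal illustrative sketch, not the actual Controller manifests (the names, image and probe endpoint are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-microservice     # hypothetical micro-service name
spec:
  replicas: 2
  strategy:
    type: RollingUpdate          # code upgrades roll through pods gradually
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: example-microservice
  template:
    metadata:
      labels:
        app: example-microservice
    spec:
      containers:
      - name: app
        image: example/app:1.0   # placeholder image
        livenessProbe:           # pod health is monitored; a failing pod is restarted
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 10
```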
5. What we achieved at a high level
Before:
• Controller was only available as SaaS
• Launch and scaling were manual
• Installation was dependent on the underlying infrastructure platform
After:
• Controller is available for on-premise deployment
• It can be scaled from one VM to multiple depending on the use case
• Launch and scaling are automated
• Installation is independent of the underlying infrastructure platform
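One way the automated scaling can be expressed is a HorizontalPodAutoscaler per micro-service; this is an illustrative sketch with hypothetical names and thresholds, not the Controller's actual policy:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: example-microservice-hpa   # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-microservice     # the Deployment being scaled
  minReplicas: 1
  maxReplicas: 5                   # illustrative bounds
  targetCPUUtilizationPercentage: 80
```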
From AWS VMs to K8s Containers in Multiple Environments
6. Current Environment for Controller
• Kubernetes core components
• Kube-dns – Internal DNS service
• Flannel – Overlay networking
• Heapster – Monitoring of pods
• Kubernetes Dashboard – Helps monitor the pods
• jq – Programmatically editing JSON for K8s objects
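Since jq is listed for editing K8s object JSON, a small illustrative pipeline; the one-field Deployment object here is a stand-in for a real exported manifest:

```shell
# Bump the replica count in an exported Deployment object before re-applying it.
# The JSON below is a minimal stand-in for a real K8s object.
echo '{"kind":"Deployment","spec":{"replicas":1}}' \
  | jq '.spec.replicas = 3'
```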
7. The Journey: From AWS VMs to K8s Containers
• Everything was manual to start with
• Selecting Master and Minion
• Mapping node port to container port
• Cross node communication Configuration
• Limitations Realized
• Can't run two pods of the same type on one node
• Packaging and distribution issues, e.g. build process automation
• Data loss when a node stops
8. The Journey: From AWS VMs to K8s Containers
• Second Level Issues – After some level of simplifications
• Cumbersome overlay network configuration
• Passing env info to pods – startup-script env variables are not scalable
• Installation still took too many steps
• Thought for Future – Solved now
• Adding node to the K8s cluster when more capacity is needed
• Migrating static IP of the node to other node when node is replaced
• Adding component in future with minimal change in existing components
9. Design Choices
• Keep all micro-services as-is
• One K8s service per micro-service
• One pod per k8s deployment
• Multiple services exposed externally
• Continue to use third-party registry service
• Kubernetes Registry Service can be used instead of third-party
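The "one K8s Service per micro-service" choice can be sketched as a ClusterIP Service fronting that micro-service's Deployment; the names here are hypothetical, not the Controller's actual components:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: config-service        # stable DNS name resolved by kube-dns
spec:
  selector:
    app: config-service       # matches the pods of that micro-service's Deployment
  ports:
  - port: 80                  # port other components use via the service name
    targetPort: 8080          # port the container actually listens on
```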
10. Accessing Micro Services
• Multiple micro services of the Controller need to be accessible from outside
• Micro services accessing each other also can't depend on IP addresses
• Kubernetes Services and kube-dns provide a fixed name as well as a fixed IP address for each service
• All internal access (between components) is using service name
• Service IP is mapped to Node IP for all external access
• Public static IP is assigned to the node for external access
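Mapping a service port onto the node IP for external access is what a NodePort Service does; a hedged sketch with hypothetical names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-gateway           # hypothetical externally exposed micro-service
spec:
  type: NodePort              # exposes the service on a port of every node's IP
  selector:
    app: api-gateway
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30443           # reachable at <node public static IP>:30443
```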
11. Simplifying Networking
• Each pod gets an IP address that is internal to the node
• Overlay networking facilitates communication between pods across nodes
• Flannel creates an overlay network that spans across nodes
• Each pod gets an IP address from the same subnet
• This subnet is internal to the K8s cluster
• This provides seamless communication between pods across nodes
• Private Subnet for Service IPs is configured in K8s configuration
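Flannel's cluster-internal pod subnet is defined in its network configuration (commonly stored in etcd or a kube-flannel ConfigMap); the subnet and backend below are illustrative, not the deck's actual values:

```yaml
# Flannel network config fragment; each node is allocated a slice of this
# subnet, and pods on any node get addresses from it.
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```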
13. Persisting Data
• Pods may come and go, and can be spawned across nodes
• Persistence is required for maintaining state across reboots or across clusters
• NFS, AWS EBS, GCE Persistent Disk or Azure Disk can be used as K8s
Persistent Volume (PV)
• In a K8s Deployment object, a 'PV Claim' can be made for each Pod, as needed
• K8s provides a PV matching the Claim to the Pod
• This mounts the PV file system into the container's file system
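The claim-and-mount flow above can be sketched as a PersistentVolumeClaim plus a pod that mounts it; names, sizes and paths are hypothetical:

```yaml
# A claim for storage; K8s binds a matching PV (NFS, EBS, GCE PD, Azure Disk)
# to it. Size is illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datastore-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# The pod mounts the claimed PV's file system into its container.
apiVersion: v1
kind: Pod
metadata:
  name: datastore
spec:
  containers:
  - name: datastore
    image: example/datastore:1.0   # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/data     # the PV file system appears here
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: datastore-data
```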
15. Deploying Clustered Applications
• In clustered applications (e.g. datastores), each pod needs to know about the other pods running the same application
• Such applications need to be deployed using a K8s Stateful Set
• A K8s Stateful Set provides fixed names for each instance/pod
• PV Claims in each instance of a Stateful Set also have fixed names
• Having fixed names helps a lot in the configuration and functioning of clustered applications
• When the application requires more capacity, it is easy to add instances
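The fixed pod names and fixed per-instance PV Claims described above come from a StatefulSet with volumeClaimTemplates; this sketch uses a hypothetical datastore, not the Controller's actual one:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: datastore          # pods get fixed names: datastore-0, datastore-1, ...
spec:
  serviceName: datastore   # headless service giving each pod a stable DNS name
  replicas: 3              # raising this adds instances when capacity is needed
  selector:
    matchLabels:
      app: datastore
  template:
    metadata:
      labels:
        app: datastore
    spec:
      containers:
      - name: datastore
        image: example/datastore:1.0   # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumeClaimTemplates:    # each pod gets its own fixed-name claim:
  - metadata:              # data-datastore-0, data-datastore-1, ...
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```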
16. We do many exciting things
You can join the team
mshah@a10networks.com
amathur@a10networks.com
Thanks
Editor's notes
• External access: least expensive option and can be used across clouds; the drawback is node monitoring