4. @wattsteve
Kubernetes Architectural Overview
[Diagram: a Kubernetes Cluster. One or more Kubernetes Master Servers (Linux) run etcd, the API Server, the Scheduler, and the Controller Manager. Multiple Kubernetes Nodes (Linux), each running the Kubelet, Docker, and the Kubernetes Proxy.]
5. @wattsteve
Installing Kubernetes
• Hosted Services: Google Compute Engine
• Support for a wide variety of Infrastructure (Azure, Rackspace, vSphere, AWS)
• Support for several OSes (RHEL, CentOS, Fedora, Debian, Ubuntu, Atomic, CoreOS)
• Local but automated (Vagrant/Ansible) * Magic *
• Local but manual (Fedora) * What I use *
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/fedora/fedora_manual_config.md
7. @wattsteve
We’ll start by Defining and Deploying a Pod
[Diagram: Kubernetes Cluster with a Kubernetes Master. Kubernetes Node 1 runs an NGINX Pod (NGINX Container) serving Browser Requests; Kubernetes Node 2 runs Other Pods (Other Containers).]
8. @wattsteve
You might also want to add a ReplicationController
[Diagram: the same cluster, now with a ReplicationController managing an NGINX Pod (NGINX Container) on both Kubernetes Node 1 and Kubernetes Node 2, each serving Browser Requests.]
9. @wattsteve
And add a Service to Proxy in front of it
[Diagram: the same cluster; Browser Requests now arrive at an NGINX Master Service, which proxies to the ReplicationController-managed NGINX Pods on Kubernetes Node 1 and Kubernetes Node 2.]
10. @wattsteve
What about Persistence? Let’s try out Volumes
Volumes are specified in a Pod and mounted onto a specified path within a container. There are several kinds of Volumes:
Ephemeral
• emptyDir (mount an ephemeral directory provided by the host)
File
• hostPath (mount a persistent directory provided by the host)
• NFS (mount an NFS share provided by a 3rd party)
• GlusterFS Distributed File System (mount an adjacent GlusterFS volume)
• Ceph Distributed File System (mount an adjacent CephFS volume)
Block
• GCEPersistentDisk (mount a GCE Block Device when in GCE)
• Ceph Block (mount an adjacent Ceph Block Device)
• iSCSI Block Devices (mount an adjacent iSCSI Block Device)
11. @wattsteve
For this example, we’re going to use GlusterFS
[Diagram: the same cluster; the NGINX Containers on Kubernetes Node 1 and Kubernetes Node 2 each mount a GlusterFS Volume, behind the NGINX Master Service and the ReplicationController.]
I feel like there is A LOT of info about Kubernetes but not a whole lot to get you quickly ramped up on the basics.
Just to set expectations. My goal is that everybody leaving this talk will understand:
The basics of what Kubernetes is (you can seek out other talks on niche topics like etcd)
How to set it up
How to build and deploy a Kubernetes Application
This talk won’t cover how Kubernetes is different from the other projects in the Container Orchestration and Management space
Who here has tried out Docker?
Who here has tried out Kubernetes?
Packaging, Consistency, Portability: using the same Image across Dev, Build, Test and Production makes it easy to maintain consistency.
The same Image is pulled down from the Registry. This solves a lot of complex deployment and migration issues.
[ DEMO ] ssh demo-1; docker images; ls -l /opt/data;
docker run -v /opt/data:/var/www/html/ -d dockerfile/nginx;
docker ps; docker inspect {containerID} | grep "IPAddress"; curl {ipaddress}/hello.html
docker stop {containerID}
But, we need to re-examine Docker’s Deployment efficacy through the lens of Scale Up and Scale Out Architectures:
Scale Up – Easy: Fairly trivial to launch and manage images on just a few servers.
Scale Out – Hard: Too much to track when you have images across 400+ nodes.
So we clearly lack an elegant solution for orchestrating and managing clusters of containers.
Kubernetes provides a framework for you to describe your Application, the Images it needs, and its dependencies as a Pod file, and it handles the deployment and availability for you.
This is Runtime View. Typical Scale Out Architecture.
Devs and POCs usually just use a single master with multiple nodes.
Kube Master Server –
What is etcd? A distributed Key/Value Store that the API Server uses to persist configuration information.
What is the Kube API Server? The RESTful API and command-line interface. Handles the creation of objects.
What does the Scheduler do? Ensures Pods are scheduled. Can apply scheduling policies.
What does the Kube Controller Manager do? Ensures Pod Replication Policies are implemented consistently.
Kube Node –
What is the Kube Proxy? Network Proxy for Containers and Service Endpoints.
What does the Kubelet do? Provisions the resources in the Pods provided to it. Interacts with Docker to launch containers.
Kubernetes is written in Go, but you don’t have to know anything about Go to use it.
[ Show Github Community Docs – get folks comfortable with navigating the project ]
2 Config Files on the Master: cat /etc/kubernetes/apiserver /etc/kubernetes/config
Demo start-master, stop-master on the Master
2 Config Files on each Node: cat /etc/kubernetes/kubelet /etc/kubernetes/config
Demo start-node, stop-node on a Node
Demo kubectl get nodes
So it sounds good in theory; let’s check it out in practice. We’re going to build a scale-out containerized web application using NGINX.
We are going to build a layered application, so we’ll start small and then build up.
We are going to build a Pod to achieve what we see in this picture, which is … (Mgmt Layer + App Layer)
Pods describe a tightly coupled group of containers that typically need to be deployed on the same server because they share resources.
Pods are deployed on Kubernetes Nodes.
Pods are described in Pod Files which also articulate what shared resources the containers in the Pod require.
An example of a Shared Resource is a Volume.
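As a concrete sketch, a Pod file along these lines would cover the points above. This is illustrative, not the demo’s actual nginx-pod.yaml: it is written against the current v1 API (the talk’s files may use an older API version), and the names, image, and paths are assumptions.

```yaml
# Hypothetical minimal Pod file (v1 API); the image, names,
# and paths below are illustrative, not the demo's.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx            # labels let other objects select this Pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: web-data      # a shared resource: the Volume below
      mountPath: /usr/share/nginx/html
  volumes:
  - name: web-data
    hostPath:
      path: /opt/data     # persistent directory provided by the host
```

Every container in the Pod can mount the same Volume, which is what makes a Volume a shared resource of the Pod rather than of a single container.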
Demo: You can pretty much do everything with the kubectl command line:
ssh demo-1; cat /opt/data/hello.html
ssh demo-master
Create the Pod: cat nginx-pod.yaml
Submit the Pod: kubectl create -f nginx-host.yaml
View Pod Status: kubectl get pods
Delete Pods: kubectl delete pods {MyPodName}
ssh demo-1; curl 172.17.0.11/test/hello.html
Affinity: you can also ensure Pod affinity using labels.
kubectl label nodes test-node Disks=SSD, then add a NodeSelector with the key and value in your Pod file.
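As a hypothetical Pod-file fragment showing that second step (the Disks=SSD key/value mirrors the label command above):

```yaml
# Fragment of a Pod spec: schedule this Pod only onto nodes
# that carry the label Disks=SSD (applied via kubectl label).
spec:
  nodeSelector:
    Disks: SSD
```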
Used to ensure that a certain number of copies of a particular Pod are always running across the Kube Nodes. Even if it’s just one (Singleton).
ReplicationControllers are described in Files that also include the Pod definition. No need for a separate Pod file.
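A sketch of what such a combined file can look like. This is not the demo’s actual nginx-rc.yaml: it uses the current v1 API, and the names, replica count, and labels are assumptions.

```yaml
# Hypothetical ReplicationController file (v1 API) with the
# Pod definition embedded as the template -- no separate Pod file.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginxrep
spec:
  replicas: 2
  selector:
    app: nginx            # manages any Pod carrying this label
  template:               # the Pod definition lives inline here
    metadata:
      labels:
        app: nginx        # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```

The selector/label pair is the loose coupling mentioned below: the controller owns whatever Pods match the labels, not a hard-coded list of Pod names.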
# kubectl create -f nginx-rc.yaml
# kubectl get rc
# kubectl get pods
# ssh demo-1; curl IP/test/hello.html
# kubectl delete rc {MyReplicationControllerName}
Kubernetes uses Labels to organize its assets.
Replication Controllers use these labels to loosely couple cooperating Pods.
You can turn the number of replicas up/down using the following command: kubectl resize --replicas=1 rc nginxrep
When using ReplicationControllers you have the same Pod running on multiple hosts.
A Kubernetes “Service” is a Load Balancer that will proxy inbound connections and route requests round-robin between the containers launched by the Pod.
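A sketch of what such a Service definition could look like. This is not the demo’s actual service.json: it uses the current v1 API, and the name and selector are assumptions loosely taken from the surrounding commands.

```json
{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": { "name": "nginxrep" },
  "spec": {
    "selector": { "app": "nginx" },
    "ports": [ { "port": 80, "targetPort": 80 } ]
  }
}
```

The selector works the same way as the ReplicationController’s: the Service routes to whichever Pods currently carry the matching label, so replicas can come and go without reconfiguring the proxy.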
kubectl create -f service.json
kubectl label services nginxrep version=1.0
Great Demo of the Declarative Power of Kubernetes.
We are going to mount network storage (a GlusterFS Distributed File System volume) into the containers on the web-serving directory.
This provides central storage for all the web servers. Change the files in just one place.
You could also co-locate the web servers directly on top of GlusterFS to improve performance.
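One way the endpoints file could look, as a hedged sketch rather than the demo’s actual gluster-endpoints.json (v1 API; the name and IP are made up):

```json
{
  "apiVersion": "v1",
  "kind": "Endpoints",
  "metadata": { "name": "glusterfs-cluster" },
  "subsets": [
    {
      "addresses": [ { "ip": "10.0.0.11" } ],
      "ports": [ { "port": 1 } ]
    }
  ]
}
```

The Endpoints object tells Kubernetes where the Gluster servers live; the RC’s Pod template would then declare a glusterfs volume referencing it (endpoints: glusterfs-cluster plus the Gluster volume name as path) and mount it at the web-serving directory.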
Show/Create the Endpoints: kubectl create -f gluster-endpoints.json
Show/Create the RC: kubectl create -f nginx-rc-gluster.json
Show RC and Pods: kubectl get rc/pods
Show Mounts Created: ssh demo-1; mount
Show Web Serving: ssh demo-1; curl {IPAddress}/test/hello.html
Change the Data/Re-Show