Yes, I’ve gone for hipster hand drawn slides
I bought an Apple Pencil so I’ve gotta justify it somehow
Introduction
Mention Apptio
Don’t worry, I’m not going to try to wrap up this prez. I’m going to tell you a story about how we got here
This is a story about how we discovered we need configuration mgmt for our Kubernetes clusters
Our old deployment system consists of a bunch of Java wrappers around some Bash scripts which wget some artifacts
It’s slow.
We wanted to try something else, and Kubernetes was a thing
We built the clusters the way we know how to build stuff - with puppet
We used kubeadm and consul for load balancing
EKS was a twinkle in Amazon’s eye (and it’s still shit)
Using cloud-init didn’t sound like much fun
Everyone who says configuration management isn’t needed hasn’t had their Infosec team run a Nessus scan
We realized that clusters aren’t very usable out of the box
So we added a bunch of stuff and called it AKP
A better name than EKS fwiw
We generally used helm charts
Helm is amazing: it makes installing default components simple and easy.
That’s fine, right up until you need to make a change to a helm chart.
Pull requests can be slow
Helm is insecure
We built another cluster. And another. And we kept having to install the same stuff, but with slightly different values
One day, our devs came to us and asked why our DNS wasn’t updating correctly. It turned out someone had put the wrong configuration into the ingress controller
We realized we had built snowflakes. There was no repeatability and worse, we had no configurability as we scaled out.
We needed to install these Kubernetes components in a repeatable way
We went back to what we knew: Puppet
Used the puppet helm module to install charts
Puppet gave us configurability with hiera
Puppet isn’t cluster aware at all
It ran on the master nodes, all 3 of them
We regularly ran into race conditions where puppet would do strange things
It’s just not designed for this higher level abstraction
These are familiar problems
But we couldn’t understand how nobody else was having this problem
Is everyone happy with running snowflakes?
Does everyone have one cluster?
I did what anyone would do - twitter thread - jk - I wrote a blogpost
We tried a few different things
Ksonnet - we liked this, but it was complicated and seemed focused on app deployment
We then decided to try templating values.yaml and realized we were just insane
Kapitan - more yaml templating, but this time jinja2
Ansible - say no more
Notice the trend here. Why as an industry have we made it acceptable to use templating languages for configuration?
When did this happen? Who’s responsible?
Kubernetes is quite happy to accept JSON and computers are good at generating it
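To make the point concrete, here’s a minimal sketch of generating a Kubernetes manifest as data rather than text templating. The resource name and port are illustrative, not from the talk:

```python
import json

# Build a Kubernetes Service manifest as plain Python data,
# then serialize it to JSON. No string templating involved:
# the structure is manipulated as a real data structure.
def make_service(name, port, env):
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {
            "name": name,
            "labels": {"env": env},
        },
        "spec": {
            "selector": {"app": name},
            "ports": [{"port": port, "targetPort": port}],
        },
    }

manifest = json.dumps(make_service("ingress", 443, "prod"), indent=2)
print(manifest)
```

Because it’s data all the way down, overrides are dict merges instead of regex-and-indentation gymnastics, and the output is valid JSON that kubectl will accept directly.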
After some frustration, one of my smarter colleagues decided he was going to write something
He named it kr8
It was initially a set of bash scripts
We rewrote it in Go a while ago, but some of the bash scripts still remain
It can take helm charts, pure jsonnet, yaml or json and manipulate them
Creates deployable manifests for each cluster, which you can read and understand and debug
Deployable with kubectl!
A component is something you want on some or all clusters
They contain a parameters definition, a Taskfile (go-task), and some jsonnet
That jsonnet depends on your component source (helm - patches)
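Roughly, a component’s parameters file is just a jsonnet object of defaults. This is an illustrative sketch of the idea, not kr8’s exact schema (check the kr8 repo for the real layout):

```jsonnet
// params.jsonnet -- hypothetical component defaults, not kr8's exact schema
{
  namespace: 'ingress',
  release_name: 'nginx-ingress',
  replicas: 2,
}
```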
Clusters live in a hierarchical directory structure
Named clusters contain a cluster.jsonnet
You can also have a params.jsonnet which is inherited by the clusters below it
So if you have multiple clusters in the hierarchy, it’s easy to supply values for clusters globally
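The inheritance itself is just jsonnet object composition: a cluster file pulls in the shared params and overrides only what differs. Again, an illustrative sketch of the mechanism rather than kr8’s exact file layout:

```jsonnet
// cluster.jsonnet -- hypothetical example of inheriting shared params
local global = import '../params.jsonnet';

global + {
  cluster_name: 'us-east-prod',
  replicas: 5,  // override the inherited default for this cluster only
}
```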
So let’s see this in action
Our industry is changing
Kubernetes and cloud providers are here
Terraform and Pulumi are tools in this space
There might be a better way of doing this
This might be the best way, but maybe not?
The people in this room have solved this problem for multiple abstraction layers
That abstraction layer has changed. We don’t just need to configure operating systems anymore, we need to use the concepts we’ve learned and push them into the new layers
Developers don’t get it, I’ve seen this in the wild
If someone wants to solve this problem for us, please do
If you can write decent Go, we would love some PRs