2. Disclaimer
TL;DR: My opinions are my own.
The views and opinions expressed in this presentation are those of the author and
do not necessarily reflect the official policy or position of Willis Towers Watson.
The example analyses within this presentation are only examples. They should
not be used in real-world products, as they are based only on very limited and
open-source information. Assumptions made within the analysis are not reflective of
the position of Willis Towers Watson.
3. Goals
Define DevOps
A first cursory glance into the day-to-day
Introduction to CI/CD
Introduction to Linux automation
Get a feel for future hands-on workshops
Learning resources
8. My Definition
DevOps is the adoption of a culture revolving around the twelve-factor app principles. In a
startup culture, it’s something that the developers should own. At scale, it makes
sense to integrate this into your architecture teams for a top-down approach.
26. Docker
Linux
Windows
Utilizes cgroups and namespaces to isolate processes
Shares kernel space with parent machine running the Docker Engine
Stateless containers are best containers
Plenty of orchestration tools
Not a VM
The ops engineer, or systems engineer, will typically say that a DevOps engineer is a developer who handles the tooling for the IT department. They make sure that the machines are in a consistent state and create tools that allow us to automate the turn-up of new systems.
I’ve had developers tell me that a DevOps engineer is someone who handles the build pipeline, meaning continuous integration and continuous delivery. They get my code to the “cloud.” You can quickly see the divide between the two.
And of course, PT belts.
In my experience, DevOps typically originates as a grassroots movement from either the systems or development side. Both face unique challenges in driving adoption from the cooperating department, and for the most part this is due to a technical language barrier: systems people typically don’t use the same terminology as someone in a development department.
The twelve-factor principles came out of Heroku and are a guideline for deploying scalable applications. In development terms, I like to think of them more as an interface, or contract, between the two departments.
Now, this doesn’t mean everything lives in one Visual Studio solution, for you C# developers. The front end, consumer, and bus each live in one codebase for the individual component, while still producing separate deployable artifacts.
Keep it as living documentation that follows the application, e.g. a docker-compose file that declares the dependency versions but spins them up separately from the application.
Use a key-value (KV) store where you can to pull this data. You should only have to set the “Environment” environment variable; your code should pull down the right config from its KV store (Consul/Vault).
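As a minimal sketch of that idea: a nested mapping stands in for the Consul/Vault KV tree, and the only thing set per machine is the ENVIRONMENT variable. The store layout and key names here are hypothetical, purely for illustration.

```python
import os

# Hypothetical stand-in for a Consul/Vault KV tree, keyed by environment.
KV_STORE = {
    "dev":  {"db_host": "db.dev.internal",  "log_level": "DEBUG"},
    "prod": {"db_host": "db.prod.internal", "log_level": "WARNING"},
}

def load_config(kv, environment=None):
    """Pull the config branch matching the ENVIRONMENT env var.

    The application code stays identical across environments; only the
    single environment variable set on the box differs.
    """
    env = environment or os.environ.get("ENVIRONMENT", "dev")
    try:
        return dict(kv[env])
    except KeyError:
        raise KeyError(f"no config branch for environment {env!r}") from None
```

In a real setup the mapping would be replaced by a call to the Consul or Vault HTTP API, but the contract stays the same: code asks for "my environment's config" and never hard-codes per-environment values.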
E.g. a service bus (dispatcher/consumers) should be treated as an attached service and hence live in the same codebase: “loosely coupled, highly cohesive.”
Your build servers are not your dev/qa environment. There should be parity here, which we will discuss in later slides.
Being stateless allows your application to scale quickly. Some of the things to consider are dealing with user sessions on the front end and how you store the active server sessions (MSSQL, Redis, or otherwise). If you’re bound to state, you’re bound to a failure domain for that application.
Essentially, this is just about making things accessible to other networked services. You also need to consider the trust boundary between the services and firewall policies.
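A minimal sketch of that port-binding idea, using only the standard library: the service exports itself over HTTP on a port taken from the environment rather than relying on an external web-server container. The `PORT` variable name and the `/` health endpoint are assumptions for illustration.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    # Tiny handler so the service is consumable by other networked services.
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep request logging quiet in this sketch

def make_server(port=None):
    # The port comes from the environment (hypothetical PORT variable),
    # so the same artifact runs unchanged in every environment.
    port = port if port is not None else int(os.environ.get("PORT", "8080"))
    return HTTPServer(("127.0.0.1", port), HealthHandler)
```

Firewall policy and trust boundaries then become questions of which services are allowed to reach that bound port, not of how the app is wired internally.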
Again going back to the stateless model, if I’m not tied to a stateful service, I can dynamically scale up/down as needed.
This is the most underrated slide. If you have to wait until “off” hours to deploy, then you’re shooting yourself in the foot on your cycle times. If you can’t deploy while users are consuming your application, you need to rethink your session states and concurrency.
If you’re going to have any sort of capacity planning, you need to stress test your dev environment. This will give you a better quality gate when pushing code through to production. Parity also gives you the opportunity to mock any test cases for your prod environment to have the confidence to deploy during peak hours.
Give your devs the ability to see production metrics and exceptions if they don’t have access to production. You should also consider inserting a correlation ID into the logs so you can trace a transaction across numerous applications. (Tools: Splunk, Sumo Logic, the ELK stack)
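The correlation-ID idea can be sketched with Python's standard `logging` filters: every record gets stamped with an ID, so a single transaction can be grepped across services in Splunk/Sumo Logic/ELK. The ID format and logger names are assumptions for illustration.

```python
import logging

class CorrelationFilter(logging.Filter):
    """Stamps every log record with a correlation ID so one transaction
    can be traced across numerous applications."""
    def __init__(self, correlation_id):
        super().__init__()
        self.correlation_id = correlation_id

    def filter(self, record):
        record.correlation_id = self.correlation_id
        return True  # never drop records, just annotate them

def build_logger(correlation_id, stream=None):
    # One logger per transaction in this sketch; real services would
    # usually pull the ID from an incoming request header instead.
    logger = logging.getLogger(f"app.{correlation_id}")
    logger.setLevel(logging.INFO)
    handler = logging.StreamHandler(stream)
    handler.setFormatter(
        logging.Formatter("%(correlation_id)s %(levelname)s %(message)s"))
    handler.addFilter(CorrelationFilter(correlation_id))
    logger.addHandler(handler)
    return logger
```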
A general-purpose admin box; its tasks should be automated, but able to be run manually if necessary.
As the first point states, there are plenty to choose from, the most common being Jenkins. There is a principle that I won’t go into too deeply, but keep your build automation version controlled, meaning that you shouldn’t be dependent on a collection of build steps configured in your CI tool. Your build tool should call your semantically versioned build utilities.
Each of these has a different use case. Ansible is one of my favorites, simply because it’s agentless configuration management: you can point it at almost anything and it will invoke a set of commands based on YAML config files. CFEngine, Chef, and Puppet are all agent-based and poll a master for updates. Salt does a different type of polling, and I believe it also does some caching, but I can’t speak in detail about that since I haven’t been able to use it in production.
It may seem like a plug for HashiCorp’s pipeline here, but this is something you’ll typically see from the cutting-edge ops side: Vagrant for your test environments, Packer to roll up the state of a box into an image, Terraform to deploy massive quantities of that image, Nomad to handle some of the auto-scaling functionality, and Vault for secret management. With this suite of tools, you can start creating your own “CloudFormation-like” services on-prem. Terraform also supports most of the major cloud providers, letting you define your infrastructure as code.
Docker is definitely something that’s hot on the market at the moment. Linux containers and BSD jails have been around for a really long time; Docker just makes them easier to consume. I can’t stress this last part enough: containers aren’t truly separated, because they share the same kernel space with the host and the rest of the containers. Containers can bleed into the host from time to time and cause some noisy-neighbor situations, in addition to the potential kernel panic that could cripple the rest of the containers. It’s not all bad, otherwise people wouldn’t be using it. What it does well is give you better parity when it comes to the environment that the application actually executes within. Some people claim it simplifies deployment, but I would argue that they have the right tooling that makes it easy, not Docker itself.
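The cgroup isolation described above is visible from inside any Linux process. As a small sketch, this parses the `/proc/<pid>/cgroup` format (`hierarchy-ID:controllers:path`); inside a container the paths point at a per-container cgroup, while host processes sit at or near the root. The sample paths in the test are hypothetical.

```python
import os

def parse_cgroups(text):
    """Parse /proc/<pid>/cgroup text into controller -> cgroup path.

    Each line is 'hierarchy-ID:controllers:path'; on cgroup v2 the
    controller field is empty, so that entry is keyed by "".
    """
    cgroups = {}
    for line in text.strip().splitlines():
        _, controllers, path = line.split(":", 2)
        for controller in (controllers.split(",") if controllers else [""]):
            cgroups[controller] = path
    return cgroups

if __name__ == "__main__" and os.path.exists("/proc/self/cgroup"):
    # On a Linux host this shows which cgroups the current process lives
    # in; run the same snippet inside a container to see the
    # per-container paths the Docker Engine created.
    with open("/proc/self/cgroup") as f:
        for controller, path in sorted(parse_cgroups(f.read()).items()):
            print(f"{controller or '(v2)':<12} {path}")
```

Nothing here is Docker-specific, which is the point: the Engine is orchestrating kernel primitives (cgroups, namespaces) that any process can inspect, not providing VM-style separation.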