Give a brief story of how a deceptively simple change can end up being a tangle of dependencies that spiral out of control.
Introduce the example from a previous company where two different implementations of uploading a document were in place: one hosted via a web service endpoint, the other orchestrating the classes explicitly within a command line application.
Walk through the cascading breakages caused by the changes to the document upload code
Throw it away
Name the Mikado method
Published in 2014 (Ola Ellnestam and Daniel Brolund, Manning)
Furniture moving analogy
BENEFITS
Visualize what you are trying to achieve
Plan and focus improvements over several iterations
Collaboration / concurrent development
Deceptively simple
Four basic concepts make up the method.
Set a goal… then experiment… visualize your findings… and UNDO any changes if things broke
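The four steps above can be sketched as a recursive loop (an illustrative Python sketch, not from the book; `try_change`, `find_prerequisites`, and `revert` are hypothetical placeholders for "attempt the edit", "write down what broke", and "git reset"):

```python
def mikado(goal, try_change, find_prerequisites, revert):
    """Depth-first walk of the Mikado graph: attempt each goal naively,
    and if it breaks the system, record prerequisites, undo, and recurse."""
    if try_change(goal):                    # 2. experiment: attempt the change
        return                              # it compiled and tests pass: commit
    prereqs = find_prerequisites(goal)      # 3. visualize: write down what broke
    revert(goal)                            # 4. undo: back to a known working state
    for p in prereqs:                       # solve the prerequisites first
        mikado(p, try_change, find_prerequisites, revert)
    mikado(goal, try_change, find_prerequisites, revert)  # 1. retry the goal
```

The key property is that the only code that ever stays in the tree is a change that left the system working.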
Think about what you want for the future and the code that needs to change
Write it down
The goal is:
Starting point for change
The end point, or success criteria, of the change
Naive Approach
Analyzing code by hand can take hours to reveal what the compiler and tests might tell you in a minute
A typical experiment might be to move a method from one class to another, reduce the scope of a variable, or extract a class
In the SQLite database example, it might be simply changing all of the code to conditionally read from the SQLite classes
The experiment will usually surface several things that need to change first, in isolation, before the current goal can be achieved.
These are called the prerequisites
When do you stop experimenting and visualize the prerequisites? We will cover that later...
If your system is now broken...
You have visualized what needs to change first to avoid this.
Undo all the changes: `git reset --hard`
You are not throwing away work - you have learned new information about your system
Repeatability and predictability trump activity.
Move down to the next layer of prerequisites
At some point, you will find a prerequisite that does not break the system, and you can now commit this and move to another prerequisite
Experimentation can only be meaningful if the code is in a known working state first AND can easily be verified to be in a known working state.
This means a good test suite with good assert coverage...
… and leaning on your compiler… statically typed, compiled languages have a large advantage. Sorry.
To be a successful developer it is essential to know how to morph systems safely into new shapes.
Ever tried to implement a new feature but constantly end up battling the system?
Ever tried to clean up some code or do some big refactorings that you couldn’t get working and had to throw it all away?
Maybe you just have a system that makes you feel like this every day?
One common way to achieve this for complex changes that take several weeks or months to complete is to start a “refactoring” branch …
This is done with the goal of keeping the “breaking” changes out of the shipping product until they are finished…
But, after those weeks and months pass, the “refactoring” branch usually ends up looking a bit like this…
and you know there is a painful merge coming up…
Because changes are small, they can be compiled and released without the codebase being broken for an extended period of time
The Mikado Method
helps to uncover a non-destructive path within your regular development flow
keeps everyone on track with the goal, even if it takes several months
So, how did our team get from using Log4Net to write strings to a file and email exceptions, to having searchable logs with structured data in ELK?
Naive approach: remove all references to Log4Net, add Serilog, fix logging statements, add sinks, configure sinks, test sinks… oh dear
Wrote down findings… let's try to replace Log4Net one project at a time… Undo all of our changes, and try to replace Log4Net in the domain assembly…
Uh oh, the domain assembly is used by the WebService and the QueueListener, so both of those are actually prerequisites of the Domain assembly change…
Let's redraw our graph…
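The redrawn graph can be modelled as a prerequisites-of relation, where the safe starting points are the leaves (a sketch; the node names follow this talk's example, and the exact graph shape is an assumption):

```python
# Each goal maps to the prerequisites that must be completed first.
mikado_graph = {
    "Replace Log4Net everywhere": ["Replace Log4Net in Domain",
                                   "Replace Log4Net in DataReplicator"],
    # Domain is consumed by WebService and QueueListener, so changing
    # Domain's logging breaks them: they become its prerequisites.
    "Replace Log4Net in Domain": ["Replace Log4Net in WebService",
                                  "Replace Log4Net in QueueListener"],
    "Replace Log4Net in WebService": [],
    "Replace Log4Net in QueueListener": [],
    "Replace Log4Net in DataReplicator": [],
}

def leaves(graph):
    """Goals with no prerequisites: safe places to start committing."""
    return [goal for goal, prereqs in graph.items() if not prereqs]
```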
The DataReplicator seems like an easy target since it does not use the domain assembly at all… let's try to replace Log4Net in there with Serilog…
But… the logging signatures don't quite match… it might be easier if we first create our own ILogger with signatures that will be compatible with both Serilog and Log4Net, and write a Log4Net adapter first…
The new custom ILogger and Log4Net adapter worked… the commits and pull request were merged and released to production!
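The adapter step is the classic adapter pattern: call sites depend on our own interface, while the old library keeps doing the work until it is swapped out. A minimal sketch (Python stand-ins for the real C# interfaces; `ILogger`, `Log4NetAdapter`, and the `info` signature are illustrative):

```python
from abc import ABC, abstractmethod

class ILogger(ABC):
    """Our own logging interface, with signatures chosen so that both
    the old and the new logging library can sit behind it."""
    @abstractmethod
    def info(self, template: str, *args) -> None: ...

class Log4NetAdapter(ILogger):
    """Adapter: call sites depend only on ILogger, while the legacy
    logger keeps doing the actual work until it is replaced."""
    def __init__(self, legacy_logger):
        self._legacy = legacy_logger

    def info(self, template: str, *args) -> None:
        # The legacy API only accepts a flat string, so format eagerly.
        self._legacy.info(template.format(*args))
```

Once every call site talks to `ILogger`, swapping Log4Net for Serilog becomes a one-line change per project.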
Now we tried to implement the "replace Log4Net in DataReplicator" goal… but we discovered it would be easier to visualize a new prerequisite: create an ILogger adapter for Serilog… this allowed us to experiment with sink configuration and other things while we were creating the adapter and getting it to work…
Commit and roll out to production…
During discovery while implementing the Serilog adapter, we decided that another prerequisite, updating our Octopus Deploy config transforms with the Serilog sink configuration, would be a nice small chunk of work… so we did that too…
Replace the implementation in DataReplicator… Done…
DataReplicator done… rolled out to production
Goal still difficult to implement… we needed structured logging, so far we had just replaced Log4Net with Serilog but were still just logging strings to files and email…
We also needed to get the Redis sink working …
Decided to work on the strings to structured part first...
So let's add a goal to convert all strings to structured logs… started with the QueueListener first…
…and then found we couldn't see our structured log information. Why? The file and email sinks don't support displaying the fields from structured logs…
So ….
So we add another goal: view structured logs locally on our machines… The SEQ application gave us a quick way to collate and display our structured logs on our dev boxes
Convert the remaining string log events to structured log events
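The string-to-structured conversion is mostly a change in what gets recorded: instead of formatting values into the message and throwing them away, keep them as named fields next to the rendered message. A toy sketch of the idea (not the real Serilog API; `structured_event` and its return shape are illustrative):

```python
def structured_event(template: str, **fields):
    """Render a Serilog-style message template while keeping the raw
    fields alongside it, so a sink like SEQ or ELK can index and
    search them instead of grepping flat strings."""
    message = template
    for name, value in fields.items():
        message = message.replace("{" + name + "}", str(value))
    return {"template": template, "message": message, "fields": fields}
```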
So, after all that work, all those commits and releases to production, with the system still working at each stage… the last thing we had to do for our original goal was to pull down the NuGet package for the Redis Serilog sink… add some configuration… and release to production…
Incremental changes that are merged to master often
Can allow concurrent development on a large refactoring
Transparent and goal-oriented refactorings
Tackle large complex refactorings by breaking apart into smaller chunks