Scaling continuous localisation with microservices.
Internationalised code is committed to repositories.
Changes to externalised resource files trigger the start of continuous localisation.
And at any time, localised resources may be committed back to the repository.
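As a sketch, the trigger decision might look like this in Python; the resource-file patterns here are hypothetical, not the platform's actual configuration:

```python
from pathlib import PurePosixPath

# Hypothetical externalised-resource patterns; a real project defines its own.
RESOURCE_PATTERNS = ("*.properties", "*.resx", "strings*.json")

def should_trigger_localisation(changed_files):
    """Return True if a commit touches any externalised resource file."""
    return any(
        PurePosixPath(f).match(pattern)
        for f in changed_files
        for pattern in RESOURCE_PATTERNS
    )

print(should_trigger_localisation(["src/app.py", "i18n/messages.properties"]))  # True
print(should_trigger_localisation(["src/app.py"]))  # False
```

In practice this check would run in a commit webhook, so only commits that touch resource files start a localisation run.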
The best definition for continuous localisation is:
Autonomous Uninterrupted Localisation
Whereby each localisation process is independent of the others.
And these autonomous processes never stop, for anything or anyone.
Continuous localisation can be more than just file pushing.
These processes can build, test, and perform other autonomous functions, too.
However, large monolithic applications or scripts don’t scale. There are risks:
Monolithic applications are a single point of failure, and they are hard to maintain.
Monolithic applications have gone the way of the dinosaurs.
Additionally, conventional bare metal server clusters have limitations.
More capacity means more servers, which is expensive, and they must be added or removed manually.
We can avoid monolithic applications by using individual microservices.
A microservice is a little bit of functional code.
It wakes up, performs a single task, and then it is destroyed.
Microservices are extremely fast and very cheap to operate.
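A minimal sketch of that lifecycle: the process starts, handles one task, and exits. The handler name and payload fields are illustrative assumptions, not the real platform's API:

```python
# A microservice in miniature: created on demand, performs a single task,
# returns a result, and is then destroyed.

def handle(event: dict) -> dict:
    """Perform one localisation task; the process ends when this returns."""
    resource = event["resource_file"]
    target = event["target_locale"]
    # ... real work would happen here: fetch, localise, commit back ...
    return {"status": "done", "resource": resource, "locale": target}

if __name__ == "__main__":
    result = handle({"resource_file": "messages.properties",
                     "target_locale": "de-DE"})
    print(result["status"])  # done
```

Because the function holds no state between invocations, any number of copies can run in parallel and each is billed only for its own few milliseconds of work.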
Microservices have changed the world of DevOps.
We don’t pay for a cluster of servers running 365 days a year.
Instead, we pay only for the few milliseconds a microservice is operating.
Microservices enable rapid, frequent, and reliable delivery of large, complex applications.
This is our microservice architecture.
The microservices (in pink) are distributed across a highly secure hybrid cloud environment.
We use Amazon Web Services and an on-premises Kubernetes orchestrator called OpenShift.
===
Database, state machine, and Jenkins are excluded.
This simulation demonstrates microservices in action.
Instructions and code for running a microservice are maintained inside a Docker container.
Containers are the brightly coloured icons in this simulation.
An operational container is green. It turns yellow under stress, and a red container has failed.
We use Kubernetes pods to manage one or more containers.
And the containers are scaled up and down on demand.
Kubernetes pods are self-healing, too.
If a container fails, it is instantly replaced by a new container.
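That replacement behaviour can be illustrated with a toy reconciliation loop, a much-simplified stand-in for what a Kubernetes controller does, using the colour states from the simulation:

```python
# Toy reconciliation loop in the spirit of a Kubernetes pod controller:
# failed (red) containers are dropped and replaced until the desired
# replica count is restored.

def reconcile(containers, desired=3):
    """Remove failed containers and start new ones up to the desired count."""
    healthy = [c for c in containers if c["state"] != "red"]
    while len(healthy) < desired:
        healthy.append({"id": f"new-{len(healthy)}", "state": "green"})
    return healthy

pods = [{"id": "a", "state": "green"},
        {"id": "b", "state": "red"},     # failed container
        {"id": "c", "state": "yellow"}]  # under stress, still running
print(len(reconcile(pods)))  # 3
```

In a real cluster this loop runs continuously, which is why a failed container appears to be replaced instantly.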
The speed, reliability, and low cost of microservices mean we can confidently localise an unlimited number of products.
And finally, microservices save time and money that can be spent on localising even more products.
Today, we localise nearly four million resource files.
So, how far can we push continuous localisation with microservices?
=== By the Numbers ===
400+ unique products and growing.
221 repositories.
3.67m files.
434 branches.
Serving 17,000 developers in a 55,000 strong engineering organisation.
Cost savings are calculated as the aggregated estimated cost of human resource time, less platform operational costs.
8,000 hours at 80 USD per hour, less the annual operational cost:
640,000 - 14,400 = 625,600 USD
(1 year = 8,760 hours.)
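The savings arithmetic above, checked step by step in Python (all figures are the ones quoted on this slide):

```python
# Estimated annual saving: human resource time saved, less platform costs.
hours_saved = 8_000          # estimated human resource hours per year
rate_usd = 80                # estimated USD per hour
operational_cost = 14_400    # annual platform operational cost, USD

human_cost = hours_saved * rate_usd       # 640,000 USD
savings = human_cost - operational_cost
print(savings)  # 625600
```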
It is difficult to measure the cost of human resource time. By contrast, platform costs are simple: we pay for a few virtual machines, SCI, plus AWS/OpenShift resources, all easy to view and manage in a dashboard.