The evolution of microservices architecture: mainframe, midrange, client-server, SOA. Best practices for microservices. Load balancing, Big Data, design patterns. When and why to use microservices.
40. • For simpler services, a container-less or self-contained service is
the better choice
• But more advanced services may be faster and easier to implement
with the power of an in-container implementation.
44. Communication
• Everything is allowed
• But: You should establish one standard for your platform.
• Principles
• Loose coupling – services should not know about each other
• Smart endpoints, dumb pipes
• No intelligence in the communication channel
• No ESB
• REST is a good choice for many scenarios
• Easily consumable from all languages
• Interfaces can evolve while remaining backwards compatible
• URI references help with navigation between services and abstract away the
physical location of resources.
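The REST principles above can be sketched with the plain JDK HTTP client (Java 11+). The `/customers/{id}` resource and the injected base URI are hypothetical; in practice the base URI would come from configuration or service discovery, which is what abstracts the physical location of the resource.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CustomerClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String baseUri;

    // The base URI is injected (e.g. from service discovery), so callers
    // never hard-code the physical location of the resource.
    public CustomerClient(String baseUri) { this.baseUri = baseUri; }

    // Builds the resource URI; exposed separately so it can be reasoned
    // about (and tested) without a running service.
    public String resourceUri(String customerId) {
        return baseUri + "/customers/" + customerId;
    }

    // Plain HTTP GET with a JSON Accept header: consumable from any
    // language, with no intelligence in the communication channel.
    public String getCustomer(String customerId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(resourceUri(customerId)))
                .header("Accept", "application/json")
                .GET()
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```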
45. Communication – further principles
• Asynchronous Messaging
• Reliable event distribution
• High performance
• Load protection of critical services
• Resilience
• Tolerance against failures
• Error recovery
• Avoid error cascades
• API Versioning
• Don't do it for internal APIs!
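The resilience principles above (tolerate failures, recover, avoid error cascades) can be sketched as a minimal circuit breaker. This is an illustration, not a production implementation: a real breaker (e.g. Hystrix or Resilience4j) would also add a half-open state with a timeout so an open breaker can recover.

```java
import java.util.function.Supplier;

// Minimal circuit-breaker sketch: after `threshold` consecutive failures
// the breaker opens and fails fast with a fallback, so a broken service
// does not drag its callers down with it (no error cascade).
public class CircuitBreaker {
    private final int threshold;
    private int consecutiveFailures = 0;

    public CircuitBreaker(int threshold) { this.threshold = threshold; }

    public boolean isOpen() { return consecutiveFailures >= threshold; }

    public <T> T call(Supplier<T> service, T fallback) {
        if (isOpen()) {
            return fallback;          // fail fast while the breaker is open
        }
        try {
            T result = service.get();
            consecutiveFailures = 0;  // success resets the failure count
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;    // tolerate the failure, degrade gracefully
            return fallback;
        }
    }
}
```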
46. Testing
• Unit Tests
• Integration tests suffice in many cases because the services are small
• Test the isolated service (Other services should be mocked)
• Consumer-Driven Tests
• Idea: The integration tests of a service are defined and implemented by the
consumer (not by the service provider).
• No release before the service passes all consumers' tests
• Test with the real expectations, not with the service specification
• Very smart concept, but hard to maintain
• Has the risk of high test redundancy for common APIs
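The consumer-driven idea can be sketched in plain Java (contract frameworks such as Pact automate this; the JSON fields below are hypothetical): the consumer encodes its real expectations, and the provider must pass this check before every release, regardless of what the service specification says.

```java
// Consumer-driven contract sketch: the *consumer* owns this check and
// ships it to the provider, which runs it against its real responses.
public class CustomerContract {
    // The consumer's actual expectation: it only reads `id` and `name`,
    // so only those fields are part of the contract.
    public static boolean satisfiesConsumer(String jsonBody) {
        return jsonBody.contains("\"id\"") && jsonBody.contains("\"name\"");
    }
}
```

The point of testing the expectation rather than the specification is that the provider stays free to change anything no consumer actually depends on.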
47. Deployment
• Continuous Delivery
• Create a deployment pipeline
• Need to automate everything
• One monolith may be easy to deploy, 100 Micro Services may not!
• Packaging & Provisioning
• Usage of established standards: DEB, RPM, …
• Robust init scripts
• Configuration management: Puppet, Chef, ...
48. Deployment as platform
• 1 Micro Service : 1 Linux System
• Docker
• LXC-based virtualisation
• Similar to chroot (but a lot better!)
• Slim and fast
• Git-like layered images, so changes to the images can be tracked
• For Hardliners
• Install the Micro Service by shipping and starting the system image
• No packaging
• No init scripts
49. Monitoring
• Realtime metrics
• Monitor what is currently happening
• Fast reaction to problems
• Do monitoring inside the application, not outside
• Tools: Metrics, Spring Boot Actuator
• Logging
• Manually searching the logs of 100 services is not feasible
• Central log aggregation
• Filtering and analysis in realtime
• Tools: Logstash, Graylog2, Kibana, Apache Flume, fluentd
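Monitoring inside the application can be sketched with a tiny hand-rolled registry. This is a stand-in for what the Metrics library or Spring Boot Actuator provide, and the metric names are made up: the point is that the counters live in the service itself and are exposed to the monitoring system, rather than being scraped from the outside.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Minimal in-application metrics registry: thread-safe counters that the
// service updates itself and exposes for realtime monitoring.
public class AppMetrics {
    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    public void increment(String name) {
        counters.computeIfAbsent(name, k -> new LongAdder()).increment();
    }

    public long value(String name) {
        LongAdder adder = counters.get(name);
        return adder == null ? 0 : adder.sum();
    }
}
```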
52. Load balancing - Traditional Applications
• In a monolithic application, services invoke one another through language-level
method or procedure calls.
• Distributed system deployment
• Services run at fixed, well-known locations (hosts and ports)
• Using HTTP/REST or some RPC mechanism to call one another.
53. Load balancing - Microservices
• A web application frontend client need not know about all the microservice instances
that are available to it.
• An edge service (a microservice serving as a gateway) serves as the entry point to a
microservices infrastructure.
• Each client communicates directly with just a single edge service.
• There can be one dedicated edge service per client. For example, Netflix serves more
than a thousand device types—and each device type has its own dedicated edge
service that serves as its single entry point.
• Players like Netflix and Riot Games, both of which run on Amazon AWS, utilize Elastic
Load Balancers (ELB) to ensure that their edge services are available at all times.
54. Load balancing - Take away
• Edge services to handle all inbound traffic.
• Load-balance edge services.
• All internal traffic should be handled by your own tools, as this allows you to run
your environment with minimal configuration overhead.
• The most important tool for effective scaling of microservices is, not
surprisingly, load balancing.
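Handling internal traffic with your own tools can be as simple as client-side round-robin balancing over a known instance list. This is a sketch with made-up host names; a real setup would feed the list from service discovery and add health checks.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Client-side round-robin balancing sketch for internal traffic:
// each service keeps a list of peer instances and spreads its calls
// across them, with no central load balancer in the path.
public class RoundRobinBalancer {
    private final List<String> instances;
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobinBalancer(List<String> instances) {
        this.instances = List.copyOf(instances);
    }

    // Returns the next instance in rotation; thread-safe via the counter.
    public String nextInstance() {
        int i = Math.floorMod(next.getAndIncrement(), instances.size());
        return instances.get(i);
    }
}
```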
The microservices architecture replaces N monolithic application instances with N×M service instances. If each service runs in its own JVM (or equivalent), which is usually necessary to isolate the instances, then there is the overhead of M times as many JVM runtimes. Moreover, if each service runs on its own VM (e.g. an EC2 instance), as is the case at Netflix, the overhead is even higher.