1. AGILE INTEGRATION: Decomposing The Monolith
Strategies for Resilient Microservices Runtimes
Agile Integration Day
Michael Costello
Architect, Emerging Technology Practice (Enterprise Integration)
Red Hat - Red Hat Consulting, NA
3. CONTEXT: THE MONOLITH
The Monolith was a good place to be...
Once upon a time, all our applications tended to run in large enterprise application servers.
These servers packed features that let us quickly package and deploy applications, serving up a myriad of needs: DB connection pools, EJB interfaces for local and remote procedure calls, subsystems for security enforcement, advanced messaging subsystems, and whatever else your heart could truly desire over the years of various enterprise specs!
4. CONTEXT: THE MONOLITH
...but change was slow and risk was high.
Over time, packaging applications like this would mean:
● Our SDLC would grind to a halt, as change sets to our monoliths required careful wizardry
● Lack of fault isolation meant that one failure could cascade into total failure
● Lack of process isolation allowed processes that constituted little of our available feature set to consume disproportionate compute resources
● High availability became nightmarish to ensure, with costly outages often the result
5. DECOMPOSITION FOR SPEED AND AGILITY
As SOA and Microservice Architecture evolved,
we decomposed the beast
A group of smart people in the industry began to see the benefits of decomposing large monolithic applications so that we could:
● Adhere to SOLID principles across our service offerings - SOLID principles require us to segregate interfaces, adhere to the single responsibility principle, and follow some other pattern goodies
● Bring SDLC back - we were able to make changes in isolation and refocus on meeting stakeholder needs
● Fault and process isolation - as decomposition emerged, cascading faults and resource starvation were mitigated
● And along came containers - as opposed to our beastly vertically scaled monolith days, we could now deploy to compute that met our micro needs and still get all our 'ilities
6. AGILE INTEGRATION ENABLES MONOLITH DECOMPOSITION
Why we decomposed in the first place
Tenets of Agile Integration:
● API first
● Policy enforcement across these sets of APIs implies resiliency
● Metrics and Monitoring
● Tactical decomposition via the single responsibility principle, interface segregation, and dependency inversion
● Fuse on OpenShift provides a location-transparent mechanism for service discovery and service conversation across the enterprise...leverage FTW!!!
7. EVOLVING DECOMPOSITION
MicroServices Architecture isn’t free
We have A LOT more to manage - services come with their own stacks and differing rates of change. Dependencies form as services rely on others as sources of truth and functionality. Managing all of the moving parts involves new requirements.
● Fault Tolerance: We must design the overall system to survive
the failure of an individual component at any time
● Testing Maturity: We must measure the viability of service
offerings, often across invocations
● DevOps Maturity: We must manage independent pipelines and
monitor a dynamic landscape of runtimes
● Network Capacity: Granular decomposition often means explosive growth in wire traffic and service protocols
8. MSA - CROSS CUTTING CONCERNS
How can we observe and manage the whole flock of services?
As our microservices architecture evolves, the single-responsibility nature of each individual service leaves some cross-cutting concerns unaddressed:
● Cascading failure prevention
● Traffic management and flow control
● AuthN/AuthZ and policy enforcement
● Distributed Tracing
● Log Aggregation
● Application Monitoring
● Externalized Configuration
9. MICROSERVICE RESILIENCE
Runtime concerns for a highly decomposed system
Dependency graphs, the need for authn/authz and policy enforcement across service invocations, and the need for resiliency across the cluster all imply a need for approaches to address these cross-cutting concerns.
The following patterns are useful:
● Policy Enforcement
● Distributed Tracing
● Traffic Routing
● Flow Control Mechanisms
10. SERVICE MESH
Enabling a Communications Control Plane
“Often used to describe the network of
microservices that make up such applications and
the interactions between them. As a service mesh
grows in size and complexity, it can become harder
to understand and manage. Its requirements can
include discovery, load balancing, failure recovery,
metrics, and monitoring, and often more complex
operational requirements such as A/B testing,
canary releases, rate limiting, access control, and
end-to-end authentication”
https://istio.io/docs/concepts/what-is-istio/overview.html
11. SERVICE MESH: Istio
A Sidecar Pattern leveraging the Envoy Proxy
Istio is deployed as a sidecar in containers, leveraging the Envoy proxy to form a communications control plane.
● Capable of leveraging a number of wire protocols (HTTP/2, gRPC, HTTP/1.1)
● Traffic Management - control the flow of traffic and API calls between services
● Visibility into dependencies between services and the flow of traffic between them
● Policy Enforcement - service identity and security - apply authn/authz between service calls
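As a concrete illustration of the traffic-management bullet, a weighted-routing rule can shift a slice of traffic to a new version of a service. A minimal sketch using Istio's VirtualService resource (the service name `reviews` and its subsets are hypothetical; subsets would be defined in an accompanying DestinationRule):

```yaml
# Send 90% of traffic to v1 and 10% to v2 of a hypothetical "reviews" service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Adjusting the weights over time gives a canary release without touching application code.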
13. POLICY ENFORCEMENT
Request rate limiting, for example
Rate limiting is used to protect upstream application servers from being overwhelmed by too many user requests at the same time. It serves several purposes:
● Security - for example, to slow down brute-force password-guessing attacks
● Protection against DDoS attacks - by limiting the incoming request rate
● Flow control - to prevent an upstream service from overwhelming a downstream service with unintended traffic
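The idea above can be sketched as a simple fixed-window limiter. This is an illustrative toy, not any particular gateway's implementation; the class and parameter names are invented for the example:

```java
// Minimal fixed-window rate limiter sketch (illustrative, not production code).
// A bucket holds up to `capacity` tokens and refills fully every `periodMillis`;
// each request consumes one token, and requests that find no token are rejected.
public class TokenBucket {
    private final int capacity;        // max requests allowed per period
    private final long periodMillis;   // length of the window
    private int tokens;
    private long windowStart;

    public TokenBucket(int capacity, long periodMillis) {
        this.capacity = capacity;
        this.periodMillis = periodMillis;
        this.tokens = capacity;
        this.windowStart = System.currentTimeMillis();
    }

    /** Returns true if the request is allowed, false if it should be throttled. */
    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= periodMillis) {   // new window: refill the bucket
            tokens = capacity;
            windowStart = now;
        }
        if (tokens > 0) {
            tokens--;
            return true;
        }
        return false;                              // over the limit for this window
    }

    public static void main(String[] args) {
        TokenBucket limiter = new TokenBucket(2, 1000);
        System.out.println(limiter.tryAcquire()); // true
        System.out.println(limiter.tryAcquire()); // true
        System.out.println(limiter.tryAcquire()); // false: limit reached
    }
}
```

Real gateways use more sophisticated variants (token or leaky buckets, sliding windows) to avoid traffic bursts at window boundaries.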
14. ROUTE POLICY ENFORCEMENT
Rate Limiting as an Application Concern
The Throttler Pattern allows you to ensure that a specific endpoint does not get overloaded,
or that we don't exceed an agreed SLA with some external service.
The following example shows a throttling policy applied to a Camel route:
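A minimal sketch of the Throttler EIP in Camel's XML DSL (Camel 2.x syntax; the endpoint URIs are illustrative):

```xml
<route>
  <!-- Accept incoming orders over HTTP -->
  <from uri="jetty:http://0.0.0.0:8080/orders"/>
  <!-- Allow at most 100 messages per 1000 ms through to the queue -->
  <throttle timePeriodMillis="1000">
    <constant>100</constant>
    <to uri="activemq:queue:orders"/>
  </throttle>
</route>
```

Messages beyond the limit are delayed rather than dropped, which is what lets the route honor an SLA with the downstream system.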
15. RATE LIMIT POLICY VIA PROXY
Rate Limiting as an Infrastructure Concern
Delegate rate limit policy enforcement to a Service Mesh
● Abstract away the details of different policy and
telemetry backend systems
● Move policy decisions out of the app layer and into
configuration instead
● For example, the Mixer component of Istio provides
three core features: precondition checking, quota
management, and telemetry reporting
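For example, in the Mixer-based configuration model (Istio releases before 1.5), a memquota handler caps request counts. The fragment below is illustrative and omits the accompanying quota instance, rule, QuotaSpec, and QuotaSpecBinding that a full setup needs:

```yaml
# Mixer memquota handler: at most 500 requests per second, cluster-wide.
apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
  - name: requestcount.quota.istio-system
    maxAmount: 500
    validDuration: 1s
```

The point is the architectural shift: the limit lives in mesh configuration, so it can change without rebuilding or redeploying any service.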
16. POLICY ENFORCEMENT AT THE EDGES
API Gateway as a first line of defense
3scale gives you a variety of standard options for API authentication
and security, which can be used alone or in combination to issue
credentials and control access:
● Standard API keys
● Application ID and key pair
● OAuth v1.0 and 2.0
3scale's access control features let you restrict access to specific endpoints, methods, and services, and apply access policies easily for groups of users.
The 3scale gateway can also enforce rate limits for API usage and control traffic flow for groups of developers.
18. CIRCUIT BREAKING
What is a Circuit Breaker?
“You wrap a protected function call in a circuit
breaker object, which monitors for failures. Once the
failures reach a certain threshold, the circuit breaker
trips, and all further calls to the circuit breaker
return with an error, without the protected call being
made at all. Usually you'll also want some kind of
monitor alert if the circuit breaker trips.”
- Martin Fowler
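Fowler's description can be sketched in a few lines of Java. This is an illustrative toy (it omits the half-open state and reset timeout that production breakers such as Hystrix add); the names are invented for the example:

```java
import java.util.function.Supplier;

// Minimal circuit breaker sketch following Fowler's description (illustrative).
// After `threshold` consecutive failures the breaker opens, and further calls
// return the fallback immediately without invoking the protected function.
public class CircuitBreaker {
    private final int threshold;
    private int failures = 0;
    private boolean open = false;

    public CircuitBreaker(int threshold) {
        this.threshold = threshold;
    }

    public <T> T call(Supplier<T> protectedCall, T fallback) {
        if (open) {
            return fallback;                 // fail fast: circuit is open
        }
        try {
            T result = protectedCall.get();
            failures = 0;                    // a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            failures++;
            if (failures >= threshold) {
                open = true;                 // trip the breaker
            }
            return fallback;
        }
    }

    public boolean isOpen() {
        return open;
    }
}
```

Once open, a real breaker would also schedule the "monitor alert" Fowler mentions and periodically let a trial call through to probe for recovery.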
19. CIRCUIT BREAKING MICROSERVICES
Apache Camel Hystrix EIP
The Hystrix EIP provides integration with Netflix Hystrix, to be used as a circuit breaker in Camel routes.
Hystrix is a latency and fault tolerance
library designed to isolate points of
access to remote systems, services
and 3rd party libraries, stop cascading
failure and enable resilience in
complex distributed systems where
failure is inevitable.
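A sketch of the EIP in Camel's XML DSL (Camel 2.18+; the endpoint URIs and fallback message are illustrative):

```xml
<route>
  <from uri="direct:inventory"/>
  <hystrix>
    <!-- The protected remote call -->
    <to uri="http4://inventory-service/api/stock"/>
    <!-- Returned when the call fails or the circuit is open -->
    <onFallback>
      <transform>
        <constant>Inventory service unavailable - using cached response</constant>
      </transform>
    </onFallback>
  </hystrix>
</route>
```

When the remote call fails repeatedly, Hystrix opens the circuit and the route answers from the fallback block instead of piling load onto the failing service.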
22. DISTRIBUTED TRACING
Observability is key for performance analysis
Container-based applications are often deployed as
several components that work together as a system.
A trace tells the story of a transaction as it propagates
through a distributed system. So, a tracing
implementation must piece together information
about a transaction using data gathered from several
components of a system.
23. OPEN TRACING EVOLUTION
A community-driven open standard for distributed tracing
OpenTracing: by offering consistent, expressive, vendor-neutral APIs for popular platforms, OpenTracing makes it easy for developers to add (or switch) tracing implementations.
● http://opentracing.io
● https://zipkin.io/
● https://www.jaegertracing.io/
● https://github.com/apache/camel/blob/master/components/camel-opentracing/src/main/docs/opentracing.adoc
25. DISTRIBUTED TRACING WITH ISTIO
Service Mesh simplifies application instrumentation for tracing
Istio-enabled applications can be configured to collect trace spans using Zipkin or Jaeger.
Istio's proxies automatically send spans, but they need some hints to tie together the entire trace.
Applications need to propagate certain HTTP headers so that when the proxies send span information to Zipkin or Jaeger, the spans can be correlated correctly into a single trace.
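The headers in question are the Zipkin/B3 headers listed in Istio's tracing documentation (x-request-id, x-b3-traceid, x-b3-spanid, x-b3-parentspanid, x-b3-sampled, x-b3-flags, x-ot-span-context). A minimal sketch of the copy-through an application must do (class and method names are invented):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of trace-header propagation (illustrative). The Envoy sidecar puts
// these Zipkin/B3 headers on inbound requests; the application must copy them
// onto every outbound call so the proxies can stitch the spans into one trace.
public class TraceHeaders {
    static final List<String> B3_HEADERS = List.of(
        "x-request-id",
        "x-b3-traceid",
        "x-b3-spanid",
        "x-b3-parentspanid",
        "x-b3-sampled",
        "x-b3-flags",
        "x-ot-span-context");

    /** Returns the subset of inbound headers that must be forwarded downstream. */
    public static Map<String, String> extractForPropagation(Map<String, String> inbound) {
        Map<String, String> outbound = new HashMap<>();
        for (String name : B3_HEADERS) {
            if (inbound.containsKey(name)) {
                outbound.put(name, inbound.get(name));
            }
        }
        return outbound;
    }
}
```

Tracing libraries for most frameworks do this automatically; the sketch just makes explicit what "propagate the headers" means.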
26. ONE MORE THING:
- Schedule a Discovery Session
plus.google.com/+RedHat
linkedin.com/company/red-hat
youtube.com/user/RedHatVideos
facebook.com/redhatinc
twitter.com/RedHatNews
- OC RHUG: https://www.meetup.com/Red-Hat-Orange-County-CA/