Cognizant 20-20 Insights | July 2017
Building a High
Performance
Reactive
Microservices
Architecture
As digital rapidly reshapes
every aspect of business, IT
organizations need to adopt
new tools, techniques and
methodologies. Reactive
microservices are fast emerging
as a compelling, viable alternative.
Executive Summary
Growing demand for real-time data from mobile
and connected devices is generating massive pro-
cessing loads on enterprise systems, increasing
both infrastructure and maintenance costs. Digital
business is overturning conventional operating
models, requiring IT organizations to quickly mod-
ernize their application architectures and update
infrastructure strategies to meet ever-growing and
ever-changing business demands.
To meet these elevated demands, the logical step
is to re-architect applications from collaborating
components in a monolithic setting to discrete,
modular services that interact remotely with
one another. This has led to the emergence and
adoption of microservices architectures.
Microservices-based application architectures
are composed of small, independently versioned
and scalable, functionally-focused services that
communicate with each other using standard
protocols with well-defined interfaces. Blocking,
synchronous interaction between microservices,
which is today's standard approach, may not be
the optimal choice for microservices invocation.
One of the key characteristics of the microservices
architecture is technology diversity – and support
for it. This diversity extends to the programming languages
in which individual services are written, to the way the
capabilities embodied by the microservices components
of the reference architecture are implemented, and to the
communication patterns between these components.
Systems or applications built on reactive principles
are flexible, loosely-coupled, reliable and scalable.
They are tolerant to failures and even when fail-
ures occur, they respond elegantly rather than with
a catastrophic crash. Reactive systems are highly
responsive, offering users a great user experience.
We believe that implementation of microservices
architecture using reactive applications is a viable
approach and offers unique possibilities. This white
paper discusses aspects of reactive microservices
architecture implementation and offers insights
into how it helps to mitigate the growing demand
for building scalable and resilient systems. It also
illustrates a typical use case for microservices
implementation, compares the performance
of nonreactive and reactive implementation
alternatives and, finally, concludes with a
comparison that reveals how reactive microservices
perform better than nonreactive microservices.
DEFINING MICROSERVICES
ARCHITECTURE
Microservices architecture is an architectural style
for developing software applications as a suite of
small, autonomous services – optionally deployed
on different tech stacks, and written in different
programming languages – that work together, each
running in its own process. The services
communicate using a lightweight communication
protocol, usually either HTTP request-response
with resource APIs or lightweight messaging.
Microservices architectural styles1 are best
understood in comparison to the monolithic architectural
style, an approach to application development
where an entire application is deployed and scaled
as a single deployable unit. In monoliths, business
logic is packaged in a single bundle and run as a
single process. These applications are
scaled by running multiple instances horizontally.2
Reactive Systems
Reactive systems are flexible, loosely-coupled
and scalable. This makes them easier to develop
and amenable to change. They are significantly
more tolerant of failures and, as mentioned
above, they respond elegantly even when fail-
ures happen, rather than catastrophically failing.
Reactive systems are highly responsive, offering
users a great user experience.
The Reactive Manifesto3
outlines characteristics
of reactive systems based on four core principles:
•	 Responsive: Responds in a timely manner,
provides rapid and consistent response, estab-
lishes reliable upper bounds and delivers a
consistent quality of service.
•	 Resilient: Is responsive in the face of failure.
Resilience is achieved through replication,
containment, isolation and delegation.
•	 Elastic: Is responsive under varying work-
loads and can react to changes in request
load by increasing or decreasing resources
allocated to service these requests.
•	 Message-driven: Relies on asynchronous
message-passing to establish a boundary
between components that ensures loose cou-
pling, isolation and location transparency.
Reactive Microservices
Implementing microservices with reactive principles
offers unique possibilities whereby each
component of the microservices reference
architecture, including the microservices components,
is individually developed, released,
deployed, scaled, updated and retired. It also
infuses the required resilience into the system
to avoid cascading failures, which keeps the
system responsive even when failures occur.
Moreover, the asynchronous communication
facilitated by reactive systems copes with the
interaction challenges as well as the load variations
of modern systems.
Reactive microservices can be implemented
using frameworks such as Node.js,4 Qbit,5 Spring
Cloud Netflix,6 etc. But we propose Vert.x7 as the
foundational toolkit for building reactive
microservices.
IMPLEMENTING REACTIVE
MICROSERVICES ARCHITECTURE
VIA VERT.X
Vert.x is a toolkit for building distributed and reactive
applications on top of the Java Virtual
Machine using an asynchronous, nonblocking
development model. Vert.x is designed to implement
an Event Bus – a lightweight message broker –
which enables different parts of the application
to communicate in a nonblocking and thread-safe
manner. Parts of it were modeled on the Actor
model exhibited by Erlang8 and Akka.9
Vert.x is also designed to take full advantage of
today's multi-core processors and to meet highly
concurrent programming demands.
As a toolkit, Vert.x can be used in many contexts
– as a stand-alone application or embedded in
another application. Figure 1 (next page) details
the key features of Vert.x, revealing why it is ide-
ally suited for microservices architecture.
Vert.x for Microservices Architecture
Implementation
Vert.x offers various components for building reactive
microservices-based applications. Getting there
requires understanding a few key Vert.x
constructs:
•	 A Vert.x Verticle is a logical unit of code that
can be deployed on a Vert.x instance. It is com-
parable to actors in the Actor Model. A typical
Vert.x application will be composed of many
Verticle instances in each Vert.x instance. The
Verticles communicate with each other by
sending messages over the Event Bus.
•	 A Vert.x Event Bus is a lightweight distributed
messaging system that allows different parts
of applications, or different applications and
services (written in different languages) to
communicate with each other in a loosely
coupled way. The Event Bus supports
publish-subscribe messaging, point-to-point
messaging and request-response messaging.
Verticles can send and listen to addresses
on the Event Bus. An address is similar to a
named channel. When a message is sent to a
given address, all Verticles that listen on that
address receive the message.
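Before turning to Figure 2, the following is a minimal sketch of these two constructs working together – two Verticles exchanging a request and a reply over the Event Bus. It uses the Vert.x 3-style Java API current at the time of writing; the address name flight.schedule and the payload fields are illustrative assumptions, not code from our reference implementation.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

// Listens on an Event Bus address and replies to each request.
class ScheduleVerticle extends AbstractVerticle {
  @Override
  public void start() {
    vertx.eventBus().consumer("flight.schedule", message -> {
      JsonObject query = (JsonObject) message.body();
      // Reply asynchronously; no thread is blocked while waiting.
      message.reply(new JsonObject()
          .put("flight", query.getString("flight"))
          .put("departure", "2017-07-01T10:30:00Z"));
    });
  }
}

// Sends a request to the same address and handles the reply.
class ClientVerticle extends AbstractVerticle {
  @Override
  public void start() {
    vertx.eventBus().send("flight.schedule",
        new JsonObject().put("flight", "AI101"),
        reply -> {
          if (reply.succeeded()) {
            System.out.println("Schedule: " + reply.result().body());
          }
        });
  }
}

public class EventBusExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new ScheduleVerticle(),
        done -> vertx.deployVerticle(new ClientVerticle()));
  }
}
```

In a clustered deployment the same code applies unchanged; the Event Bus simply spans all nodes of the cluster.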
Figure 2 summarizes the key microservices
requirements, the features provided by Vert.x to
implement these requirements, and how they fit
into our reference implementation.
Vert.x Key Features (Figure 1)

•	 Lightweight: The Vert.x core is small (around 650kB) and lightweight in terms of distribution and runtime footprint. It can be entirely embedded in existing applications.
•	 Scalable: Vert.x can scale both horizontally and vertically. Vert.x can form clusters through Hazelcast10 or JGroups,11 and it is capable of using all the CPUs of a machine and all the cores in a CPU.
•	 Polyglot: Vert.x can execute Java, JavaScript, Ruby, Python and Groovy. Vert.x components written in different languages can seamlessly talk to each other through the event bus.
•	 Fast, Event-Driven and Non-blocking: None of the Vert.x APIs block threads; hence an application using Vert.x can handle a great deal of concurrency with a small number of threads. Vert.x provides specialized worker threads to handle blocking calls.
•	 Modular: The Vert.x runtime is divided into multiple components; only the components required and applicable for a particular implementation need to be used.
•	 Unopinionated: Vert.x is not a restrictive framework or container and it does not advocate a single correct way to write an application. Instead, Vert.x provides different modules and lets developers create their own applications.

Mapping Microservices Requirements to Vert.x Features & Fitment with Our Reference Architecture (Figure 2)

SERVICE COMMUNICATION

Required Feature: Microservices interact with other microservices and exchange data with various operational and governance components in the ecosystem. Externally consumable services need to expose an API for consumption by authorized consumers, and an edge server is required that all external traffic must pass through. The edge server can reuse the dynamic routing and load balancing capabilities based on the service discovery component described below. Internal communication between microservices and operational components, or even between the microservices themselves, may use synchronous or asynchronous message exchange patterns.

Vert.x Support: Vert.x-Web may be used to implement server-side web applications, RESTful HTTP microservices, real-time (server push) web applications, etc. These features can be leveraged to create an edge server without introducing an external component to fulfill edge server requirements. Vert.x supports Async RPC on the (clustered) Event Bus; since Vert.x can expose a service on the Event Bus through an address, the service methods can be called through Async RPC.

Reference Implementation: We have used the Vert.x-Web module to write our own edge server. The edge server may additionally implement API gateway features for API management and governance. We have leveraged the Async RPC feature to implement inter-service communication and data exchange between microservices and operational components.
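To illustrate this communication pattern, here is a minimal sketch of a Vert.x-Web route acting as an edge server in front of a service exposed on the Event Bus. The route, the address name and the payload are illustrative assumptions; for simplicity it uses a plain Event Bus request rather than a generated Async RPC service proxy, and it follows the Vert.x 3-style API.

```java
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.Router;

public class EdgeServerSketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Core service exposed on an Event Bus address (assumed name).
    vertx.eventBus().consumer("service.flightinfo", msg ->
        msg.reply(new JsonObject().put("flight", msg.body()).put("status", "ON TIME")));

    // Edge server built with Vert.x-Web: external HTTP traffic is
    // translated into an asynchronous Event Bus request.
    Router router = Router.router(vertx);
    router.get("/flights/:id").handler(ctx ->
        vertx.eventBus().send("service.flightinfo", ctx.request().getParam("id"), reply -> {
          if (reply.succeeded()) {
            ctx.response()
               .putHeader("content-type", "application/json")
               .end(reply.result().body().toString());
          } else {
            ctx.response().setStatusCode(502).end();
          }
        }));

    vertx.createHttpServer().requestHandler(router::accept).listen(8080);
  }
}
```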
SERVICE SELF-REGISTRATION AND SERVICE DISCOVERY

Required Feature: Tracking the host and port data of services is simple when there are few services, due to the lower service count and the consequently lower rate of change. A large number of modular microservices deployed independently of each other, however, produces a significantly more complex system landscape, as these services come and go on a continuous basis. Such rapid configuration changes are hard to manage manually. Instead of manually tracking the deployed microservices and their host and port information, we need a service discovery function that enables microservices, through its API, to self-register in a central service registry on start-up. Every microservice uses the registry to discover the services it depends on.

Vert.x Support: Vert.x provides a service discovery component to publish and discover various resources. Vert.x supports several service types, including Event Bus services (service proxies), HTTP endpoints, message sources and data sources. Vert.x also supports service discovery through Kubernetes,12 Docker Links,13 Consul14 and Redis15 back-ends.

Reference Implementation: During start-up, microservices register themselves in a service registry that we have implemented using a distributed Hazelcast Map,16 accessible through the Service Discovery component. To achieve high throughput, our implementation includes client-side caching of service proxies, thereby minimizing the latency added by service discovery on each client-side request.

LOCATION TRANSPARENCY

Required Feature: In a dynamic system landscape, new services are added, new instances of existing services are provisioned and existing services are decommissioned or deprecated at a rapid rate. Service consumers need to be constantly aware of these deployment changes and of the service routing information that determines the physical location of a microservice at any given point in time. Manually updating consumers with this information is time-consuming and error-prone. Given a service discovery function, routing components can use the discovery API to look up where the requested microservice is deployed, and load balancing components can decide which instance to route the request to if multiple instances are deployed for the requested service.

Vert.x Support: Vert.x exposes services on the Event Bus through an address. This feature provides location transparency: any client system with access to the Event Bus can call the service by name and be routed to the appropriate available instance.

Reference Implementation: We have exposed our microservices as Event Bus addresses whose methods can be called using Asynchronous RPC. This has a positive impact on performance during inter-microservice communication compared with synchronous HTTP/S calls between microservices.

LOAD BALANCING

Required Feature: Given the service discovery function, and assuming that multiple instances of a microservice are running concurrently, routing components can use the discovery API to look up where the requested microservices are deployed, and load balancing components can decide which instance of the microservice to route the request to.

Vert.x Support: Multiple instances of the same service deployed in a Vert.x cluster refer to the same address. Calls to the service are therefore automatically distributed among the instances. This is a powerful feature that provides built-in load balancing for service calls.

Reference Implementation: We have exposed our microservices as Event Bus addresses whose methods can be called using Asynchronous RPC. Load balancing is achieved automatically when multiple instances of the same service run concurrently on different nodes of the cluster.

CENTRALIZED CONFIGURATION MANAGEMENT

Required Feature: With large numbers of microservices deployed as multiple instances on multiple servers, application-specific configuration becomes difficult to manage. Instead of a local configuration per deployed unit (i.e., microservice), centralized management of configuration is desirable from an operational standpoint.

Vert.x Support: Vert.x does not ship with a centralized configuration management server like Netflix Archaius.17 We can leverage distributed maps to keep centralized configuration information for the different microservices. Updates to the centralized information can be propagated to the consumer applications as Event Bus messages.

Reference Implementation: We have leveraged a Hazelcast Distributed Map18 to maintain the microservice applications' property and parameter values in one place. Any update to the map sends update events to the applicable clients through the Event Bus.
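A minimal sketch of the centralized-configuration idea just described, using Vert.x's cluster-wide AsyncMap (backed by Hazelcast when the Hazelcast cluster manager is used) as an approximation of our Hazelcast distributed map; the map name, the property and the Event Bus address are illustrative assumptions, and the code assumes a clustered Vert.x instance.

```java
import io.vertx.core.Vertx;
import io.vertx.core.shareddata.AsyncMap;

public class CentralConfigSketch {
  // Illustrative helper: stores one configuration property in a
  // cluster-wide map and notifies interested services over the Event Bus.
  static void updateProperty(Vertx vertx, String key, String value) {
    // Requires a clustered Vert.x instance; fails otherwise.
    vertx.sharedData().<String, String>getClusterWideMap("central-config", mapResult -> {
      if (mapResult.succeeded()) {
        AsyncMap<String, String> config = mapResult.result();
        config.put(key, value, done -> {
          if (done.succeeded()) {
            // Consumers subscribed to this address pick up the change.
            vertx.eventBus().publish("config.changed", key + "=" + value);
          }
        });
      }
    });
  }
}
```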
HIGH AVAILABILITY, FAIL-OVER AND FAILURE MANAGEMENT

Required Feature: Since microservices are interconnected, chains of failure in the system landscape must be avoided. If a microservice that a number of other microservices depend on fails, the dependent microservices may also start to fail, and so on. If not handled properly, large parts of the system landscape may be affected by a single failing microservice, resulting in a fragile system landscape.

Vert.x Support: Vert.x provides a circuit breaker component out of the box, which helps avoid cascading failures. It also lets the application react to and recover from failure states. The circuit breaker can be configured with a timeout, a fallback-on-failure behavior and a maximum number of retries. This ensures that if a service goes down, the failure is handled in a predefined manner. Vert.x also supports automatic fail-over: if one instance dies, a back-up instance is automatically deployed and starts the modules deployed by the first instance. Vert.x optionally supports HA groups and network partitions.

Reference Implementation: In our implementation, we have encapsulated the core service calls made from the composite service in a circuit breaker pattern to avoid cascading failures.

MONITORING AND MANAGEMENT

Required Feature: An appropriate monitoring and management toolkit is needed to keep track of the state of the microservice applications and the nodes on which they are deployed. With a large number of microservices, there are more potential failure points. Centralized analysis of the logs of the individual microservices, and health monitoring of the services and of the virtual machines on which they are hosted, are key to ensuring a stable system landscape. Given that circuit breakers are in place, they can be monitored for status and to collect runtime statistics to assess the health of the system landscape and its current usage. This information can be collected and displayed on dashboards, with options for setting up automatic alarms against configurable thresholds.

Vert.x Support: Vert.x supports runtime metric collection (e.g., Dropwizard,19 Hawkular,20 etc.) for various core components and exposes these metrics over JMX or as events on the Event Bus. Vert.x can be integrated with software like the ELK stack for centralized log analysis.

Reference Implementation: Our custom user interface provides near-real-time monitoring data to the administrator by leveraging web sockets. Monitoring console features include:
•	 A dynamic list showing the availability status of the cluster nodes.
•	 A dynamic list showing all the deployed services and their status.
•	 Memory and CPU utilization information from each node.
•	 All the circuit breaker states currently active in the cluster.
We used an Elasticsearch, Logstash and Kibana (ELK) stack for centralized log analysis.

CALL TRACING

Required Feature: From an operational standpoint, with numerous services deployed as independent processes communicating with each other over a network, components are required for tracing the service call chain across processes and hosts to precisely identify individual service performance.

Vert.x Support: Vert.x can be integrated with software like Zipkin for call tracing.

Reference Implementation: We have used Zipkin for call tracing.

SERVICE SECURITY

Required Feature: To protect the exposed API services, the OAuth 2.0 standard is recommended.

Vert.x Support: Vert.x supports OAuth 2.0, Shiro and JWT auth, as well as authentication implementations backed by JDBC and MongoDB.

Reference Implementation: We have leveraged the Vert.x OAuth 2.0 security module to protect the services exposed by the edge server.

Figure 2
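As an illustration of the failure-management requirement above, here is a minimal sketch of the Vert.x circuit breaker guarding a composite-to-core call; the breaker name, address, thresholds and fallback payload are illustrative assumptions rather than our production settings, and the code uses the Vert.x 3-style vertx-circuit-breaker API.

```java
import io.vertx.circuitbreaker.CircuitBreaker;
import io.vertx.circuitbreaker.CircuitBreakerOptions;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class CircuitBreakerSketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Breaker guarding calls from the composite service to a core service.
    CircuitBreaker breaker = CircuitBreaker.create("flight-status-breaker", vertx,
        new CircuitBreakerOptions()
            .setMaxFailures(5)       // open the circuit after 5 failures
            .setTimeout(2000)        // fail calls that take longer than 2 seconds
            .setResetTimeout(10000)  // move to half-open and retry after 10 seconds
            .setFallbackOnFailure(true));

    breaker.<JsonObject>executeWithFallback(future ->
        // The guarded operation: an asynchronous Event Bus request to a
        // core service (address name is illustrative).
        vertx.eventBus().send("service.status", new JsonObject().put("flight", "AI101"),
            reply -> {
              if (reply.succeeded()) {
                future.complete((JsonObject) reply.result().body());
              } else {
                future.fail(reply.cause());
              }
            }),
        // Fallback keeps the composite response degraded but available.
        failure -> new JsonObject().put("status", "UNKNOWN")
    ).setHandler(result -> System.out.println("Status: " + result.result()));
  }
}
```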
IMPLEMENTATION: A USE CASE
The use case we considered for our reference
implementation involves an externally consum-
able microservice which, when invoked, calls
five independent microservices, composes the
response from these services and returns the
composite response to the caller. The compos-
ite service – FlightInfoService – is meant to be
consumed by web portals and mobile apps and
is designed to fetch information pertaining to a
particular flight, including the flight schedule,
current status, information on the airline, the
specific aircraft, and weather information at the
origin and destination airports.
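A minimal sketch of how such a composite service might fan out to the five core services and compose a single reply, using Vert.x futures; the addresses, field names and helper method are illustrative assumptions, not the paper's actual code.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.CompositeFuture;
import io.vertx.core.Future;
import io.vertx.core.json.JsonObject;

// Composite service: fans one request out to five core services over the
// Event Bus and merges the replies into a single response.
public class FlightInfoVerticle extends AbstractVerticle {

  @Override
  public void start() {
    vertx.eventBus().consumer("service.flightinfo", request -> {
      JsonObject query = (JsonObject) request.body();

      Future<JsonObject> schedule = call("service.schedule", query);
      Future<JsonObject> status   = call("service.status", query);
      Future<JsonObject> aircraft = call("service.aircraft", query);
      Future<JsonObject> airport  = call("service.airport", query);
      Future<JsonObject> airline  = call("service.airline", query);

      // Wait for all five asynchronous calls; no thread blocks in the meantime.
      CompositeFuture.all(schedule, status, aircraft, airport, airline)
          .setHandler(done -> {
            if (done.succeeded()) {
              request.reply(new JsonObject()
                  .put("schedule", schedule.result())
                  .put("status", status.result())
                  .put("aircraft", aircraft.result())
                  .put("airport", airport.result())
                  .put("airline", airline.result()));
            } else {
              request.fail(500, done.cause().getMessage());
            }
          });
    });
  }

  // Helper: wraps an Event Bus request-reply call in a Future.
  private Future<JsonObject> call(String address, JsonObject query) {
    Future<JsonObject> future = Future.future();
    vertx.eventBus().send(address, query, reply -> {
      if (reply.succeeded()) {
        future.complete((JsonObject) reply.result().body());
      } else {
        future.fail(reply.cause());
      }
    });
    return future;
  }
}
```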
Our Approach
•	 Setup: Composite and core services, including
all operational governance components (for
request routing, discovery, call tracing, configu-
ration management, centralized log monitoring
and security), were deployed in separate dedi-
cated virtual machines as single Java processes.
•	 Process: Apache JMeter was used to invoke the FlightInfoService composite service. The three scenarios (Spring Cloud Netflix–Sync, Spring Cloud Netflix–Async-REST and Vert.x) were executed serially in the same environment, with equivalent components running on the same virtual machines.
•	 Configuration settings: Composite and core ser-
vices including all governance components were
executed with their default configuration settings.
Our Technology Stack
We implemented this microservice ecosystem
using two technology stacks, with each adopting
different message exchange patterns.
•	 We chose Spring Cloud Netflix for implementing the scenario where the composite service invokes the core services synchronously. Spring Cloud Netflix provides Netflix OSS integrations for Spring Boot apps through auto-configuration and binding to the Spring Environment and other Spring programming model idioms. The stack includes implementations for service discovery (Eureka), circuit breaker (Hystrix), intelligent routing (Zuul) and client-side load balancing (Ribbon) (see Figure 3).
Figure 3: Spring Cloud Netflix Implementation – an edge server (Zuul) routes external requests to the composite Flight Info service (C1), which calls the core services Schedule (S1), Status (S2), Aircraft (S3), Airport (S4) and Airline (S5) through Ribbon/Hystrix load balancing and circuit breaking, each service running in its own microservices container (MC); supporting components include discovery (Eureka), centralized configuration (Spring Cloud Config), log monitoring (ELK), call tracing (Zipkin) and security (OAuth 2.0).
•	 We used Vert.x for implementing the reactive
and asynchronous message exchange pattern
between composite and core services (see
Figure 4). The microservices were implemented
as Vert.x Verticles. Details of how we imple-
mented the microservices are discussed above.
A reactive microservices architecture implementation using Vert.x has the following characteristics, some of which are particularly desirable for enterprise-scale microservices adoption:
•	 All components in the Vert.x microservice ecosystem (including the core and composite microservices and the different operational components) communicate with each other through the Event Bus, implemented over a Hazelcast cluster (a minimal clustering sketch follows this list). Internal service communication (i.e., between composite and core services) uses Async RPC, resulting in significant throughput improvement over synchronous HTTP, which is the default in Spring Cloud Netflix (see Figure 5).
•	 Vert.x supports polyglot programming, imply-
ing that individual microservices can be
written in different languages including Java,
JavaScript, Groovy, Ruby and Ceylon.
•	 Vert.x provides automatic fail-over and load
balancing out of the box.
•	 Vert.x is a toolkit enabling many components
of a typical microservices reference archi-
tecture to be built. The microservices are
deployed as Verticles.
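The clustering sketch referenced in the first item above: a minimal example of starting a clustered Vert.x node with the Hazelcast cluster manager, so that the Event Bus spans all nodes. It assumes the vertx-hazelcast dependency is on the classpath; the address name is illustrative.

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

public class ClusteredNodeSketch {
  public static void main(String[] args) {
    // Each JVM started this way joins the same Hazelcast-backed cluster,
    // and its Event Bus becomes part of one logical, distributed Event Bus.
    VertxOptions options = new VertxOptions()
        .setClusterManager(new HazelcastClusterManager());

    Vertx.clusteredVertx(options, started -> {
      if (started.succeeded()) {
        Vertx vertx = started.result();
        // Verticles deployed on any node can now exchange messages with
        // verticles on every other node through shared addresses.
        vertx.eventBus().consumer("service.ping", msg -> msg.reply("pong"));
      } else {
        started.cause().printStackTrace();
      }
    });
  }
}
```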
Figure 4: Our Vert.x Implementation – an edge server built on Vert.x-Web routes external traffic to the composite Flight-Info service, which reaches the core services Schedule (S1), Status (S2), Aircraft (S3), Airport (S4) and Airline (S5) over the Hazelcast-clustered Event Bus, with a Vert.x circuit breaker guarding the calls and each microservice running in its own microservices container (MC); supporting components include discovery (Vert.x), centralized configuration (Hazelcast), log monitoring (ELK), call tracing (Zipkin) and security (OAuth 2.0).

GAUGING THE TEST RESULTS

Figure 5 shows the throughput comparison between Spring Cloud Netflix (Sync and Async-REST) (as per Figure 3) and the Vert.x implementation (as per Figure 4) in our lab environment.21

Observations

•	 Spring Cloud Netflix–Sync implementation: We observed that the number of requests processed per second was the lowest of the three scenarios. This can be explained by the fact that the server request thread pools were getting exhausted, as the composite-to-core service invocations were synchronous and blocking. The processor and memory utilization were not significant; the core and composite services were blocking in I/O operations.
•	 Spring Cloud Netflix–Async-REST implementation: The async API of the Spring Framework and RxJava helped to increase the maximum number of requests processed per second. This is because the composite and core service implementations used the Spring Framework's alternative approach to asynchronous request processing.
•	 Vert.x implementation: We observed that the number of requests processed per second was the highest of the three scenarios. This is primarily due to Vert.x's nonblocking and event-driven nature. Vert.x uses an actor-based concurrency model inspired by Akka; Vert.x calls this the multi-reactor pattern, which uses all available cores on the virtual machines and runs several event loops. The edge server is based on the Vert.x-Web module, which is also nonblocking. Composite-to-core service invocations use Async RPC instead of traditional synchronous HTTP, which results in high throughput. Even the Vert.x JDBC client uses an asynchronous API to interact with databases, which is faster than making blocking JDBC calls (see the sketch below). Overall, Vert.x scales better as the number of virtual users increases.
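A minimal sketch of the asynchronous JDBC access pattern mentioned above, using the vertx-jdbc-client module; the JDBC URL, driver class and query are placeholders rather than our actual data sources.

```java
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.jdbc.JDBCClient;

public class AsyncJdbcSketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Connection settings are placeholders; the client pools connections
    // and hands work to worker threads so the event loop never blocks.
    JDBCClient client = JDBCClient.createShared(vertx, new JsonObject()
        .put("url", "jdbc:hsqldb:mem:flights")
        .put("driver_class", "org.hsqldb.jdbcDriver"));

    client.getConnection(conn -> {
      if (conn.failed()) {
        conn.cause().printStackTrace();
        return;
      }
      // The query runs asynchronously; the handler is invoked with the
      // result set once the worker thread completes the JDBC call.
      conn.result().query("SELECT flight_no, status FROM flight_status", rows -> {
        if (rows.succeeded()) {
          rows.result().getRows().forEach(System.out::println);
        }
        conn.result().close();
      });
    });
  }
}
```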
LOOKING FORWARD
Reactive principles are not new. They have been used, proven and hardened over many years. Reactive microservices effectively leverage these ideas, together with modern software and hardware platforms, to deliver loosely-coupled, single-responsibility and autonomous microservices that use asynchronous message passing for interservice communication.
Our research and use case implementation
highlight the feasibility of adopting reactive
microservices as a viable alternative to imple-
ment a microservices ecosystem for enterprise
applications, with Vert.x emerging as a compelling
reactive toolkit. Our work underscored the greater
operational efficiencies that can be obtained from
reactive microservices.
Figure 5: Throughput Comparison – throughput (requests/sec, 0 to 1,200) plotted against virtual users (0 to 600) for the Spring Cloud (Sync), Spring Cloud (Async-REST) and Vert.x implementations.
FOOTNOTES
1	 Microservices, http://www.martinfowler.com/articles/microservices.html
2	 Ibid.
3	 The Reactive Manifesto, http://www.reactivemanifesto.org/
4	 Node.js, https://nodejs.org/en/
5	 QBit, https://github.com/advantageous/qbit
6	 Spring Cloud Netflix, https://cloud.spring.io/spring-cloud-netflix/
7	 Vert.x, http://vertx.io/
8	 Erlang, https://www.erlang.org/
9	 Akka, http://akka.io/
10	 Hazelcast, https://hazelcast.org/
11	 JGroups, http://jgroups.org/
12	 Kubernetes, https://kubernetes.io/
13	 Docker Links, https://docs.docker.com/docker-cloud/apps/service-links/
14	 Consul, https://www.consul.io/
15	 Redis, https://redis.io/
16	 Hazelcast Map, http://docs.hazelcast.org/docs/3.5/manual/html/map.html
17	 Netflix Archaius, https://github.com/Netflix/archaius
18	 Hazelcast Distributed Map, http://docs.hazelcast.org/docs/3.5/manual/html/map.html
19	 Dropwizard, http://www.dropwizard.io/1.1.0/docs/
20	 Hawkular, https://www.hawkular.org/
21	 Load Generation Tool: JMeter (Single VM) [8 Core, 8 GB RAM]; Core Services: Each core service is deployed in a separate VM
with 4 Core and 4 GB RAM; Composite Service: Deployed in a separate VM with 4 core and 16 GB RAM; Edge Server: Deployed
in a separate VM with 4 core and 16 GB RAM; Operational Components: Deployed in separate VM with 4 core and 16 GB RAM.
ABOUT THE AUTHORS
Dipanjan Sengupta
Chief Architect, Software
Engineering and Architecture
Lab
Dipanjan Sengupta is a Chief Architect in the Software Engineering
and Architecture Lab of Cognizant’s Global Technology Office. He
has extensive experience in service-oriented integration, integra-
tion of cloud-based and on-premises applications, API management
and microservices-based architecture. Dipanjan has a post-graduate
degree in engineering from IIT Kanpur. He can be reached at Dipanjan.
Sengputa@cognizant.com.
Hitesh Bagchi
Principal Architect, Software
Engineering and Architecture
Lab
Hitesh Bagchi is a Principal Architect in the Software Engineering
and Architecture Lab of Cognizant’s Global Technology Office. He has
extensive experience in service-oriented architecture, API manage-
ment and microservices-based architecture, distributed application
development, stream computing, cloud computing and big data.
Hitesh has a B.Tech. in engineering from University of Calcutta. He can
be reached at Hitesh.Bagchi@cognizant.com.
Satyajit Prasad
Chainy
Senior Architect, Software
Engineering and Architecture
Lab
Satyajit Prasad Chainy is a Senior Architect in the Software Engineer-
ing and Architecture Lab of Cognizant’s Global Technology Office. He
has extensive experience with J2EE related technologies. Satyajit has
a B.E. in computer science from University of Amaravati. He can be
reached at Satyajitprasad.Chainy@cognizant.com.
Pijush Kanti Giri
Architect, Software
Engineering and Architecture
Lab
Pijush Kanti Giri is an Architect in the Software Engineering and Archi-
tecture Lab of Cognizant’s Global Technology Office. He has extensive
experience in Eclipse plugin architecture and developing Eclipse RCP
applications, API management and microservices-based architecture.
Pijush has a B.Tech. in computer science from University of Kalyani. He
can be reached at Pijush-Kanti.Giri@cognizant.com.
World Headquarters
500 Frank W. Burr Blvd.
Teaneck, NJ 07666 USA
Phone: +1 201 801 0233
Fax: +1 201 801 0243
Toll Free: +1 888 937 3277
European Headquarters
1 Kingdom Street
Paddington Central
London W2 6BD England
Phone: +44 (0) 20 7297 7600
Fax: +44 (0) 20 7121 0102
India Operations Headquarters
#5/535 Old Mahabalipuram Road
Okkiyam Pettai, Thoraipakkam
Chennai, 600 096 India
Phone: +91 (0) 44 4209 6000
Fax: +91 (0) 44 4209 6060
© Copyright 2017, Cognizant. All rights reserved. No part of this document may be reproduced, stored in a retrieval system, transmitted in any form or by any means, electronic, mechanical,
photocopying, recording, or otherwise, without the express written permission from Cognizant. The information contained herein is subject to change without notice. All other trademarks
mentioned herein are the property of their respective owners.
TL Codex 2654
ABOUT COGNIZANT
Cognizant (NASDAQ-100: CTSH) is one of the world’s leading professional services companies, transforming clients’ business, operating and
technology models for the digital era. Our unique industry-based, consultative approach helps clients envision, build and run more innova-
tive and efficient businesses. Headquartered in the U.S., Cognizant is ranked 205 on the Fortune 500 and is consistently listed among the
most admired companies in the world. Learn how Cognizant helps clients lead with digital at www.cognizant.com or follow us @Cognizant.
Más contenido relacionado

La actualidad más candente

Cloud Native Computing: What does it mean, and is your app Cloud Native?
Cloud Native Computing: What does it mean, and is your app Cloud Native?Cloud Native Computing: What does it mean, and is your app Cloud Native?
Cloud Native Computing: What does it mean, and is your app Cloud Native?
Michael O'Sullivan
 
Microservices, Containers, Docker and a Cloud-Native Architecture in the Midd...
Microservices, Containers, Docker and a Cloud-Native Architecture in the Midd...Microservices, Containers, Docker and a Cloud-Native Architecture in the Midd...
Microservices, Containers, Docker and a Cloud-Native Architecture in the Midd...
Kai Wähner
 
Microservice Scars - Alt.net 2hr
Microservice Scars - Alt.net 2hrMicroservice Scars - Alt.net 2hr
Microservice Scars - Alt.net 2hr
Joshua Toth
 

La actualidad más candente (20)

Microservices Architecture: Building 'SMART' & 'Agile' Software
Microservices Architecture: Building 'SMART' & 'Agile' SoftwareMicroservices Architecture: Building 'SMART' & 'Agile' Software
Microservices Architecture: Building 'SMART' & 'Agile' Software
 
Microservice Architecture | Microservices Tutorial for Beginners | Microservi...
Microservice Architecture | Microservices Tutorial for Beginners | Microservi...Microservice Architecture | Microservices Tutorial for Beginners | Microservi...
Microservice Architecture | Microservices Tutorial for Beginners | Microservi...
 
Goto Berlin - Migrating to Microservices (Fast Delivery)
Goto Berlin - Migrating to Microservices (Fast Delivery)Goto Berlin - Migrating to Microservices (Fast Delivery)
Goto Berlin - Migrating to Microservices (Fast Delivery)
 
Microservices for Mortals
Microservices for MortalsMicroservices for Mortals
Microservices for Mortals
 
Cloud Native Computing: What does it mean, and is your app Cloud Native?
Cloud Native Computing: What does it mean, and is your app Cloud Native?Cloud Native Computing: What does it mean, and is your app Cloud Native?
Cloud Native Computing: What does it mean, and is your app Cloud Native?
 
Microservices: Decomposing Applications for Deployability and Scalability (ja...
Microservices: Decomposing Applications for Deployability and Scalability (ja...Microservices: Decomposing Applications for Deployability and Scalability (ja...
Microservices: Decomposing Applications for Deployability and Scalability (ja...
 
Delivering Essentials for Albertsons: VMware TAS’s Critical Role During the C...
Delivering Essentials for Albertsons: VMware TAS’s Critical Role During the C...Delivering Essentials for Albertsons: VMware TAS’s Critical Role During the C...
Delivering Essentials for Albertsons: VMware TAS’s Critical Role During the C...
 
Why Microservice
Why Microservice Why Microservice
Why Microservice
 
Building Cloud Native Applications
Building Cloud Native Applications Building Cloud Native Applications
Building Cloud Native Applications
 
Developing applications with a microservice architecture (SVforum, microservi...
Developing applications with a microservice architecture (SVforum, microservi...Developing applications with a microservice architecture (SVforum, microservi...
Developing applications with a microservice architecture (SVforum, microservi...
 
Microservices Architecture (MSA) - Presentation made at AEA-MN quarterly even...
Microservices Architecture (MSA) - Presentation made at AEA-MN quarterly even...Microservices Architecture (MSA) - Presentation made at AEA-MN quarterly even...
Microservices Architecture (MSA) - Presentation made at AEA-MN quarterly even...
 
Azure Spring Cloud Workshop - June 17, 2020
Azure Spring Cloud Workshop - June 17, 2020Azure Spring Cloud Workshop - June 17, 2020
Azure Spring Cloud Workshop - June 17, 2020
 
micro services architecture (FrosCon2014)
micro services architecture (FrosCon2014)micro services architecture (FrosCon2014)
micro services architecture (FrosCon2014)
 
From Monolithic applications to Microservices
From Monolithic applications to MicroservicesFrom Monolithic applications to Microservices
From Monolithic applications to Microservices
 
Introduction to Microservices
Introduction to MicroservicesIntroduction to Microservices
Introduction to Microservices
 
Microservices, Containers, Docker and a Cloud-Native Architecture in the Midd...
Microservices, Containers, Docker and a Cloud-Native Architecture in the Midd...Microservices, Containers, Docker and a Cloud-Native Architecture in the Midd...
Microservices, Containers, Docker and a Cloud-Native Architecture in the Midd...
 
Microservice Scars - Alt.net 2hr
Microservice Scars - Alt.net 2hrMicroservice Scars - Alt.net 2hr
Microservice Scars - Alt.net 2hr
 
MicroServices, yet another architectural style?
MicroServices, yet another architectural style?MicroServices, yet another architectural style?
MicroServices, yet another architectural style?
 
Patterns of Cloud Native Architecture
Patterns of Cloud Native ArchitecturePatterns of Cloud Native Architecture
Patterns of Cloud Native Architecture
 
Building Cloud Native Architectures with Spring
Building Cloud Native Architectures with SpringBuilding Cloud Native Architectures with Spring
Building Cloud Native Architectures with Spring
 

Similar a Building a High-Performance Reactive Microservices Architecture

Similar a Building a High-Performance Reactive Microservices Architecture (20)

SELECTION MECHANISM OF MICRO-SERVICES ORCHESTRATION VS. CHOREOGRAPHY
SELECTION MECHANISM OF MICRO-SERVICES ORCHESTRATION VS. CHOREOGRAPHY SELECTION MECHANISM OF MICRO-SERVICES ORCHESTRATION VS. CHOREOGRAPHY
SELECTION MECHANISM OF MICRO-SERVICES ORCHESTRATION VS. CHOREOGRAPHY
 
SELECTION MECHANISM OF MICRO-SERVICES ORCHESTRATION VS. CHOREOGRAPHY
SELECTION MECHANISM OF MICRO-SERVICES ORCHESTRATION VS. CHOREOGRAPHYSELECTION MECHANISM OF MICRO-SERVICES ORCHESTRATION VS. CHOREOGRAPHY
SELECTION MECHANISM OF MICRO-SERVICES ORCHESTRATION VS. CHOREOGRAPHY
 
Microservices.pptx
Microservices.pptxMicroservices.pptx
Microservices.pptx
 
Basics of Java Microservices: Frameworks, Examples & Use Cases
Basics of Java Microservices: Frameworks, Examples & Use CasesBasics of Java Microservices: Frameworks, Examples & Use Cases
Basics of Java Microservices: Frameworks, Examples & Use Cases
 
05 microservices microdeck
05 microservices microdeck05 microservices microdeck
05 microservices microdeck
 
Architecting for speed: how agile innovators accelerate growth through micros...
Architecting for speed: how agile innovators accelerate growth through micros...Architecting for speed: how agile innovators accelerate growth through micros...
Architecting for speed: how agile innovators accelerate growth through micros...
 
Architecting for speed: how agile innovators accelerate growth through micros...
Architecting for speed: how agile innovators accelerate growth through micros...Architecting for speed: how agile innovators accelerate growth through micros...
Architecting for speed: how agile innovators accelerate growth through micros...
 
A Guide on What Are Microservices: Pros, Cons, Use Cases, and More
A Guide on What Are Microservices: Pros, Cons, Use Cases, and MoreA Guide on What Are Microservices: Pros, Cons, Use Cases, and More
A Guide on What Are Microservices: Pros, Cons, Use Cases, and More
 
Microservices Design Principles.pdf
Microservices Design Principles.pdfMicroservices Design Principles.pdf
Microservices Design Principles.pdf
 
Meetup6 microservices for the IoT
Meetup6 microservices for the IoTMeetup6 microservices for the IoT
Meetup6 microservices for the IoT
 
Micro services vs Monolith Architecture
Micro services vs Monolith ArchitectureMicro services vs Monolith Architecture
Micro services vs Monolith Architecture
 
integeration
integerationintegeration
integeration
 
AppDev with Microservices
AppDev with MicroservicesAppDev with Microservices
AppDev with Microservices
 
Introduction to Microservices
Introduction to MicroservicesIntroduction to Microservices
Introduction to Microservices
 
Microservice Architecture Software Architecture Microservice Design Pattern
Microservice Architecture Software Architecture Microservice Design PatternMicroservice Architecture Software Architecture Microservice Design Pattern
Microservice Architecture Software Architecture Microservice Design Pattern
 
Top 5 Software Architecture Pattern Event Driven SOA Microservice Serverless ...
Top 5 Software Architecture Pattern Event Driven SOA Microservice Serverless ...Top 5 Software Architecture Pattern Event Driven SOA Microservice Serverless ...
Top 5 Software Architecture Pattern Event Driven SOA Microservice Serverless ...
 
Microservices architecture
Microservices architectureMicroservices architecture
Microservices architecture
 
Microservices
MicroservicesMicroservices
Microservices
 
Microservices with mule
Microservices with muleMicroservices with mule
Microservices with mule
 
Overcoming Ongoing Digital Transformational Challenges with a Microservices A...
Overcoming Ongoing Digital Transformational Challenges with a Microservices A...Overcoming Ongoing Digital Transformational Challenges with a Microservices A...
Overcoming Ongoing Digital Transformational Challenges with a Microservices A...
 

Más de Cognizant

Más de Cognizant (20)

Using Adaptive Scrum to Tame Process Reverse Engineering in Data Analytics Pr...
Using Adaptive Scrum to Tame Process Reverse Engineering in Data Analytics Pr...Using Adaptive Scrum to Tame Process Reverse Engineering in Data Analytics Pr...
Using Adaptive Scrum to Tame Process Reverse Engineering in Data Analytics Pr...
 
Data Modernization: Breaking the AI Vicious Cycle for Superior Decision-making
Data Modernization: Breaking the AI Vicious Cycle for Superior Decision-makingData Modernization: Breaking the AI Vicious Cycle for Superior Decision-making
Data Modernization: Breaking the AI Vicious Cycle for Superior Decision-making
 
It Takes an Ecosystem: How Technology Companies Deliver Exceptional Experiences
It Takes an Ecosystem: How Technology Companies Deliver Exceptional ExperiencesIt Takes an Ecosystem: How Technology Companies Deliver Exceptional Experiences
It Takes an Ecosystem: How Technology Companies Deliver Exceptional Experiences
 
Intuition Engineered
Intuition EngineeredIntuition Engineered
Intuition Engineered
 
The Work Ahead: Transportation and Logistics Delivering on the Digital-Physic...
The Work Ahead: Transportation and Logistics Delivering on the Digital-Physic...The Work Ahead: Transportation and Logistics Delivering on the Digital-Physic...
The Work Ahead: Transportation and Logistics Delivering on the Digital-Physic...
 
Enhancing Desirability: Five Considerations for Winning Digital Initiatives
Enhancing Desirability: Five Considerations for Winning Digital InitiativesEnhancing Desirability: Five Considerations for Winning Digital Initiatives
Enhancing Desirability: Five Considerations for Winning Digital Initiatives
 
The Work Ahead in Manufacturing: Fulfilling the Agility Mandate
The Work Ahead in Manufacturing: Fulfilling the Agility MandateThe Work Ahead in Manufacturing: Fulfilling the Agility Mandate
The Work Ahead in Manufacturing: Fulfilling the Agility Mandate
 
The Work Ahead in Higher Education: Repaving the Road for the Employees of To...
The Work Ahead in Higher Education: Repaving the Road for the Employees of To...The Work Ahead in Higher Education: Repaving the Road for the Employees of To...
The Work Ahead in Higher Education: Repaving the Road for the Employees of To...
 
Engineering the Next-Gen Digital Claims Organisation for Australian General I...
Engineering the Next-Gen Digital Claims Organisation for Australian General I...Engineering the Next-Gen Digital Claims Organisation for Australian General I...
Engineering the Next-Gen Digital Claims Organisation for Australian General I...
 
Profitability in the Direct-to-Consumer Marketplace: A Playbook for Media and...
Profitability in the Direct-to-Consumer Marketplace: A Playbook for Media and...Profitability in the Direct-to-Consumer Marketplace: A Playbook for Media and...
Profitability in the Direct-to-Consumer Marketplace: A Playbook for Media and...
 
Green Rush: The Economic Imperative for Sustainability
Green Rush: The Economic Imperative for SustainabilityGreen Rush: The Economic Imperative for Sustainability
Green Rush: The Economic Imperative for Sustainability
 
Policy Administration Modernization: Four Paths for Insurers
Policy Administration Modernization: Four Paths for InsurersPolicy Administration Modernization: Four Paths for Insurers
Policy Administration Modernization: Four Paths for Insurers
 
The Work Ahead in Utilities: Powering a Sustainable Future with Digital
The Work Ahead in Utilities: Powering a Sustainable Future with DigitalThe Work Ahead in Utilities: Powering a Sustainable Future with Digital
The Work Ahead in Utilities: Powering a Sustainable Future with Digital
 
AI in Media & Entertainment: Starting the Journey to Value
AI in Media & Entertainment: Starting the Journey to ValueAI in Media & Entertainment: Starting the Journey to Value
AI in Media & Entertainment: Starting the Journey to Value
 
Operations Workforce Management: A Data-Informed, Digital-First Approach
Operations Workforce Management: A Data-Informed, Digital-First ApproachOperations Workforce Management: A Data-Informed, Digital-First Approach
Operations Workforce Management: A Data-Informed, Digital-First Approach
 
Five Priorities for Quality Engineering When Taking Banking to the Cloud
Five Priorities for Quality Engineering When Taking Banking to the CloudFive Priorities for Quality Engineering When Taking Banking to the Cloud
Five Priorities for Quality Engineering When Taking Banking to the Cloud
 
Getting Ahead With AI: How APAC Companies Replicate Success by Remaining Focused
Getting Ahead With AI: How APAC Companies Replicate Success by Remaining FocusedGetting Ahead With AI: How APAC Companies Replicate Success by Remaining Focused
Getting Ahead With AI: How APAC Companies Replicate Success by Remaining Focused
 
Crafting the Utility of the Future
Crafting the Utility of the FutureCrafting the Utility of the Future
Crafting the Utility of the Future
 
Utilities Can Ramp Up CX with a Customer Data Platform
Utilities Can Ramp Up CX with a Customer Data PlatformUtilities Can Ramp Up CX with a Customer Data Platform
Utilities Can Ramp Up CX with a Customer Data Platform
 
The Work Ahead in Intelligent Automation: Coping with Complexity in a Post-Pa...
The Work Ahead in Intelligent Automation: Coping with Complexity in a Post-Pa...The Work Ahead in Intelligent Automation: Coping with Complexity in a Post-Pa...
The Work Ahead in Intelligent Automation: Coping with Complexity in a Post-Pa...
 

Building a High-Performance Reactive Microservices Architecture

  • 1. Cognizant 20-20 Insights | July 2017 Building a High Performance Reactive Microservices Architecture As digital rapidly reshapes every aspect of business, IT organizations need to adopt new tools, techniques and methodologies. Reactive microservices are fast emerging as a compelling viable alternative. Executive Summary Growing demand for real-time data from mobile and connected devices is generating massive pro- cessing loads on enterprise systems, increasing both infrastructure and maintenance costs. Digital business is overturning conventional operating models, requiring IT organizations to quickly mod- ernize their application architectures and update infrastructure strategies to meet ever-growing and ever-changing business demands. To meet these elevated demands, the logical step is to re-architect applications from collaborating components in a monolithic setting to discreet and modular services that interact remotely with one another. This has led to the emergence and adoption of microservices architectures. Microservices-based application architectures are composed of small, independently versioned and scalable, functionally-focused services that communicate with each other using standard protocols with well-defined interfaces. Blocking synchronous interaction between microservices, which is today’s standard approach, may not be the optimum option for microservices invocation. COGNIZANT 20-20 INSIGHTS
  • 2. 2High Performance Reactive Microservices Architecture | One of the key characteristics of the microservices architecture is technology diversity – and support for it. This extends to the programming languages in which individual services are written, including their approach toward implementing the capa- bilities embodied by microservices components within the reference architecture and the commu- nication pattern between these components. Systems or applications built on reactive principles are flexible, loosely-coupled, reliable and scalable. They are tolerant to failures and even when fail- ures occur, they respond elegantly rather than with a catastrophic crash. Reactive systems are highly responsive, offering users a great user experience. We believe that implementation of microservices architecture using reactive applications is a viable approach and offers unique possibilities. This white paper discusses aspects of reactive microservices architecture implementation and offers insights into how it helps to mitigate the growing demand for building scalable and resilient systems. It also illustrates a typical use case for microservices implementation, compares the performance of nonreactive and reactive implementation alternatives and, finally, concludes with a comparisonthatrevealshowreactivemicroservices perform better than nonreactive microservices. DEFINING MICROSERVICES ARCHITECTURE Microservices architecture is an architectural style for developing software applications as a suite of small, autonomous services – optionally deployed on different tech stacks, and written in different programming languages – that work together run- ning in its own specific process. The architecture communicates using a lightweight communication protocol, usually either HTTP request-response with resource APIs or light-weight messaging. Microservices architectural styles1 are best under- stoodincomparisontothemonolithicarchitectural style, an approach to application development where an entire application is deployed and scaled as a single deployable unit. In monoliths, business logics are packaged in a single bundle and they are run as a single process. These applications are scaled by running multiple instances horizontally.2 Reactive Systems Reactive systems are flexible, loosely-coupled and scalable. This makes them easier to develop and amenable to change. They are significantly more tolerant of failures and, as mentioned above, they respond elegantly even when fail- ures happen, rather than catastrophically failing. Reactive systems are highly responsive, offering users a great user experience. The Reactive Manifesto3 outlines characteristics of reactive systems based on four core principles: • Responsive: Responds in a timely manner, provides rapid and consistent response, estab- lishes reliable upper bounds and delivers a consistent quality of service. 1 Reactive systems are flexible, loosely-coupled and scalable. This makes them easier to develop and amenable to change.
  • 3. 3High Performance Reactive Microservices Architecture | • Resilient: Is responsive in the face of failure. Resilience is achieved though replication, con- tainment, isolation and delegation. • Elastic: Is responsive under varying work- loads and can react to changes in request load by increasing or decreasing resources allocated to service these requests. • Message-driven: Relies on asynchronous message-passing to establish a boundary between components that ensures loose cou- pling, isolation and location transparency. Reactive Microservices Implementing microservices with reactive prin- ciples offers unique possibilities whereby each component of the microservices reference architecture, including the microservices com- ponents, are individually developed, released, deployed, scaled, updated and retired. It also infuses the required resilience into the system to avoid failure cascading, which keeps the system responsive even when failures occur. Moreover, the asynchronous communication facilitated by reactive systems copes with the interaction challenges as well as the load varia- tions of modern systems. Reactive microservices can be implemented using frameworks such as Node.js,4 Qbit,5 Spring Cloud Netflix,6 etc. But we propose Vert.x7 as the foundational toolkit for building reactive micro- services. IMPLEMENTING REACTIVE MICROSERVICES ARCHITECTURE VIA VERT.X Vert.x is a toolkit to build distributed and reac- tive applications on the top of the Java Virtual Machine using an asynchronous nonblocking development model. Vert.x is designed to imple- ment an Event Bus – a lightweight message broker – which enables different parts of the application to communicate in a nonblocking and thread-safe manner. Parts of it were modeled after the Actor methodology exhibited by Erlang8 and Akka.9 Vert.x is also designed to take full advantage of today’s multi-core processors and highly concur- rent programming demands. As a toolkit, Vert.x can be used in many contexts – as a stand-alone application or embedded in another application. Figure 1 (next page) details the key features of Vert.x, revealing why it is ide- ally suited for microservices architecture. Vert.x for Microservices Architecture Implementation Vert.x offers various component to build reactive microservices-based applications. Getting there requires understanding of a few key Vert.x constructs: • A Vert.x Verticle is a logical unit of code that can be deployed on a Vert.x instance. It is com- parable to actors in the Actor Model. A typical Vert.x application will be composed of many Cognizant 20-20 Insights
Figure 2 summarizes the key microservices requirements, the features provided by Vert.x to implement these requirements, and how they fit into our reference implementation.

Vert.x Key Features (Figure 1)
• Lightweight: The Vert.x core is small (around 650kB) and lightweight in terms of distribution and runtime footprint. It can be entirely embedded in existing applications.
• Scalable: Vert.x can scale both horizontally and vertically. Vert.x can form clusters through Hazelcast10 or JGroups.11 Vert.x is capable of using all the CPUs of the machine, or all cores in the CPU.
• Polyglot: Vert.x can execute Java, JavaScript, Ruby, Python and Groovy. Vert.x components written in different languages can seamlessly talk to each other through the Event Bus.
• Fast, event-driven and nonblocking: None of the Vert.x APIs block threads; hence an application using Vert.x can handle a great deal of concurrency with a small number of threads. Vert.x provides specialized worker threads to handle blocking calls.
• Modular: The Vert.x runtime is divided into multiple components; only the components required and applicable for a particular implementation need to be used.
• Unopinionated: Vert.x is not a restrictive framework or container and does not advocate a single correct way to write an application. Instead, Vert.x provides different modules and lets developers create their own applications.

Mapping Microservices Requirements to Vert.x Features & Fitment with Our Reference Architecture (Figure 2)

SERVICE COMMUNICATION
Required Feature: Microservices interact with other microservices and exchange data with various operational and governance components in the ecosystem. Externally consumable services need to expose an API for consumption by authorized consumers. For externally consumable microservices, an edge server is required that all external traffic must pass through. The edge server can reuse the dynamic routing and load balancing capabilities based on the service discovery component described above. Internal communication between microservices and operational components, or even between the microservices themselves, may use synchronous or asynchronous message exchange patterns.
Vert.x Support: Vert.x-Web may be used to implement server-side web applications, RESTful HTTP microservices, real-time (server push) web applications, etc. These features can be leveraged to create an edge server without introducing an external component to fulfill the edge server requirements. Vert.x supports Async RPC on the (clustered) Event Bus: since Vert.x can expose a service on the Event Bus through an address, the service methods can be called through Async RPC.
Reference Implementation: We have used the Vert.x-Web module to write our own edge server. The edge server may additionally implement API gateway features for API management and governance. We have leveraged the Async RPC feature to implement inter-service communication and data exchange between microservices and operational components.
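As a rough illustration of this edge-server role, the sketch below (our own, in the Vert.x 3.x style, with hypothetical route and address names) uses Vert.x-Web to accept an external HTTP request and forward it over the Event Bus to an internal service.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.Router;

public class EdgeServerVerticle extends AbstractVerticle {

  @Override
  public void start() {
    Router router = Router.router(vertx);

    // External REST endpoint; internally the call is an asynchronous
    // Event Bus exchange with the composite service.
    router.get("/api/flights/:flightNo").handler(ctx -> {
      JsonObject query = new JsonObject().put("flight", ctx.pathParam("flightNo"));
      vertx.eventBus().<JsonObject>send("service.flight-info", query, reply -> {
        if (reply.succeeded()) {
          ctx.response()
              .putHeader("content-type", "application/json")
              .end(reply.result().body().encode());
        } else {
          ctx.response().setStatusCode(502).end();
        }
      });
    });

    vertx.createHttpServer()
        .requestHandler(router::accept)   // Vert.x 3.x style
        .listen(8080);
  }
}
```

In a fuller implementation this Verticle would also carry the API gateway concerns mentioned above, such as authentication and request throttling, before the request reaches the Event Bus.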
SERVICE SELF-REGISTRATION AND SERVICE DISCOVERY
Required Feature: Tracking the host and port data of services is simple when there are only a few services, due to the lower service count and consequently lower change rate. However, a large number of modular microservices deployed independently of each other results in a significantly more complex system landscape, as these services come and go on a continuous basis. Such rapid microservices configuration changes are hard to manage manually. Instead of manually tracking the deployed microservices and their host and port information, we need a service discovery function that enables microservices, through its API, to self-register to a central service registry on start-up. Every microservice uses the registry to discover dependent services.
Vert.x Support: Vert.x provides a service discovery component to publish and discover various resources. Vert.x provides support for several service types including Event Bus services (service proxies), HTTP endpoints, message sources and data sources. Vert.x also supports service discovery through Kubernetes,12 Docker Links,13 Consul14 and Redis15 back-ends.
Reference Implementation: During start-up, microservices register themselves in a service registry that we have implemented using a distributed Hazelcast Map,16 accessible through the Service Discovery component. In order to get high throughput, our implementation includes client-side caching for service proxies, thereby minimizing the latency added by service discovery on each client-side request.

LOCATION TRANSPARENCY
Required Feature: In a dynamic system landscape, new services are added, new instances of existing services are provisioned, and existing services are decommissioned or deprecated at a rapid rate. Service consumers need to be constantly aware of these deployment changes and of the service routing information which determines the physical location of a microservice at any given point in time. Manually updating the consumers with this information is time-consuming and error-prone. Given a service discovery function, routing components can use the discovery API to look up where the requested microservice is deployed, and load balancing components can decide which instance to route the request to if multiple instances are deployed for the requested service.
Vert.x Support: Vert.x exposes services on the Event Bus through an address. This feature provides location transparency. Any client system with access to the Event Bus can call the service by name and be routed to the appropriate available instance.
Reference Implementation: We have exposed our microservices as Event Bus addresses whose methods can be called using Asynchronous RPC. This has a positive impact on performance during inter-microservice communication as compared to synchronous HTTP/S calls between microservices.

LOAD BALANCING
Required Feature: Given the service discovery function, and assuming that multiple instances of microservices are concurrently running, routing components can use the discovery API to look up where the requested microservices are deployed, and load balancing components can decide which instance of the microservice to route the request to.
Vert.x Support: Multiple instances of the same service deployed in a Vert.x cluster refer to the same address. Calls to the service are therefore automatically distributed among the instances. This is a powerful feature that provides built-in load balancing for service calls.
Reference Implementation: We have exposed our microservices as Event Bus addresses whose methods can be called using Asynchronous RPC. Load balancing is automatically achieved when multiple instances of the same service are concurrently running on different instances of the cluster.

CENTRALIZED CONFIGURATION MANAGEMENT
Required Feature: With large numbers of microservices deployed as multiple instances on multiple servers, application-specific configuration becomes difficult to manage. Instead of a local configuration per deployed unit (i.e., microservice), centralized management of configuration is desirable from an operational standpoint.
Vert.x Support: Vert.x does not ship with a centralized configuration management server like Netflix Archaius.17 We can leverage distributed maps to keep centralized configuration information for the different microservices. Configuration updates can be propagated to the consumer applications as Event Bus messages.
Reference Implementation: We have leveraged the Hazelcast Distributed Map18 to maintain the microservice application-related property and parameter values in one place. Any update to the map sends update events to applicable clients through the Event Bus.
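The publish-and-lookup flow described above can be sketched with Vert.x's service discovery API as follows. This is our own illustration with hypothetical service names; the paper's reference implementation uses the Hazelcast-backed registry described above rather than this exact code.

```java
import io.vertx.core.Vertx;
import io.vertx.servicediscovery.Record;
import io.vertx.servicediscovery.ServiceDiscovery;
import io.vertx.servicediscovery.types.HttpEndpoint;

public class DiscoveryExample {

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    ServiceDiscovery discovery = ServiceDiscovery.create(vertx);

    // On start-up, a service publishes a record describing where it can be reached.
    Record record = HttpEndpoint.createRecord("schedule-service", "localhost", 8081, "/schedules");
    discovery.publish(record, published -> {
      if (published.succeeded()) {
        System.out.println("Published: " + published.result().getName());
      }
    });

    // A consumer looks the service up by name instead of hard-coding host and port.
    discovery.getRecord(r -> "schedule-service".equals(r.getName()), lookup -> {
      if (lookup.succeeded() && lookup.result() != null) {
        System.out.println("Found at: " + lookup.result().getLocation());
      }
    });
  }
}
```

In a clustered deployment the discovery back-end would be shared (for example through the Hazelcast map mentioned above), so a record published on one node is visible to consumers on every other node.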
HIGH AVAILABILITY, FAIL-OVER AND FAILURE MANAGEMENT
Required Feature: Since the microservices are interconnected, chains of failure in the system landscape must be avoided. If a microservice that a number of other microservices depend on fails, the dependent microservices may also start to fail, and so on. If not handled properly, large parts of the system landscape may be affected by a single failing microservice, resulting in a fragile system landscape.
Vert.x Support: Vert.x provides a circuit breaker component out of the box which helps avoid failure cascading. It also lets the application react to and recover from failure states. The circuit breaker can be configured with a timeout, a fallback-on-failure behavior and the maximum number of failure retries. This ensures that if a service goes down, the failure is handled in a predefined manner. Vert.x also supports automatic fail-over: if one instance dies, a back-up instance can automatically deploy and start the modules deployed by the first instance. Vert.x optionally supports HA groups and network partitions.
Reference Implementation: In our implementation, we have encapsulated the core service calls from the composite service through a circuit breaker pattern to avoid cascading failures.

MONITORING AND MANAGEMENT
Required Feature: An appropriate monitoring and management toolkit is needed to keep track of the state of the microservice applications and the nodes on which they are deployed. With a large number of microservices, there are more potential failure points. Centralized analysis of the logs of the individual microservices, health monitoring of the services and of the virtual machines on which they are hosted are all key to ensuring a stable systems landscape. Given that circuit breakers are in place, they can be monitored for status and to collect runtime statistics to assess the health of the system landscape and its current usage. This information can be collected and displayed on dashboards, with possibilities for setting up automatic alarms against configurable thresholds.
Vert.x Support: Vert.x supports runtime metric collection (e.g., Dropwizard,19 Hawkular,20 etc.) for various core components and exposes these metrics via JMX or as events on the Event Bus. Vert.x can be integrated with software like the ELK stack for centralized log analysis.
Reference Implementation: Our custom user interface provides near-real-time monitoring data to the administrator by leveraging web sockets. Monitoring console features include:
• Dynamic list showing the cluster nodes' availability status.
• Dynamic list showing all the deployed services and their status.
• Memory and CPU utilization information from each node.
• All the circuit breaker states currently active in the cluster.
We used an Elasticsearch, Logstash and Kibana (ELK) stack for centralized log analysis.

CALL TRACING
Required Feature: From an operational standpoint, with numerous services deployed as independent processes communicating with each other over a network, components are required for tracing the service call chain across processes and hosts to precisely identify individual service performance.
Vert.x Support: Vert.x can be integrated with software like ZipKin for call tracing.
Reference Implementation: We have used ZipKin for call tracing.

SERVICE SECURITY
Required Feature: To protect the exposed API services, the OAuth 2.0 standard is recommended.
Vert.x Support: Vert.x supports OAuth 2.0, Shiro and JWT Auth, as well as authentication implementations backed by JDBC and MongoDB.
Reference Implementation: We have leveraged the Vert.x OAuth 2.0 security module to protect our services exposed by the edge server.
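A minimal sketch of wrapping a composite-to-core call in the Vert.x circuit breaker, in the Vert.x 3.x style, is shown below. This is our own illustration; the breaker name, thresholds and fallback value are hypothetical and not the configuration used in the reference implementation.

```java
import io.vertx.circuitbreaker.CircuitBreaker;
import io.vertx.circuitbreaker.CircuitBreakerOptions;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class CircuitBreakerExample {

  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    CircuitBreaker breaker = CircuitBreaker.create("status-service-breaker", vertx,
        new CircuitBreakerOptions()
            .setMaxFailures(5)        // open the circuit after 5 consecutive failures
            .setTimeout(2000)         // fail a call that takes longer than 2 seconds
            .setFallbackOnFailure(true)
            .setResetTimeout(10000)); // attempt to close the circuit again after 10 seconds

    breaker.<JsonObject>executeWithFallback(future -> {
      // The protected operation: an asynchronous Event Bus call to a core service.
      vertx.eventBus().<JsonObject>send("service.status",
          new JsonObject().put("flight", "AI101"),
          reply -> {
            if (reply.succeeded()) {
              future.complete(reply.result().body());
            } else {
              future.fail(reply.cause());
            }
          });
    }, failure -> new JsonObject().put("status", "UNKNOWN"))  // degraded response when open
    .setHandler(ar -> System.out.println("Status: " + ar.result()));
  }
}
```

When the failure threshold is crossed, the breaker opens and the fallback value is returned immediately, so a slow or dead core service cannot tie up the composite service's event loop.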
IMPLEMENTATION: A USE CASE
The use case we considered for our reference implementation involves an externally consumable microservice which, when invoked, calls five independent microservices, composes the response from these services and returns the composite response to the caller. The composite service – FlightInfoService – is meant to be consumed by web portals and mobile apps and is designed to fetch information pertaining to a particular flight, including the flight schedule, current status, information on the airline, the specific aircraft, and weather information at the origin and destination airports.

Our Approach
• Setup: Composite and core services, including all operational governance components (for request routing, discovery, call tracing, configuration management, centralized log monitoring and security), were deployed in separate dedicated virtual machines as single Java processes.
• Process: Apache JMeter was used to invoke the FlightInfoService composite service. The three scenarios (Spring Cloud Netflix–Sync, Spring Cloud Netflix–Async and Vert.x) were executed serially in the same environment, with equivalent components running on the same virtual machines.
• Configuration settings: Composite and core services, including all governance components, were executed with their default configuration settings.

Our Technology Stack
We implemented this microservice ecosystem using two technology stacks, each adopting a different message exchange pattern.

We chose Spring Cloud Netflix for implementing the scenario where the composite service invokes the core services synchronously. Spring Cloud Netflix provides Netflix OSS integrations for Spring Boot apps through auto-configuration and binding to the Spring Environment and other Spring programming model idioms. The stack includes implementations for service discovery (Eureka), circuit breaker (Hystrix), intelligent routing (Zuul) and client-side load balancing (Ribbon) (see Figure 3).

Figure 3: Spring Cloud Netflix Implementation. An edge server (Zuul) with load balancing and circuit breaking (Ribbon/Hystrix) routes external traffic to the composite Flight Info service (C1), which calls the core services Schedule (S1), Status (S2), Aircraft (S3), Airport (S4) and Airline (S5), each in its own microservices container. Operational components: discovery (Eureka), centralized configuration (Spring Cloud Config), log monitoring (ELK), call tracing (Zipkin) and security (OAuth 2.0).
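As a rough sketch of what the synchronous, blocking composite invocation looks like on this stack, the snippet below uses a Ribbon-backed RestTemplate to call core services by their Eureka service IDs, with a Hystrix fallback around the composite call. This is our own illustration under assumed service IDs and REST paths; it is not the code that was actually benchmarked.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

@SpringBootApplication
@EnableDiscoveryClient
@EnableCircuitBreaker
public class FlightInfoApplication {

  public static void main(String[] args) {
    SpringApplication.run(FlightInfoApplication.class, args);
  }

  @Bean
  @LoadBalanced  // Ribbon resolves service IDs registered in Eureka
  RestTemplate restTemplate() {
    return new RestTemplate();
  }
}

@RestController
class FlightInfoController {

  private final RestTemplate restTemplate;

  FlightInfoController(RestTemplate restTemplate) {
    this.restTemplate = restTemplate;
  }

  @GetMapping("/flights/{flightNo}")
  @HystrixCommand(fallbackMethod = "fallback")  // circuit breaker around the composite call
  public Map<String, Object> flightInfo(@PathVariable String flightNo) {
    Map<String, Object> result = new HashMap<>();
    // Each call blocks the request thread until the core service responds.
    result.put("schedule", restTemplate.getForObject("http://schedule-service/schedules/" + flightNo, Object.class));
    result.put("status", restTemplate.getForObject("http://status-service/status/" + flightNo, Object.class));
    // ... aircraft, airport and airline calls follow the same pattern
    return result;
  }

  public Map<String, Object> fallback(String flightNo) {
    return Collections.<String, Object>singletonMap("error", "flight info temporarily unavailable");
  }
}
```

Because every core call occupies a servlet request thread for its full duration, throughput in this scenario is bounded by the size of the thread pool, which is the behavior observed in the test results below.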
We used Vert.x for implementing the reactive, asynchronous message exchange pattern between composite and core services (see Figure 4). The microservices were implemented as Vert.x Verticles. Details of how we implemented the microservices are discussed above.

Figure 4: Our Vert.x Implementation. A Vert.x-Web edge server with a Vert.x circuit breaker routes requests to the composite Flight Info service, which invokes the core services Schedule (S1), Status (S2), Aircraft (S3), Airport (S4) and Airline (S5), each in its own microservices container, over the Hazelcast-backed Event Bus. Operational components: discovery (Vert.x), centralized configuration (Hazelcast), log monitoring (ELK), call tracing (Zipkin) and security (OAuth 2.0).

Reactive microservices architecture implementation using Vert.x has the following characteristics, some of which are positively desirable for enterprise-scale microservices adoption:
• All components in the Vert.x microservice ecosystem (including the core and composite microservices and the different operational components) communicate with each other through the Event Bus, implemented using a Hazelcast cluster. Internal service communication (i.e., between composite and core services) uses Async RPC, resulting in a significant throughput improvement over synchronous HTTP, which is the default in Spring Cloud Netflix (see Figure 5); a sketch of this composition pattern follows this list.
• Vert.x supports polyglot programming, implying that individual microservices can be written in different languages, including Java, JavaScript, Groovy, Ruby and Ceylon.
• Vert.x provides automatic fail-over and load balancing out of the box.
• Vert.x is a toolkit enabling many components of a typical microservices reference architecture to be built. The microservices are deployed as Verticles.
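The following minimal sketch shows how such a composite call can fan out to the five core services over the Event Bus and merge the replies. It is our own illustration in the Vert.x 3.x style, with hypothetical address names; it is not the paper's actual FlightInfoService code.

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.CompositeFuture;
import io.vertx.core.Future;
import io.vertx.core.json.JsonObject;

import java.util.Arrays;

public class FlightInfoCompositeVerticle extends AbstractVerticle {

  @Override
  public void start() {
    vertx.eventBus().consumer("service.flight-info", message -> {
      JsonObject query = (JsonObject) message.body();

      Future<JsonObject> schedule = call("service.schedule", query);
      Future<JsonObject> status   = call("service.status", query);
      Future<JsonObject> aircraft = call("service.aircraft", query);
      Future<JsonObject> airport  = call("service.airport", query);
      Future<JsonObject> airline  = call("service.airline", query);

      // All five core calls run concurrently; the reply is composed once every call completes.
      CompositeFuture.all(Arrays.asList(schedule, status, aircraft, airport, airline))
          .setHandler(ar -> {
            if (ar.succeeded()) {
              message.reply(new JsonObject()
                  .put("schedule", schedule.result())
                  .put("status", status.result())
                  .put("aircraft", aircraft.result())
                  .put("airport", airport.result())
                  .put("airline", airline.result()));
            } else {
              message.fail(500, ar.cause().getMessage());
            }
          });
    });
  }

  // Wraps a request-reply Event Bus exchange as a Future.
  private Future<JsonObject> call(String address, JsonObject query) {
    Future<JsonObject> future = Future.future();
    vertx.eventBus().<JsonObject>send(address, query, reply -> {
      if (reply.succeeded()) {
        future.complete(reply.result().body());
      } else {
        future.fail(reply.cause());
      }
    });
    return future;
  }
}
```

Unlike the blocking variant, the composite Verticle never waits on any single core service; the overall latency is governed by the slowest core call rather than the sum of all five.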
GAUGING THE TEST RESULTS
Figure 5 shows the throughput comparison between the Spring Cloud Netflix implementations (Sync and Async-REST, as per Figure 3) and the Vert.x implementation (as per Figure 4) in our lab environment.21

Observations
• Spring Cloud Netflix–Sync implementation: We observed that the number of requests processed per second was the lowest of the three scenarios. This can be explained by the fact that the server request thread pools were getting exhausted, as the composite-to-core service invocations were synchronous and blocking. The processor and memory utilization were not significant; the core and composite services were blocked in I/O operations.
• Spring Cloud Netflix–Async-REST implementation: The async API of the Spring Framework and RxJava help increase the maximum number of requests processed per second. This is because the composite and core service implementations used the Spring Framework's alternative approach to asynchronous request processing.
• Vert.x implementation: We observed that the number of requests processed per second was the highest of the three scenarios. This is primarily due to Vert.x's nonblocking and event-driven nature. Vert.x uses an actor-based concurrency model inspired by Akka; Vert.x calls this the multi-reactor pattern, which uses all available cores on the virtual machines and runs several event loops. The edge server is based on the Vert.x-Web module, which is also nonblocking. Composite-to-core service invocations use Async RPC instead of traditional synchronous HTTP, which results in high throughput. Even the Vert.x JDBC clients use an asynchronous API to interact with databases, which is faster than blocking JDBC calls. Overall, Vert.x scales better as the number of virtual users increases.

Figure 5: Throughput Comparison, Spring Cloud Netflix vs. Vert.x. Throughput (requests/sec) plotted against the number of virtual users for Spring Cloud (Sync), Spring Cloud (Async-REST) and Vert.x.

LOOKING FORWARD
Reactive principles are not new. They have been used, proven and hardened over many years. Reactive microservices effectively leverage these ideas with modern software and hardware platforms to deliver loosely-coupled, single-responsibility, autonomous microservices that use asynchronous message passing for interservice communication.

Our research and use case implementation highlight the feasibility of adopting reactive microservices as a viable alternative for implementing a microservices ecosystem for enterprise applications, with Vert.x emerging as a compelling reactive toolkit. Our work underscored the greater operational efficiencies that can be obtained from reactive microservices.
FOOTNOTES
1 Microservices, http://www.martinfowler.com/articles/microservices.html
2 Ibid.
3 The Reactive Manifesto, http://www.reactivemanifesto.org/
4 Node.js, https://nodejs.org/en/
5 QBit, https://github.com/advantageous/qbit
6 Spring Cloud Netflix, https://cloud.spring.io/spring-cloud-netflix/
7 Vert.x, http://vertx.io/
8 Erlang, https://www.erlang.org/
9 Akka, http://akka.io/
10 Hazelcast, https://hazelcast.org/
11 JGroups, http://jgroups.org/
12 Kubernetes, https://kubernetes.io/
13 Docker Links, https://docs.docker.com/docker-cloud/apps/service-links/
14 Consul, https://www.consul.io/
15 Redis, https://redis.io/
16 Hazelcast Map, http://docs.hazelcast.org/docs/3.5/manual/html/map.html
17 Netflix Archaius, https://github.com/Netflix/archaius
18 Hazelcast Distributed Map, http://docs.hazelcast.org/docs/3.5/manual/html/map.html
19 Dropwizard, http://www.dropwizard.io/1.1.0/docs/
20 Hawkular, https://www.hawkular.org/
21 Load generation tool: JMeter (single VM) [8 cores, 8 GB RAM]. Core services: each core service deployed in a separate VM with 4 cores and 4 GB RAM. Composite service: deployed in a separate VM with 4 cores and 16 GB RAM. Edge server: deployed in a separate VM with 4 cores and 16 GB RAM. Operational components: deployed in separate VMs with 4 cores and 16 GB RAM.
ABOUT THE AUTHORS

Dipanjan Sengupta
Chief Architect, Software Engineering and Architecture Lab
Dipanjan Sengupta is a Chief Architect in the Software Engineering and Architecture Lab of Cognizant's Global Technology Office. He has extensive experience in service-oriented integration, integration of cloud-based and on-premises applications, API management and microservices-based architecture. Dipanjan has a post-graduate degree in engineering from IIT Kanpur. He can be reached at Dipanjan.Sengputa@cognizant.com.

Hitesh Bagchi
Principal Architect, Software Engineering and Architecture Lab
Hitesh Bagchi is a Principal Architect in the Software Engineering and Architecture Lab of Cognizant's Global Technology Office. He has extensive experience in service-oriented architecture, API management and microservices-based architecture, distributed application development, stream computing, cloud computing and big data. Hitesh has a B.Tech. in engineering from the University of Calcutta. He can be reached at Hitesh.Bagchi@cognizant.com.

Satyajit Prasad Chainy
Senior Architect, Software Engineering and Architecture Lab
Satyajit Prasad Chainy is a Senior Architect in the Software Engineering and Architecture Lab of Cognizant's Global Technology Office. He has extensive experience with J2EE-related technologies. Satyajit has a B.E. in computer science from the University of Amaravati. He can be reached at Satyajitprasad.Chainy@cognizant.com.

Pijush Kanti Giri
Architect, Software Engineering and Architecture Lab
Pijush Kanti Giri is an Architect in the Software Engineering and Architecture Lab of Cognizant's Global Technology Office. He has extensive experience in Eclipse plugin architecture and developing Eclipse RCP applications, API management and microservices-based architecture. Pijush has a B.Tech. in computer science from the University of Kalyani. He can be reached at Pijush-Kanti.Giri@cognizant.com.
World Headquarters
500 Frank W. Burr Blvd.
Teaneck, NJ 07666 USA
Phone: +1 201 801 0233
Fax: +1 201 801 0243
Toll Free: +1 888 937 3277

European Headquarters
1 Kingdom Street
Paddington Central
London W2 6BD England
Phone: +44 (0) 20 7297 7600
Fax: +44 (0) 20 7121 0102

India Operations Headquarters
#5/535 Old Mahabalipuram Road
Okkiyam Pettai, Thoraipakkam
Chennai, 600 096 India
Phone: +91 (0) 44 4209 6000
Fax: +91 (0) 44 4209 6060

© Copyright 2017, Cognizant. All rights reserved. No part of this document may be reproduced, stored in a retrieval system, transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the express written permission from Cognizant. The information contained herein is subject to change without notice. All other trademarks mentioned herein are the property of their respective owners.

TL Codex 2654

ABOUT COGNIZANT
Cognizant (NASDAQ-100: CTSH) is one of the world's leading professional services companies, transforming clients' business, operating and technology models for the digital era. Our unique industry-based, consultative approach helps clients envision, build and run more innovative and efficient businesses. Headquartered in the U.S., Cognizant is ranked 205 on the Fortune 500 and is consistently listed among the most admired companies in the world. Learn how Cognizant helps clients lead with digital at www.cognizant.com or follow us @Cognizant.