Organizations are quickly adopting microservice architectures to achieve better customer service and improve user experience while limiting downtime and data loss. However, transitioning from a monolithic architecture based on stateful databases to truly stateless microservices can be challenging and requires the right set of solutions.
In this webinar, learn from field experts as they discuss how to convert the data locked in traditional databases into event streams using HVR and Apache Kafka®. They will show you how to implement these solutions through a real-world demo use case of microservice adoption.
You will learn:
- How log-based change data capture (CDC) converts database tables into event streams
- How Kafka serves as the central nervous system for microservices
- How the transition to microservices can be realized without throwing away your legacy infrastructure
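The first bullet is the core mechanism: each committed row change in the database log becomes one event on a stream. As a minimal sketch of the idea (the change-record fields `op`, `table`, `before`, `after` are hypothetical and for illustration only; real CDC tools emit their own formats):

```python
# Minimal sketch: turning a log-based CDC change record into a keyed event.
# The input format here is hypothetical, not HVR's actual output.
import json

def change_to_event(change):
    """Map an insert/update/delete on a table row to a keyed event."""
    op = change["op"]  # "I" = insert, "U" = update, "D" = delete
    row = change["after"] if op in ("I", "U") else change["before"]
    return {
        "key": str(row["id"]),  # keeps all events for one row in order
        "value": json.dumps({
            "type": {"I": "created", "U": "updated", "D": "deleted"}[op],
            "table": change["table"],
            "data": row,
        }),
    }

event = change_to_event({
    "op": "U",
    "table": "orders",
    "before": {"id": 42, "status": "NEW"},
    "after": {"id": 42, "status": "SHIPPED"},
})
```

Keying events by the row's primary key is what lets a partitioned log preserve per-row ordering downstream.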
2. Housekeeping
✓ Sound check
✓ All listeners are on mute
✓ You can enter your questions at any time in the “Questions” box
✓ Questions will be answered privately during the webinar or during the Q&A
3. Webinar Hosts
Joe deBuzna
VP, Field Engineering
HVR
Chong Yan
Solutions Architect
Confluent
Simon Whitworth
Senior Director,
Solutions Architect
HVR
4. Agenda
• Who We Are
• Understanding Microservices
• Case Study
• Demonstration
• Q&A
5. What is HVR?
HVR moves high volumes of data to and from a variety of sources and targets for real-time analytics.
6. Performance—Comprehensive and Efficient
An all-in-one box solution:
• STREAMLINED: Less than 100 MB; our streamlined download* includes everything
• INITIAL LOADS & DDL REPLICATION: Heterogeneous initial loads, schema creation, auto mapping
• CHANGE DATA CAPTURE: Changes captured efficiently for real-time updates
• VALIDATION AND REPAIR: Database data validation and repair for data accuracy and assurance
• MONITORING: Graphical real-time monitoring, alerting and scheduling
*Single download per OS (Windows, Unix, Linux, Mac). Download for Mac is only for the HVR User Interface.
8. What is Kafka?
Kafka is messaging reimagined as a distributed commit log.
• It enables guaranteed pub/sub messaging with exactly-once semantics.
• It can scale horizontally to trillions of messages per day, with petabytes of storage, on commodity hardware and within the cloud.
• It enables stream processing with Java or SQL and has an ecosystem of over 100 resilient connectors to other systems.
• It ensures compatibility with schema evolution for upstream and downstream systems.
• You can replay messages from any point in time, allowing batch and stream processing to co-exist along with event auditing.
Analogy: Kafka is like TiVo (a DVR) with unlimited storage and more intelligence.
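The replay property in the last bullet is what the "DVR" analogy points at: the log is append-only and every consumer keeps its own read position. A toy in-memory stand-in (not the real Kafka API) makes the mechanism concrete:

```python
# Sketch of the replayable log behind Kafka: an append-only record list
# plus per-consumer offsets. In-memory stand-in, not the real Kafka API.

class Log:
    def __init__(self):
        self.records = []  # append-only; records are never removed on read

    def append(self, record):
        self.records.append(record)
        return len(self.records) - 1  # the record's offset

    def read_from(self, offset):
        """Any consumer can (re)read from any point in time."""
        return self.records[offset:]

log = Log()
for msg in ["order-1", "order-2", "order-3"]:
    log.append(msg)

live = log.read_from(2)    # a stream consumer picking up only the latest
replay = log.read_from(0)  # a batch job replaying history from the start
```

Because reads never consume the data, the same log serves live stream processors, batch reprocessing, and event auditing at once.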
10. Kafka is Much More than Messaging
Kafka is a blend of messaging, stream processing, ETL, and modern database designs built around a distributed log:
• Pub/Sub Messaging (cf. IBM MQ, TIBCO, RabbitMQ)
• ETL / Connectors (cf. Mulesoft, Talend, Informatica)
• Stream Processing (cf. Spark, Flink, Beam)
Plus: a streaming platform, distributed clustered storage, exactly-once semantics, designed for the cloud, inter-DC replication, and schema evolution.
12. Data Opportunities and Challenges within the Enterprise
● Digital transformation and the value of data
● Data is more diverse than it was 15 years ago (e.g. event data, unstructured data)
● Explosion of specialized data systems
● Point-to-point integrations are time consuming and brittle
● ETL needs to be faster
● Clean data is often locked away in a data warehouse
How can we build reliable and timely data flow throughout all the data systems in an organization?
[Diagram: point-to-point links among many systems, including a Data Warehouse, Hadoop, NoSQL, Oracle, SFDC, Logging, Bloomberg, OLTP databases, ActiveMQ, caches, apps, web, custom apps, microservices, monitoring, analytics, and more: any sink/source]
13. Old World: REST-Based Microservices Interconnect
[Diagram: a GUI and UI Service wired point-to-point to Orders, Returns, Pay, Fulfilment, and Stock services]
17. Buying an iPad (with REST)
• The Orders Service calls the Shipping Service to tell it to ship the item.
• The Shipping Service looks up the address to ship to (from the Customer Service).
[Diagram: Webserver submits the order to the Orders Service, which calls shipOrder() on the Shipping Service, which calls getCustomer() on the Customer Service]
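The coupling in this call chain can be sketched with plain functions standing in for HTTP calls (the function names mirror the diagram; everything else is illustrative):

```python
# Sketch of the REST-style interaction: the Orders Service must know
# about and synchronously call the Shipping Service, which in turn
# calls the Customer Service. Plain functions stand in for HTTP calls.

def get_customer(customer_id):  # Customer Service
    return {"id": customer_id, "address": "1 Main St"}

def ship_order(order):  # Shipping Service
    customer = get_customer(order["customer_id"])  # synchronous lookup
    return f"shipping order {order['id']} to {customer['address']}"

def submit_order(order):  # Orders Service
    # Tight coupling: Orders must call Shipping directly and wait for it.
    return ship_order(order)

result = submit_order({"id": 7, "customer_id": 42})
```

If the Shipping Service is down or slow, the Orders Service blocks or fails with it; that is the fragility the event-based version on the next slide removes.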
18. Buying an iPad with Events for Notification
• The Orders Service no longer knows about the Shipping Service (or any other service). Events are fire and forget.
[Diagram: Webserver submits the order to the Orders Service, which publishes an "Order Created" event to the message broker (Kafka); the Shipping Service consumes the event and still calls getCustomer() on the Customer Service via REST]
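The fire-and-forget pattern can be sketched with a tiny in-process broker standing in for Kafka (the `Broker` class and topic name are illustrative, not a real Kafka client):

```python
# Sketch of the event-based version: the Orders Service publishes an
# "order created" event and never references its consumers. A dict of
# topic -> subscriber callbacks stands in for Kafka.

class Broker:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers.get(topic, []):
            handler(event)

broker = Broker()
shipped = []

# The Shipping Service subscribes; the Orders Service never names it.
broker.subscribe("orders", lambda e: shipped.append(e["id"]))

def submit_order(order):
    broker.publish("orders", {"id": order["id"], "type": "order_created"})
    # Fire and forget: no reply from downstream services is awaited.

submit_order({"id": 7})
```

Adding a second consumer (billing, analytics, notifications) is one more `subscribe` call; the Orders Service code does not change at all.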
20. Event Streams are the Key to Scalable Service Ecosystems
The sender has no knowledge of who consumes the events it sends. This decouples the system.
[Diagram: the Orders Service publishing events without knowing its consumers]
21. Event-Driven Services
• Convert legacy databases into events with CDC
• Broadcast events
• Retain them in the log
• Evolve the event stream with services built as streaming functions
• Recast into views when you need to query
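The last step, recasting the retained log into views, can be sketched as a fold over the event stream into a keyed table (a materialized view); the event shape here is hypothetical:

```python
# Sketch of "recast into views when you need to query": replay a
# retained event stream into a keyed snapshot suitable for lookups.

def materialize(events):
    """Fold the whole log into the current state per entity id."""
    view = {}
    for e in events:
        if e["type"] == "deleted":
            view.pop(e["id"], None)
        else:  # "created" or "updated": last write wins per id
            view[e["id"]] = e["data"]
    return view

events = [
    {"type": "created", "id": 1, "data": {"status": "NEW"}},
    {"type": "updated", "id": 1, "data": {"status": "SHIPPED"}},
    {"type": "created", "id": 2, "data": {"status": "NEW"}},
]

view = materialize(events)  # queryable snapshot built from the log
```

Because the log is retained, the view can always be rebuilt from scratch, and a new service with a different view shape can bootstrap itself by replaying the same events.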
22. JOINT WEBINAR | Confluent + HVR
MICROSERVICES CASE STUDY: Regional Airline Requires Real-Time Information to Improve Employee Services
23. Business Challenge
• Flight crew personnel had to laboriously check the operations application for scheduling updates.
• IT was tasked with providing better service to the flight crew via push notifications for scheduling updates.
• This must not negatively affect existing operations, cannot be expensive, and needs to be implemented as soon as possible.
24. Technical Requirements
1. Data must be pushed to the flight crew in as close to real time as possible.
2. The large monolithic system that manages scheduling, which involves flight bidding, rest requirements, hotel stays, training dates, and more, was already near capacity.
25. Architecture: In the Beginning
Core Business Operations: Oracle, SQL Server, Sybase
• Standard legacy monolithic architecture
• Greater capacity means lengthy and expensive hardware architecture upgrades
26. Push Notifications: Take One
Core Business Operations (Oracle, SQL Server, Sybase) → Scheduling Services
• Tremendous effort in trying to squeeze more capabilities out of the existing system
• Results: goals not achieved; either overload the system or send updates only a couple of times a day
27. More Technical Requirements
3. A new decoupled service architecture will be used for the scheduling app.
4. The operational database will be integrated via low-impact log-based CDC (change data capture) technology, sending real-time changes to database copies and, as time-series data, to a stream processing platform.
5. Additional data lookups are done through the database copies.
29. HVR’s Role
HVR’s log-based change data capture provided the right combination of light touch and performance required to tap into the various source systems without affecting existing operations.
The ability to replicate changes to multiple locations, with the flexibility to enrich and convert all changes into a time series, allowed Kafka to seamlessly integrate with the legacy architecture, unleashing a new era of innovation.
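"Converting all changes into a time series" means each change is tagged with its capture time and appended rather than applied as an overwrite, so downstream processors see the full history. A rough sketch of that shape (field names are illustrative, not HVR's actual output format):

```python
# Sketch of converting database changes into time-series events: tag
# each change with a capture timestamp and append it, preserving every
# intermediate version of the row. Field names are hypothetical.

from datetime import datetime, timezone

def to_time_series(change, now=None):
    now = now or datetime.now(timezone.utc)
    return {
        "captured_at": now.isoformat(),  # when the change was captured
        "op": change["op"],              # "I", "U", or "D"
        "row": change["row"],            # the row image for this change
    }

ts = [
    to_time_series({"op": "I", "row": {"id": 1, "gate": "A1"}}),
    to_time_series({"op": "U", "row": {"id": 1, "gate": "B4"}}),
]
# Both versions of row 1 are retained, in order, with capture times,
# instead of the second change silently replacing the first.
```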
30. Kafka’s Role
The premier stream processing platform, Kafka is at the core of the new microservices implementation.
Data sent by HVR is processed in real time, and the enriched results are instantly read by the services and delivered to the flight crew.
Implementing Kafka today sets the stage for the next generation of scalable microservices tomorrow.
31. The Result: Innovation Unleashed
• Customer service exceeded goals
• Innovative new microservices currently being implemented
• A world of new possibilities now being discussed
[Diagram: Core Business Operations (Oracle, SQL Server, Sybase) feed HVR, which streams changes into Kafka; Kafka feeds the Scheduling Services and Additional Services, with presentation lookups against the replicated database copies]
32. What’s Next
• HRIS: integrate the HR Info System into the Kafka flow (ERP done today)
• Financial system integration
• Maintenance (2nd base by airline’s volume)