Typesafe did a survey of Spark usage last year and found that a large percentage of Spark users combine it with Cassandra and Kafka. This talk focuses on streaming data scenarios that demonstrate how these three tools complement each other for building robust, scalable, and flexible data applications. Cassandra provides resilient and scalable storage, with flexible data format and query options. Kafka provides durable, scalable collection of streaming data with message-queue semantics. Spark provides very flexible analytics, everything from classic SQL queries to machine learning and graph algorithms, running in a streaming model based on "mini-batches", offline batch jobs, or interactive queries. We'll consider best practices and areas where improvements are needed.
4.
About
Online Sportsbook and Gaming provider
• Every day we push more than 5
million price changes
• 160TB of data flowing through our
platform each day
7.
Big Data Circa 2010
Generally two camps. One was the offline, batch-mode processing of massive data sets done with Hadoop.
8.
Big Data Circa 2010
Akka
The other was the online, real-time processing and storage of “transactional” data at scale, as exemplified by Cassandra for the data store, and middleware tools
and libraries like Akka, Spring, etc.
9.
Big Data Circa 2010
Akka
?
Two camps together with some overlap and connectivity, but not a lot.
11.
Big Data Circa 2015
We still have this:
Akka
?
Five years later (this year), we still have these architectures in wide use, but…
12.
Big Data Circa 2015
But now we have this:
(Diagram: Big Data Streaming, running on Mesos, EC2, or bare metal.)
A new, streaming-oriented architecture is emerging, which can also be used for batch mode analysis, if we process resident data sets as finite streams.
13.
General Principles
• Spark Streaming: Analytics/aggregations
• C*: Storage, queries
• Kafka: durable message store; allows
replay of messages lost downstream.
Spark Streaming provides rich analytics.
Need a durable system of record, like Kafka, which allows repeat reads in case of loss. See https://medium.com/@foundev/real-time-analytics-with-spark-streaming-and-cassandra-2f90d03342f7 for a nice summary of design patterns and tips.
15. Mesos, EC2, or Bare Metal
Cassandra remains the flexible, scalable datastore, suitable for scalable ingestion of streaming data such as event streams (e.g., click streams from web apps) and logs.
16. Mesos, EC2, or Bare Metal
Kafka is growing in popularity as a tool for durable ingestion of diverse event streams, with partitioning for scale and organization into topics (like a typical message queue) for
downstream consumers.
17. (Diagram: producers such as Service 1, Service 2, Service 3, log and other files, and Internet services, each connected directly to every consumer: N * M links.)
One use of Kafka is to solve the problem of N*M direct links between producers and consumers. That many links is hard to manage, and it couples services too directly, which is
fragile when a given service needs to be scaled up through replication or replaced; it also couples both ends to the protocol they must speak.
18. (Diagram: the same producers and consumers, each connected once to Kafka as the central hub: N + M links.)
So Kafka can function as a central hub, yet it’s distributed and scalable so it isn’t a bottleneck or single point of failure.
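The link-count arithmetic behind the two diagrams is worth making concrete. A trivial sketch (the function names are ours):

```python
def direct_links(producers: int, consumers: int) -> int:
    """Every producer connects to every consumer: N * M links."""
    return producers * consumers

def hub_links(producers: int, consumers: int) -> int:
    """Every service connects once to the central hub: N + M links."""
    return producers + consumers

# With 10 producers and 10 consumers, a hub cuts 100 links down to 20.
assert direct_links(10, 10) == 100
assert hub_links(10, 10) == 20
```

The gap widens as services multiply, which is why the hub pays off even though it adds a hop.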
19. Kafka Usage
(Diagram: Producer 1 and Producer 2 append messages n, n+1, …, n+5 to Topic A; Consumer 1 and Consumer 2 each read from their own offsets.)
The message-queue structure looks basically like this: different producers append messages to a topic, and different consumers read the
messages in the queue at their own pace, in order.
20. Kafka Resiliency
Data loss downstream? Can replay lost
messages.
Could use C* for this, but then you’ve changed the read/write load (and hence tuning, scaling, etc. of your C* ring).
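The append-only topic with per-consumer offsets and replay, as just described, can be modeled in a few lines of plain Python. This is a toy model of the semantics, not the Kafka API:

```python
class Topic:
    """Toy append-only log with per-consumer offsets: a simplified
    model of Kafka's semantics, NOT the real Kafka API."""
    def __init__(self):
        self.log = []        # messages are never removed on read
        self.offsets = {}    # consumer name -> next index to read

    def append(self, message):
        self.log.append(message)

    def poll(self, consumer, max_messages=10):
        start = self.offsets.get(consumer, 0)
        batch = self.log[start:start + max_messages]
        self.offsets[consumer] = start + len(batch)
        return batch

    def replay(self, consumer, from_offset=0):
        """After downstream data loss, rewind and re-read."""
        self.offsets[consumer] = from_offset

topic = Topic()
for n in range(5):
    topic.append(f"msg-{n}")

fast = topic.poll("consumer-1", max_messages=5)  # reads all five
slow = topic.poll("consumer-2", max_messages=2)  # reads at its own pace
topic.replay("consumer-1")                       # downstream loss: rewind
again = topic.poll("consumer-1", max_messages=5)
assert fast == again == ["msg-0", "msg-1", "msg-2", "msg-3", "msg-4"]
assert slow == ["msg-0", "msg-1"]
```

Because reads never consume the log, a downstream failure costs nothing but a rewind, which is exactly the resiliency property the slide is pointing at.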
21. Mesos, EC2, or Bare Metal
The third element of the “troika” is Spark, the next-generation, scalable compute engine that is replacing MapReduce in Hadoop. However, Spark is flexible enough to run
in many cluster configurations: a local mode for development, a simple standalone cluster mode for simple scenarios, Mesos for general scalability and
flexibility, or integrated with Cassandra itself.
22.
Spark Streaming Dos/Don’ts
Do
• Use for rich analytics and aggregations.
• Use with Kafka/C* source if data loss not
tolerable. Or, use the write ahead log
(WAL) - less optimal.
Spark Streaming offers rich analytics, even SQL, machine learning, and graph representations. It’s a more complex engine, so there is more “room” for data loss. Hence,
use Kafka or C* for durability and replay capabilities, but if you do ingest data directly from other sources without replay capability, at least use the WAL.
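The mini-batch model mentioned above, grouping events into time slices and running batch analytics over each slice, can be simulated without Spark. This is a minimal sketch; `mini_batches` and its shape are our invention, not the Spark Streaming API:

```python
from collections import Counter
from itertools import groupby

def mini_batches(events, batch_interval):
    """Group (timestamp, value) events into time slices of
    `batch_interval` seconds: the essence of the mini-batch model."""
    keyed = sorted(events, key=lambda e: e[0] // batch_interval)
    for window, group in groupby(keyed, key=lambda e: e[0] // batch_interval):
        yield window * batch_interval, [value for _, value in group]

events = [(0.5, "bet"), (1.2, "deposit"), (1.8, "bet"), (2.1, "bet")]
# Run an aggregation (here a count) over each one-second slice.
batches = {start: Counter(values) for start, values in mini_batches(events, 1)}
assert batches[0.0] == Counter({"bet": 1})
assert batches[1.0] == Counter({"bet": 1, "deposit": 1})
assert batches[2.0] == Counter({"bet": 1})
```

Each slice is just a small batch job, which is why the full batch toolbox (SQL, MLlib, GraphX) carries over to streaming.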
23.
Spark Streaming Don’ts
Don’t
• Use for counting (use C*).
• Low-latency, per-event processing.
C* is faster and more accurate for counting, because repeated execution of Spark tasks (for error recovery, speculative execution, etc.) will cause over-counting (e.g., when using
the “aggregator” feature). Also, Spark Streaming is a mini-batch system for processing time slices of events (down to ~1 sec.). If you need low-latency and/or per-event processing,
use Akka…
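The over-counting hazard can be shown concretely: an at-least-once replay of a task inflates a naive counter, while writing each event under its unique key (the idempotent-upsert style C* encourages) stays exact. A hypothetical sketch:

```python
# A task that is retried (error recovery, speculative execution) delivers
# its events at least once. A naive increment over-counts; writing each
# event under its unique key, as a C*-style upsert would, stays exact.

events = ["e1", "e2", "e3"]
delivered = events + ["e2", "e3"]   # partial failure: the task re-ran

naive_count = 0
exact = {}                          # event id -> seen (idempotent upsert)
for event in delivered:
    naive_count += 1                # a blind increment counts duplicates
    exact[event] = True             # re-writing the same key is a no-op

assert naive_count == 5             # over-counted because of the replay
assert len(exact) == 3              # correct despite the replay
```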
24. Mesos, EC2, or Bare Metal
Other parts of a complete infrastructure include a distributed file system like CSFv2, for when you don’t need a full database, e.g., for logs that you’ll dump into the file system
and then process in batches later on with Spark.
25. Mesos, EC2, or Bare Metal
Typesafe Reactive Platform provides infrastructure tools for integrating these and other components, including Akka Streams for resilient, low-latency event processing
(based on the Reactive Streams standard for streams with dynamic back pressure), ConductR for orchestrating services, and Play for web services and consoles.
26.
Typesafe Reactive Platform
• Akka Streams: low-latency, per-event
processing.
• ConductR for orchestrating services.
• Play for web services, consoles.
• … and commercial Spark support.
Akka Streams implements the Reactive Streams standard for streams with dynamic back pressure. It sits on top of the more general Akka Actor framework for highly
distributed concurrent applications.
Typesafe offers commercial support for development teams developing advanced Spark applications. We offer production runtime support for Spark running on Mesos
clusters.
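Dynamic back pressure, the core idea Akka Streams takes from Reactive Streams, can be illustrated with a bounded buffer: a fast producer blocks until the slower consumer signals demand by draining it. This models only the idea, in plain Python threads, not the Akka Streams API:

```python
import queue
import threading
import time

buffer = queue.Queue(maxsize=2)   # bounded: at most 2 in-flight events
consumed = []

def producer():
    for n in range(6):
        buffer.put(n)             # blocks while the buffer is full

def consumer():
    for _ in range(6):
        time.sleep(0.01)          # deliberately slower than the producer
        consumed.append(buffer.get())

threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()

assert consumed == [0, 1, 2, 3, 4, 5]  # no loss: the producer was throttled
```

The bounded buffer propagates demand upstream automatically, which is what lets a slow consumer survive a fast producer without dropping events.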
27. Mesos, EC2, or Bare Metal
Finally, there’s a wealth of cluster options. You could deploy these tools on the servers of your Cassandra ring, which has excellent integration with Spark.
You can run in EC2 or on bare metal. You can use a general-purpose cluster-management system like Mesos.
28. Presented by Patrick Di Loreto
R&D Engineering Lead
Site: https://developer.williamhill.com
Twitter: https://twitter.com/patricknoir
OMNIA
Distributed & Reactive
platform for data management
29. Motivations
Omnia: Distributed & Reactive platform for data management
(Diagram: Users, Feeds, 3rd-Party Systems, Social Networks, and IoT all flow into William Hill.)
In order to be in a position to innovate, we need to control and understand our data: a need for control over the data.
30. DMP based on the Lambda architecture and the Reactive principles
What is Omnia?
(Lambda architecture)
• Chronos: Data Source
• NeoCortex: Speed Layer
• Fates: Batch Layer
• Hermes: Serving Layer
Data Flow: Input to Output
Chronos is a reliable and scalable component which collects data from different
sources and organizes it into streams of observable events.
Chronos: Data acquisition
Incident: {
  type: "bet",
  version: "1.0",
  time: "2015-09-03 06:00:10",
  acquisitionTime: "2015-09-03 06:00:06",
  source: "BetSystem",
  payload: { …. Any valid JSON }
}
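A sketch of how a consumer might validate this envelope. The field names come from the slide, but the validation rules and `parse_incident` are our own illustration, not part of Chronos:

```python
import json
from datetime import datetime

REQUIRED = {"type", "version", "time", "acquisitionTime", "source", "payload"}

def parse_incident(raw: str) -> dict:
    """Parse and validate an incident envelope (hypothetical rules)."""
    incident = json.loads(raw)
    missing = REQUIRED - incident.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    for field in ("time", "acquisitionTime"):
        # Timestamps use the format shown on the slide.
        datetime.strptime(incident[field], "%Y-%m-%d %H:%M:%S")
    return incident

raw = json.dumps({
    "type": "bet", "version": "1.0",
    "time": "2015-09-03 06:00:10",
    "acquisitionTime": "2015-09-03 06:00:06",
    "source": "BetSystem",
    "payload": {"stake": 10},
})
assert parse_incident(raw)["source"] == "BetSystem"
```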
(Diagram: Chronos pulls from data sources over TCP, HTTP, WS, JMS, HTTP Poll, SSE, …; an Adapter feeds a Converter, which feeds Persistence, producing streams such as Bets, Deposits, and Prices.)
Stream = Adapter + Converter + Persistence
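The equation “Stream = Adapter + Converter + Persistence” suggests a simple composition of three pluggable parts. A hypothetical sketch (names and shapes are ours, not the Chronos API):

```python
def make_stream(adapter, converter, persistence):
    """Compose a stream: pull raw input, convert to incidents, persist."""
    def run():
        for raw in adapter():
            persistence(converter(raw))
    return run

store = []
bets_stream = make_stream(
    adapter=lambda: iter(["bet:10", "bet:25"]),  # stands in for SSE/JMS/HTTP
    converter=lambda raw: {"type": "bet", "payload": raw.split(":")[1]},
    persistence=store.append,                    # stands in for Kafka/C* write
)
bets_stream()
assert [i["payload"] for i in store] == ["10", "25"]
```

Keeping the three roles separate is what lets one Chronos instance speak SSE and another JMS while sharing the converter and persistence machinery.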
33. Chronos: Data acquisition
(Diagram: multiple Chronos instances, each handling a stream: Chronos 1 (SSE, Bets Placed), Chronos 2 (JMS, Deposits), Chronos 3 (HTTP, Events), …, Chronos N (SSE, Twitter).)
34. High-throughput distributed messaging system
• High Availability
• Efficiency
• Durability
Chronos: Why Kafka
Kafka is a high-throughput distributed messaging system. Design principles:
• Highly Available: replicated and distributed.
• High throughput: stateless broker.
• Efficiency:
  • Disk efficiency: “Don’t fear the file system”: modern OSs optimize sequential disk operations and disk-caching strategies. Using the OS filesystem cache rather than an application-level cache is more efficient (no GC involved) and survives application restarts.
  • I/O efficiency: batching reduces small I/O operations, amortizes network round-trip overhead, and enables larger sequential disk operations.
• Durable.
Fates represents the long-term memory of Omnia. It organizes the incidents that
Chronos collects into timelines, and also derives new information, as views, by
using machine learning, logical reasoning, and time-series analysis.
Fates: Batch layer
Timelines & Views, e.g.:
• Customer 123: Login, Deposit, Bet placed, …, Logout
• Event 78: Started, Fault, Penalty, …, Goal
Streams: Bets, Deposits, Events, Session
(Fates: Batch Layer)
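Organizing incidents into per-entity timelines, as Fates does for customers and events, amounts to a group-and-sort. A minimal sketch with invented field names:

```python
from collections import defaultdict

def build_timelines(incidents):
    """Group incidents by entity, ordered by time, to form timelines."""
    timelines = defaultdict(list)
    for incident in sorted(incidents, key=lambda i: i["time"]):
        timelines[incident["entity"]].append(incident["action"])
    return dict(timelines)

incidents = [
    {"entity": "customer:123", "time": "06:00", "action": "Login"},
    {"entity": "event:78",     "time": "06:01", "action": "Started"},
    {"entity": "customer:123", "time": "06:02", "action": "Deposit"},
    {"entity": "customer:123", "time": "06:05", "action": "Bet placed"},
]
timelines = build_timelines(incidents)
assert timelines["customer:123"] == ["Login", "Deposit", "Bet placed"]
assert timelines["event:78"] == ["Started"]
```

Views would then be computed over these timelines (e.g., by the machine-learning and time-series analyses the slide mentions).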
37. Fates: Cassandra
Cassandra is the long term storage for our data.
• Highly Available (CAP)
• Linear Scalability
• Multi DC – Separation of Concerns (Production and Analytic DCs)
• High performance and optimal for WRITE operations
NeoCortex represents the short-term memory of Omnia. It offers a framework to
develop micro services on top of Apache Spark, performing fast, real-time
processing of the data acquired from Chronos and Fates.
NeoCortex: Speed layer
(Diagram: NeoCortex consumes the Bets, Deposits, Events, and Session streams; micro services on top of it produce the output.)
39. Hermes provides scalable, full-duplex communication for B2C and B2B.
Hermes: Serving Layer
(Diagram: B2C browsers via the JS API and WH apps, plus B2B apps with local caches, connect over TCP, WS, and HTTP through a load balancer to a pool of push servers backed by a distributed cache.)
40. Custom advert, bonus, data load prediction, bot detection...
Omnia Data Flow
(Diagram: the Omnia pipeline again: Chronos (Data Source) into NeoCortex (Speed Layer) and Fates (Batch Layer), served out through Hermes (Serving Layer); Input to Output.)
Users become a new data producer
41. Real-time monitoring and elasticity
Docker and Mesos: scale in and out based on demand.
Omnia on Omnia
(Diagram: the same Omnia pipeline, with each component exposing JMX metrics that feed back into Omnia itself: monitoring Omnia with Omnia.)