The document discusses real-time fraud detection patterns and architectures. It provides an overview of key technologies for real-time event processing, such as Kafka, Flume, and Spark Streaming. It then describes a high-level architecture that starts with near real-time event processing built on Kafka and Spark Streaming, and then completes the picture with micro-batching, ingestion, and batch processing.
This gives me a lot of perspective regarding the use of Hadoop
Topics are partitioned; each partition is an ordered, immutable sequence. Every message in a partition has an ID called the offset, which uniquely identifies the message within that partition.
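The partition/offset model above can be sketched in a few lines. This is a toy in-memory model, not the real Kafka client API; all class and method names here are hypothetical:

```python
# Toy model of a partitioned topic (hypothetical names, not the Kafka API).
class Partition:
    def __init__(self):
        self._log = []  # append-only and ordered; entries are never mutated

    def append(self, message):
        self._log.append(message)
        return len(self._log) - 1  # the offset: the message's unique index

    def read(self, offset):
        return self._log[offset]


class Topic:
    def __init__(self, num_partitions):
        self.partitions = [Partition() for _ in range(num_partitions)]

    def send(self, key, message):
        # Simple keyed partitioner: the same key always lands in the
        # same partition, so per-key ordering is preserved.
        p = hash(key) % len(self.partitions)
        return p, self.partitions[p].append(message)
```

Note that ordering is only guaranteed within a single partition; two messages sent with the same key go to the same partition and so keep their relative order.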
Kafka retains all messages for a fixed, configurable amount of time, whether or not they have been consumed.
The broker does not wait for acknowledgments from consumers.
The only metadata retained per consumer is its position in the log: the offset.
So adding many consumers is cheap.
On the other hand, consumers carry more responsibility and are more challenging to implement correctly.
And batch-oriented consumers are not a problem: since messages are retained, a consumer can read as far behind the head of the log as it likes.
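Because the consumer's entire state is one offset, both points above fall out naturally: adding a consumer costs a single integer, and batching is just reading a larger slice. A minimal sketch, again with hypothetical names rather than the real client API:

```python
# Sketch of a pull-based consumer: its only state is its offset.
class Consumer:
    def __init__(self, log):
        self.log = log   # shared, immutable message list (one partition)
        self.offset = 0  # the consumer's entire state: its position

    def poll(self, max_records=10):
        # Batching is trivial: read a slice starting at the offset.
        batch = self.log[self.offset:self.offset + max_records]
        self.offset += len(batch)
        return batch


log = ["e1", "e2", "e3", "e4", "e5"]
fast = Consumer(log)
slow = Consumer(log)  # adding a consumer costs one integer of state
assert fast.poll(3) == ["e1", "e2", "e3"]
assert slow.poll(2) == ["e1", "e2"]  # independent position, no broker bookkeeping
```

The flip side, as noted above, is that tracking and committing that offset correctly is now the consumer's problem.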
3 partitions, each replicated 3 times.
The producer chooses how many replicas must ACK a message before it is considered committed.
This is the tradeoff between speed and reliability.
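The tradeoff can be made concrete: the message counts as committed once the N-th fastest replica has acknowledged it, so requiring more ACKs buys durability at the cost of latency. A toy illustration (the function and parameter names are made up for this sketch, not Kafka configuration keys):

```python
def commit_latency(replica_latencies_ms, min_acks):
    """Time until min_acks replicas have acknowledged: the min_acks-th
    fastest ack determines when the message counts as committed."""
    return sorted(replica_latencies_ms)[min_acks - 1]


latencies = [2, 15, 40]  # three replicas, one of them slow
assert commit_latency(latencies, min_acks=1) == 2   # fast, least durable
assert commit_latency(latencies, min_acks=3) == 40  # slow, survives 2 failures
```

With one ACK the commit is fast but a single failure can lose the message; with all three it is as slow as the slowest replica but survives two replica failures.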
A consumer can read from one or more partition leaders, but two consumers in the same group can never read the same partition.
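That constraint is enforced by partition assignment: each partition is handed to exactly one consumer in the group. A simple round-robin assignment sketch (illustrative only; real assignment strategies are more involved):

```python
# Toy group partition assignment: each partition goes to exactly one
# consumer in the group, so group members never share a partition.
def assign(partitions, consumers):
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment


a = assign(partitions=[0, 1, 2, 3, 4, 5], consumers=["c1", "c2", "c3"])
assert a == {"c1": [0, 3], "c2": [1, 4], "c3": [2, 5]}
# Every partition appears exactly once, so per-partition ordering is
# observed by a single group member.
flat = sorted(p for ps in a.values() for p in ps)
assert flat == [0, 1, 2, 3, 4, 5]
```

This is also why having more consumers in a group than partitions leaves some consumers idle: there is no partition left to assign to them.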
Leaders obviously do more work, but leadership is balanced across the nodes.
We reviewed the basic components of the system, and it may seem complex. In the next section we'll see how simple it actually is to get started with Kafka.