3. SCALING MONOLITHIC APPS
Payments, Adult, E-Learning, Entertainment, Music, Search
12. Let's use asynchronous processing
"I shall not block incoming requests, to keep serving."
There's latency between each remote call.
13. Let's use asynchronous processing: Thread Executor!
14.
private ExecutorService threadPool = Executors.newFixedThreadPool(2);
final List<T> batches = new ArrayList<T>();
//…
Callable<T> t = new Callable<T>() {
    public T call() throws Exception { // Callable defines call(), not run()
        T result = callDatabase();
        synchronized (batches) {
            batches.add(result);
            return result;
        }
    }
};
Future<T> f = threadPool.submit(t);
T result = f.get();

Vanilla background-processing
15. (same code as slide 14) New allocation by request
16. (same code) Queue-based message passing vs. vanilla background-processing
17. (same code) What if this message fails?
18. (same code) [censored]
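The queue-based handoff the slides contrast with vanilla background-processing can be sketched with a plain bounded `BlockingQueue`; this is a minimal, hypothetical example (no framework, names invented) where a full buffer shows up as a failed `offer` instead of silent unbounded growth:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueHandoff {
    public static void main(String[] args) {
        // Bounded buffer with 2 slots: overload becomes visible at the handoff point
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        // Producer hands off three messages; the third cannot fit
        System.out.println(queue.offer("msg-0")); // accepted
        System.out.println(queue.offer("msg-1")); // accepted
        System.out.println(queue.offer("msg-2")); // rejected: buffer full, the message "fails"

        // Consumer drains one slot, freeing capacity for a retry
        queue.poll();
        System.out.println(queue.offer("msg-2")); // accepted on retry
    }
}
```

The failed `offer` is exactly the "what if this message fails?" case: the producer has to decide, explicitly, whether to retry, drop, or push back.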
19. Mixing latency with queue-based handoff
http://ferd.ca/queues-don-t-fix-overload.html
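The linked post's point can be made concrete with a toy back-of-the-envelope simulation (the 100 msg/s in, 80 msg/s out rates are invented for illustration): when arrivals outpace the consumer, an unbounded queue just converts overload into ever-growing backlog and latency:

```java
public class OverloadSim {
    public static void main(String[] args) {
        int inRate = 100;   // messages arriving per second
        int outRate = 80;   // messages the consumer can process per second
        long backlog = 0;   // current queue length

        for (int second = 1; second <= 5; second++) {
            backlog += inRate;                      // enqueue this second's arrivals
            backlog -= Math.min(backlog, outRate);  // dequeue what the consumer can
            // Latency of the newest message = time to drain everything ahead of it
            double latencySec = (double) backlog / outRate;
            System.out.println("t=" + second + "s backlog=" + backlog
                    + " latency=" + latencySec + "s");
        }
    }
}
```

Backlog grows by 20 messages every second, without bound: the queue didn't fix the overload, it only hid it behind rising latency.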
27. Re-using Threads too: Event Loop
"I'm an event loop, consuming messages in the right order."
28.
//RingBufferProcessor with 32 slots by default
RingBufferProcessor<Integer> processor = RingBufferProcessor.create();

//Subscribe to receive events
processor.subscribe(
    //Create a subscriber from a lambda/method ref
    SubscriberFactory.unbounded((data, s) -> System.out.println(data))
);

//Dispatch data asynchronously
int i = 0;
while (i++ < 100000) processor.onNext(i);

//Terminate the processor
processor.shutdown();
29.
//A second subscriber to receive the same events in a distinct thread
processor.subscribe(
    SubscriberFactory.unbounded((data, s) -> {
        //A slow callback returning false when not interested in data anymore
        if (!sometimeSlow(data)) {
            //Shut down the consumer thread
            s.cancel();
        }
    })
);
31. Hold on! The guy said a bounded number of slots.
32. So we still block when the buffer is full!
33. …So why send more requests?
34. Reactive Streams!
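Reactive Streams means the Publisher / Subscriber / Subscription / Processor contract, where the subscriber signals demand with request(n) and the publisher never emits more than was asked. As a minimal sketch (using JDK 9's `java.util.concurrent.Flow`, which mirrors the Reactive Streams interfaces; the deck itself predates it):

```java
import java.util.concurrent.Flow;

public class DemandDemo {
    // A synchronous publisher that only emits when the subscriber asks
    static class RangePublisher implements Flow.Publisher<Integer> {
        final int count;
        RangePublisher(int count) { this.count = count; }

        public void subscribe(Flow.Subscriber<? super Integer> sub) {
            sub.onSubscribe(new Flow.Subscription() {
                int next = 0;
                boolean done = false;
                public void request(long n) {
                    // Emit at most n items: this is the backpressure contract
                    while (n-- > 0 && next < count) sub.onNext(next++);
                    if (next == count && !done) { done = true; sub.onComplete(); }
                }
                public void cancel() { done = true; }
            });
        }
    }

    public static void main(String[] args) {
        new RangePublisher(3).subscribe(new Flow.Subscriber<Integer>() {
            Flow.Subscription s;
            public void onSubscribe(Flow.Subscription s) {
                this.s = s;
                s.request(1);                 // pull the first item only
            }
            public void onNext(Integer item) {
                System.out.println("got " + item);
                s.request(1);                 // ask for exactly one more
            }
            public void onError(Throwable t) { t.printStackTrace(); }
            public void onComplete() { System.out.println("done"); }
        });
    }
}
```

Instead of blocking on a full buffer, the consumer simply stops requesting: the producer cannot send more than the outstanding demand.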
53.
RingBufferProcessor<Integer> processor = RingBufferProcessor.create();
//Subscribe to receive events […]

//Data access gated by a Publisher with backpressure
PublisherFactory.forEach(
    sub -> {
        if (sub.context().hasNext())
            sub.onNext(sub.context().readInt());
        else
            sub.onComplete();
    },
    sub -> sqlContext(),
    context -> context.close()
)
.subscribe(processor);
//Terminate the processor [..]
54. (same code) Connect the processor to this publisher and start requesting
55. (same code) For the newly connected processor, create some SQL context
56. (same code) Keep invoking the first callback until there are no more pending requests
58. And everything in a controlled fashion
What about combining multiple asynchronous calls?
59. …Including errors and completion
60. Reactive Extensions!
61. FlatMap and Monads… Nooooo Please No

Streams.just('doge').flatMap { name ->
    Streams.just(name)
        .observe { println 'so wow' }
        .map { 'much monad' }
}.consume {
    assert it == 'much monad'
}
62. (same code) A publisher that only sends "doge" on request
63. (same code) Sub-stream definition
64. (same code) All sub-streams are merged under a single sequence
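The flatMap-is-a-monad joke can be restated with plain JDK types: `CompletableFuture.thenCompose` has the same shape, flattening a future-of-a-future the way flatMap flattens sub-streams into one sequence. A rough analogy, not the Reactor API (the `just` helper here is made up to echo the slide):

```java
import java.util.concurrent.CompletableFuture;

public class MuchMonad {
    // A helper returning an async value, standing in for Streams.just(name)
    static CompletableFuture<String> just(String v) {
        return CompletableFuture.completedFuture(v);
    }

    public static void main(String[] args) {
        String result = just("doge")
                // thenCompose = flatMap: the lambda returns another future,
                // and the result is flattened rather than nested
                .thenCompose(name -> just(name)
                        .thenApply(n -> {          // thenApply = map
                            System.out.println("so wow");
                            return "much monad";
                        }))
                .join();
        System.out.println(result);
    }
}
```

Without `thenCompose`, mapping to an async call would yield a `CompletableFuture<CompletableFuture<String>>`; the "monad" part is just the flattening.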
65. Scatter-Gather and Fault Tolerance

Streams.merge(
    userService.filteredFind("Rick"),   // Stream of User
    userService.filteredFind("Morty")   // Stream of User
)
.buffer()                               // Accumulate all results in a List
.retryWhen(errors ->                    // Stream of Errors
    errors
        .zipWith(Streams.range(1, 3), t -> t.getT2())
        .flatMap(tries -> Streams.timer(tries))
)
.consume(System.out::println);
66. (same code) Interleaved merge from 2 upstream publishers
67. (same code) Up to 3 tries
68. (same code) All sub-streams are merged under a single sequence
69. (same code) Delay each retry
81.
NetStreams.<String, String>httpServer(spec ->
    spec.codec(StandardCodecs.STRING_CODEC).listen(3000)
).ws("/", channel -> {
    System.out.println("Connected a websocket client: " + channel.remoteAddress());
    return somePublisher
        .window(1000)
        .flatMap(s -> channel.writeWith(
            s.reduce(0f, (prev, trade) -> (trade.getPrice() + prev) / 2)
             .map(Object::toString)
        ));
}).start().await();

Listen on port 3000 and convert bytes into String, inbound/outbound
82. (same code) Upgrade clients to websocket on the root URI
83. (same code) Flush some data every 1000 items with writeWith + window
84. (same code) Close the connection when flatMap completes, which is when all windows are done
86. Now
reactive-streams 1.0.0
50% guide complete: http://projectreactor.io/docs/reference
reactor-*.2.0.3.RELEASE (2.0.4 around the corner)
reactor-*.2.1.0.BUILD-SNAPSHOT (early access, no breaking changes)
87. Now
Initial Reactor 2 support in:
Spring Integration 4.2
Spring Messaging 4.2
Spring Boot 1.3
Spring XD 1.2
Grails 3.0
88. After Now
Spring Integration DSL + Reactive Streams
Dynamic Subscribers on Predefined Channels!
Best tool for each job: SI for integrating, Reactor for scaling up
"Too fast to be true"
89. SI Java DSL + Reactive Streams Preview

@Configuration
@EnableIntegration
public static class ContextConfiguration {

    @Autowired
    private TaskScheduler taskScheduler;

    @Bean
    public Publisher<Message<String>> reactiveFlow() {
        return IntegrationFlows
            .from("inputChannel")
            .split(String.class, p -> p.split(","))
            .toReactiveStreamsPublisher();
    }

    @Bean
    public Publisher<Message<Integer>> pollableReactiveFlow() {
        return IntegrationFlows
            .from("inputChannel")
            .split(e -> e.get().getT2().setDelimiters(","))
            .<String, Integer>transform(Integer::parseInt)
            .channel(Channels::queue)
            .toReactiveStreamsPublisher(this.taskScheduler);
    }
}
90. After Now
Reactor + Spring Cloud
• Annotation-driven FastData
• Async IO (proxy, client)
• Circuit breaker, Bus, …
Reactor + Spring XD
• Scale up any XD pipeline
• Reactive backpressure in XD
91.
@EnableReactorModule(concurrency = 5)
public class PongMessageProcessor implements ReactiveProcessor<Message, Message> {

    @Override
    public void accept(Stream<Message> inputStream, ReactiveOutput<Message> output) {
        output.writeOutput(
            inputStream
                .map(simpleMap())
                .observe(simpleMessage())
        );
    }
    //…
}
92. (same code) Split the input XD channel across 5 threads, with a blazing-fast Processor
93. (same code) Register the sequence to write to the output channel after some operations
95. After
Reactive IPC for the JVM
<3 RxNetty + reactor-net <3
Reactor 2.0.3+ already previews the concept and API flavor…
because REACTOR DOESN'T KNOW WAITING (LOL)
96. Future??
• RxJava 2.0 timeline (2016 and after?)
• Thanks to interop, Reactive Extensions, and naming conventions, it can converge with reactive-streams
• https://github.com/ReactiveX/RxJava/wiki/Reactive-Streams
97. Take-away
• Distributed systems are the new cool and come at some cost; two big ones are latency and failure tolerance
• Asynchronous processing and error handling by design deal with these two problems -> Reactor, Reactive Extensions / Streams
• However, to fully operate, asynchronous processing should be bounded proactively (stop-read) -> Reactive Streams