Large-scale Logging Made Easy: Meetup at Deutsche Bank 2024
Good and Wicked Fairies, and the Tragedy of the Commons: Understanding the Performance of Java 8 Streams
1. Good and Wicked Fairies, and the Tragedy of the Commons: Understanding the Performance of Java 8 Streams
Kirk Pepperdine @kcpeppe
Maurice Naftalin @mauricenaftalin
#java8fairies
2. About Kirk
• Specialises in performance tuning
• speaks frequently about performance
• author of performance tuning workshop
• Co-founder jClarity
• performance diagnostic tooling
• Java Champion (since 2006)
17. Agenda
• Define the problem
• Implement a solution
• Analyse performance
– find the bottleneck
• Fork/Join parallelism in the real world
#java8fairies
22. Case Study: grep -b
grep -b:
“The offset in bytes of a matched pattern
is displayed in front of the matched line.”
rubai51.txt:
The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall bring it back to cancel half a Line
Nor all thy Tears wash out a Word of it.
$ grep -b 'W.*t' rubai51.txt
44:Moves on: nor all thy Piety nor Wit
122:Nor all thy Tears wash out a Word of it.
25. Case Study: grep -b
Obvious iterative implementation:
- process the file line by line
- maintain the byte displacement in an accumulator variable
Objective here is to implement it in stream code
- no accumulators allowed!
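The iterative baseline the slide describes might look like this. A minimal sketch: class and method names are illustrative, not from the talk, and it assumes '\n' line terminators.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Iterative grep -b: read line by line, keeping the running
// byte offset in an accumulator variable.
public class IterativeGrepB {
    static List<String> grepB(BufferedReader reader, Pattern pattern) throws IOException {
        List<String> matches = new ArrayList<>();
        long offset = 0;                          // accumulator: bytes consumed so far
        String line;
        while ((line = reader.readLine()) != null) {
            if (pattern.matcher(line).find()) {
                matches.add(offset + ":" + line);
            }
            offset += line.getBytes(StandardCharsets.UTF_8).length + 1; // +1 for '\n'
        }
        return matches;
    }
}
```

Running it over rubai51.txt with the pattern 'W.*t' reproduces the 44:/122: offsets from the grep -b output above.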
28. Streams – Why?
• Intention: replace loops for aggregate operations
• more concise, more readable, composable operations, parallelizable
instead of writing this:
List<Person> people = …
Set<City> shortCities = new HashSet<>();
for (Person p : people) {
    City c = p.getCity();
    if (c.getName().length() < 4) {
        shortCities.add(c);
    }
}
we’re going to write this:
Set<City> shortCities = people.stream()
    .map(Person::getCity)
    .filter(c -> c.getName().length() < 4)
    .collect(toSet());
30. Streams – Why?
• Intention: replace loops for aggregate operations
• more concise, more readable, composable operations, parallelizable
instead of writing this:
List<Person> people = …
Set<City> shortCities = new HashSet<>();
for (Person p : people) {
    City c = p.getCity();
    if (c.getName().length() < 4) {
        shortCities.add(c);
    }
}
we’re going to write this:
Set<City> shortCities = people.parallelStream()
    .map(Person::getCity)
    .filter(c -> c.getName().length() < 4)
    .collect(toSet());
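For reference, the slide's fragment can be made self-contained like this; Person and City are hypothetical records standing in for the slide's domain classes.

```java
import java.util.List;
import java.util.Set;
import static java.util.stream.Collectors.toSet;

// Runnable version of the slide's example: collect the cities
// with names shorter than 4 characters.
public class ShortCities {
    record City(String name) { String getName() { return name; } }
    record Person(City city) { City getCity() { return city; } }

    static Set<City> shortCities(List<Person> people) {
        return people.stream()                    // or people.parallelStream()
                     .map(Person::getCity)
                     .filter(c -> c.getName().length() < 4)
                     .collect(toSet());
    }
}
```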
71. Why Shouldn’t We Optimize Code?
Because we don’t have a problem
- no performance target!
Else there is a problem, but not in our process
- the OS is struggling!
Else there’s a problem in our process, but not in the code
- GC is using all the cycles!
Now we can consider the code
Else there’s a problem in the code… somewhere
- now we can go and profile it
Demo…
74. So, streaming IO is slow
If only there was a way of getting all that data into memory…
Demo…
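One way to get all that data into memory, which the talk later demonstrates with a MappedByteBuffer, is to memory-map the file. A minimal sketch, limited to files under 2GB since a MappedByteBuffer is int-indexed:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Map a whole file into memory instead of streaming it.
// The mapping stays valid after the channel is closed.
public class MapFile {
    static MappedByteBuffer map(Path file) throws IOException {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        }
    }
}
```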
76. Agenda
• Define the problem
• Implement a solution
• Analyse performance
– find the bottleneck OR – have a bright idea!
• Fork/Join parallelism in the real world
#java8fairies
87. What’s Happened?
Physical limitations of the technology:
• signal leakage
• heat dissipation
• speed of light!
– 30cm = 1 light-nanosecond
We’re not going to get faster cores,
we’re going to get more cores!
108. LineSpliterator
[Diagram: the poem’s lines laid out in a MappedByteBuffer; the LineSpliterator splits the buffer at mid, then advances the split point to the next '\n', so the new spliterator coverage and the remaining spliterator coverage each contain only whole lines.]
Demo…
110. Parallelizing grep -b
• Splitting action of LineSpliterator is O(log n)
• Collector no longer needs to compute index
• Result (relatively independent of data size):
- sequential stream ~2x as fast as iterative solution
- parallel stream >2.5x as fast as sequential stream
- on 4 hardware threads
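A sketch of how such a LineSpliterator might be written; the talk's actual implementation may differ in detail. It works over a plain ByteBuffer (a MappedByteBuffer is one) and assumes '\n' line terminators:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Spliterator;
import java.util.function.Consumer;

// trySplit() cuts the byte range at its midpoint, then moves the cut
// forward to the next '\n' so both halves cover only whole lines.
// Splitting is cheap: each split scans only from mid to the next newline.
public class LineSpliterator implements Spliterator<String> {
    private final ByteBuffer buffer;
    private int lo;               // first byte of the next unread line
    private final int hi;         // last byte of this spliterator's range (inclusive)

    public LineSpliterator(ByteBuffer buffer, int lo, int hi) {
        this.buffer = buffer;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    public boolean tryAdvance(Consumer<? super String> action) {
        if (lo > hi) return false;
        int end = lo;
        while (end <= hi && buffer.get(end) != '\n') end++;
        byte[] bytes = new byte[end - lo];
        for (int i = lo; i < end; i++) bytes[i - lo] = buffer.get(i);
        action.accept(new String(bytes, StandardCharsets.UTF_8));
        lo = end + 1;             // skip past the '\n'
        return true;
    }

    @Override
    public Spliterator<String> trySplit() {
        int mid = (lo + hi) >>> 1;
        while (mid <= hi && buffer.get(mid) != '\n') mid++;  // move to a line boundary
        if (mid >= hi) return null;                          // nothing left to split off
        Spliterator<String> prefix = new LineSpliterator(buffer, lo, mid);
        lo = mid + 1;
        return prefix;
    }

    @Override
    public long estimateSize() { return Math.max(0, hi - lo + 1); }

    @Override
    public int characteristics() { return ORDERED | IMMUTABLE | NONNULL; }
}
```

Usage: `StreamSupport.stream(new LineSpliterator(buf, 0, buf.limit() - 1), true)` gives a parallel stream of lines.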
112. Collectors Cost Extra!
Depends on the performance of the accumulator and combiner functions
• toList(), toSet(), toCollection() – performance normally dominated by the accumulator
• but allow for the overhead of managing multithreaded access to non-threadsafe containers for the combine operation
• toMap(), toConcurrentMap() – map merging is slow. Resizing maps, especially concurrent maps, is very expensive. Whenever possible, presize all data structures, maps in particular.
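The presizing advice can be followed with the toMap overload that takes a map factory. A sketch, where the expected-size estimate is an assumption you must supply yourself:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Presize the target map so it never resizes during collection.
public class PresizedCollect {
    static Map<String, Integer> lengths(List<String> words) {
        int expected = words.size();     // assumed upper bound on distinct keys
        return words.stream().collect(Collectors.toMap(
                w -> w,                                   // key
                String::length,                           // value
                (a, b) -> a,                              // merge: keep first on duplicates
                // capacity = expected / load factor, so no resize occurs
                () -> new HashMap<>((int) (expected / 0.75f) + 1)));
    }
}
```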
115. How Does It Perform?
• Total run time: 261.7 seconds
• Max: 39.2 secs, Min: 9.2 secs, Median: 22.0 secs
116. Tragedy of the Commons
Garrett Hardin, ecologist (1968):
Imagine the grazing of animals on a common ground.
Each flock owner gains if they add to their own flock.
But every animal added to the total degrades the
commons a small amount.
120. Tragedy of the Commons
You have a finite amount of hardware
– it might be in your best interest to grab it all
– but if everyone behaves the same way…
With many parallelStream() operations running concurrently,
performance is limited by the size of the common thread
pool and the number of cores you have
Be a good neighbor
121. Configuring Common Pool
Size of the common ForkJoinPool is
• Runtime.getRuntime().availableProcessors() - 1
Configurable via system properties:
-Djava.util.concurrent.ForkJoinPool.common.parallelism=N
-Djava.util.concurrent.ForkJoinPool.common.threadFactory
-Djava.util.concurrent.ForkJoinPool.common.exceptionHandler
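A quick way to check what you got; the method name is illustrative:

```java
import java.util.concurrent.ForkJoinPool;

// Inspect the common pool. Its default parallelism is
// availableProcessors() - 1 (minimum 1), unless overridden with
// -Djava.util.concurrent.ForkJoinPool.common.parallelism=N at startup.
public class CommonPoolInfo {
    static int commonParallelism() {
        return ForkJoinPool.commonPool().getParallelism();
    }
}
```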
122. Fork-Join
Support for Fork-Join added in Java 7
• difficult coding idiom to master
Used internally by parallel streams
• uses a spliterator to segment the stream
• each segment is processed by a ForkJoinWorkerThread
How fork-join works and performs is important to latency
125. Fork-Join Performance
Fork Join comes with significant overhead
• each chunk of work must be large enough to amortize the
overhead
126. C/P/N/Q Performance Model
C - number of submitters
P - number of CPUs
N - number of elements
Q - cost of the operation
127. When to go Parallel
The workload of the intermediate operations must be great enough to outweigh the overheads (~100µs):
– initializing the fork/join framework
– splitting
– concurrent collection
Often quoted as N x Q, where N is the size of the data set (typically > 10,000) and Q is the processing cost per element
128. Kernel Times
CPU will not be the limiting factor when
• CPU is not saturated
• kernel times exceed 10% of user time
More threads will decrease performance
• predicted by Little’s Law
129. Common Thread Pool
Fork-Join by default uses a common thread pool
• default number of worker threads == number of logical cores - 1
• Always contains at least one thread
Performance is tied to whichever you run out of first
• availability of the constraining resource
• number of ForkJoinWorkerThreads/hardware threads
131. Little’s Law
Fork-Join is a work queue
• work queue behavior is typically modeled using Little’s Law
Number of tasks in a system equals the arrival rate times the amount of time it takes to clear an item
Example: a task is submitted every 500ms (2 per second), and each takes 2.8 seconds to clear
Number of tasks = 2/sec * 2.8 seconds
= 5.6 tasks
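The slide's arithmetic as a worked example (L = λ × W; names are illustrative):

```java
// Little's Law: tasks in system = arrival rate × time to clear one task.
public class LittlesLaw {
    static double tasksInSystem(double arrivalRatePerSec, double serviceTimeSec) {
        return arrivalRatePerSec * serviceTimeSec;
    }
}
```

With the slide's numbers, tasksInSystem(2.0, 2.8) gives 5.6 tasks resident in the pool on average.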
132. Components of Latency
Latency is time from stimulus to result
• internally, latency consists of active and dead time
Reducing dead time assumes you
• can find it, and are able to fill it in with useful work
133. From Previous Example
if there is available hardware capacity then
    make the pool bigger
else
    add capacity
    or tune to reduce the strength of the dependency
134. ForkJoinPool Observability
• In an application where many parallel stream operations run concurrently, performance will be affected by the size of the common thread pool
• too small can starve threads of needed resources
• too big can cause threads to thrash on contended resources
ForkJoinPool comes with no visibility
• no metrics to help us tune
• instrument ForkJoinTask.invoke()
• gather measures that can be fed into Little’s Law
135. Instrumenting ForkJoinPool
• Collect
  • service times (time submitted to time returned)
  • inter-arrival times

public final V invoke() {
    ForkJoinPool.common.getMonitor().submitTask(this);
    int s;
    if ((s = doInvoke() & DONE_MASK) != NORMAL)
        reportException(s);
    ForkJoinPool.common.getMonitor().retireTask(this);
    return getRawResult();
}
136. Performance
Submit log parsing to our own ForkJoinPool
new ForkJoinPool(16).submit(() -> ……… ).get()
new ForkJoinPool(8).submit(() -> ……… ).get()
new ForkJoinPool(4).submit(() -> ……… ).get()
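A self-contained version of the trick; the method name and sizes are illustrative, and note that running a parallel stream on the submitting pool's workers is observed implementation behavior, not a documented guarantee of the Stream API:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.IntStream;

// Run a parallel stream inside our own ForkJoinPool instead of the
// common pool, so its parallelism can be chosen per workload.
public class OwnPool {
    static long countEvens(int poolSize, int n) throws InterruptedException, ExecutionException {
        ForkJoinPool pool = new ForkJoinPool(poolSize);
        try {
            return pool.submit(() ->
                    IntStream.range(0, n).parallel()
                             .filter(i -> i % 2 == 0)
                             .count()
            ).get();
        } finally {
            pool.shutdown();
        }
    }
}
```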
139. Conclusions
Performance mostly doesn’t matter
But if you must…
• sequential streams normally beat iterative solutions
• parallel streams can utilize all cores, providing
  - the data is efficiently splittable
  - the intermediate operations are sufficiently expensive and are CPU-bound
  - there isn’t contention for the processors
#java8fairies