8. Why can Caches be a pain?
...Because we forget that...
● a Cache hosts our DATA
● a cache IS NOT JUST AN ASSOCIATIVE ARRAY
● a cache is NOT a BUFFER
● a cache is NOT a POOL
● Business application != Twitter / Facebook / ... [Michael Plöd]
Caching is not what you think it is!
https://medium.com/@mauridb/caching-is-not-what-you-think-it-is-5104f8891b51
<< ...caching should be done to decrease costs needed to increase performance
and scalability and NOT to solve performance and scalability problems… >>
[Davide Mauri]
9. Why can Caches be a pain?
...Because we forget to set well-defined goals and trade-offs between:
– Offloading : decrease the load on a system with limited and/or expensive resources
– Performance : decrease network/CPU usage
– Scale-out : horizontal growth of systems with data locality and working sets ready
– Resilience : service resilience with fallbacks, default values, reuse of errors
10. Why can Caches be a pain?
...Because we forget the important things...
12. What can we do?
To avoid suffering with Caching we should:
● Decide on the type of cache to use
● Decide on an adoption path (How can we introduce
Caches in our projects?)
● Know our data
● Decide on the trade-off between reads and writes
● Define trade-offs for Resilience and Security
14. Different Kinds of Caches
● Local/Internal
● In-Process
● Near Cache
[Diagram: a process with an in-process cache; a process with a Near Cache, backed by a cluster of cache servers]
15. Different Kinds of Caches
● Remote/External
● Replicated
● Distributed (Partitioned)
[Diagram: a process connected to remote replicated/partitioned cache servers]
16. Different Kinds of Caches
● In-Process : for reads and writes, small/medium size; it does not scale because it is limited to a single process
● Near-Cache : better for reads, small/medium size; can scale in relation to the cluster (of which it is a local extension/expression)
● Replicated : data consistency for reads, small size, limited scalability
● Partitioned : for reads and writes, various sizes, and the ability to scale with fault tolerance
17. Different Kinds of Caches : DEV
● Small Cache Read-Only/Timed (In-Process)
● Memoization (In-Process/Near Cache)
● Cache (In-Process/Distributed-Partitioned)
● User Session/Working Set (In-Process/Distributed-Partitioned + … or NoSQL)
● Distributed Memory (IMDG)
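As a sketch of the Memoization case above, Python's standard library already offers an in-process cache; `expensive_square` is an illustrative function, not something from the deck:

```python
from functools import lru_cache

# In-process memoization of an expensive, pure function (the DEV-level
# "Memoization" cache above): each distinct argument is computed once
# and then served from memory.
@lru_cache(maxsize=256)
def expensive_square(n: int) -> int:
    # Imagine an expensive computation or a remote call here.
    return n * n

expensive_square(4)   # first call: a miss, the value is computed
expensive_square(4)   # second call: a hit, served from the cache
```

`expensive_square.cache_info()` exposes hits, misses and current size — the same counters discussed later under observability.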
18. Different Kinds of Caches : Problems
● Cache Stampede/Thundering Herd ( concurrent calls for a specific key that is not in the cache yet )
● Cache Fault Tolerance ( error handling for the caching subsystem, hierarchical caches, network-error management, ... )
● Cache Security ( privacy and security policies, regulatory compliance, technical solutions )
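A minimal sketch of one common defense against the stampede problem above: a per-key lock, so only one caller loads a missing key while concurrent callers wait (the class and names are illustrative, not a real library's API):

```python
import threading

class StampedeSafeCache:
    """One loader call per missing key; concurrent readers wait for it."""

    def __init__(self, loader):
        self._loader = loader
        self._data = {}
        self._locks = {}
        self._guard = threading.Lock()

    def get(self, key):
        if key in self._data:                # fast path: a hit
            return self._data[key]
        with self._guard:                    # one lock object per key
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                           # only one thread loads the key
            if key not in self._data:        # re-check after acquiring
                self._data[key] = self._loader(key)
            return self._data[key]
```

Even if ten threads ask for the same cold key at once, the loader runs only once; the other nine get the freshly cached value.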
20. Adopt a Cache
Adoption can follow two paths, depending on whether the:
● Cache is a First-Class Citizen in the Software Architecture (Caching Application Profiles)
● Cache is an evolution of a pre-existing system
The added value lies in the creation of a data model, to be evolved over time, born from the evidence of the first phase. [Michael Plöd]
21. Adopt a Cache
Cache observability, especially if distributed:
● Hit : the value sought is available
● Miss : the value sought is not available
● Cold/Hot : the cache is empty/populated
● Warm-Up : populating the cache
● Hit Ratio : Hits/(Hits + Misses)
● Hit Rate : Hits/second
● Items Size : number of elements in the cache
● Conc. Requests/s : number of concurrent requests per second
● ...many others!
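A minimal sketch of how the first counters above can be exposed by an in-process cache (the class and property names are illustrative):

```python
class InstrumentedCache:
    """Tiny cache exposing the observability counters listed above."""

    def __init__(self):
        self._data = {}
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._data:
            self.hits += 1               # Hit: the value is available
            return self._data[key]
        self.misses += 1                 # Miss: the value is not available
        return None

    def put(self, key, value):
        self._data[key] = value

    @property
    def hit_ratio(self):
        # Hit Ratio = Hits / (Hits + Misses)
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

    @property
    def items_size(self):
        # Items Size = number of elements in the cache
        return len(self._data)
```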
22. Adopt a Cache
Have Operations support : collaboration/synergy is important for network, metrics, deployment and emergency-management aspects
Have an alternative plan : prepare alternatives that keep the system/service online in the event of widespread errors or unavailability of the Caching System
Prepare a design where the Cache Provider is abstracted and its implementation appropriately hidden, to avoid dependencies that cannot be undone in the future!
24. Know the Data
Which data should we put in the cache?
● Most Used/Required
● Expensive to Calculate
● Expensive to Retrieve
● Common/Shareable Data
The best are: read-only, frequently used and/or
expensive to calculate
25. Know the Data
Which characteristics of the data should we consider?
● Data Type (better NOT DTOs, NOT Business Objects)
● Data Format (textual? binary? custom?)
● Life Time of the Data (when is it stale/fresh?)
● Data volumes per type
● Serialization/Deserialization issues
● Data Affinity
● Data Compression (...if you really have to...)
26. Know the Data
Which issues relate to the data (by area):
● Cache Access
● Cache Eviction
● Cache Invalidation
● Data Search/Data Collections Management
● Definition of Unique Keys
● Cache Concurrency Support
● Storage (RAM, SSD, … )
● Security/Regulations
27. Know the Data : Eviction
Forgetting is difficult for a cache: we have to find the trade-off between the
usefulness of the data and the size of the cache!
These concepts were born from the optimization of linear search (Self-Organizing Lists,
https://en.wikipedia.org/wiki/Self-organizing_list ):
● Move To Front (the ancestor of LRU)
● Transpose
● Counting (the ancestor of LFU)
28. Know the Data : Eviction
Frameworks, on a best-effort basis, essentially offer LRU, LFU and the ability to create custom policies.
● LRU: (recency) deletes the least recently used items.
● LFU: (frequency) deletes the least frequently used items.
Research is moving in the direction of adaptive systems using AI or statistical processing (of the access history); these can offer better results in the trade-off between memory, concurrency and speed!
● https://arxiv.org/pdf/1512.00727.pdf
● https://www.cs.bgu.ac.il/~tanm201/wiki.files/S1P2%20Adaptive%20Software
%20Cache%20Management.pdf
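A minimal sketch of the recency (LRU) policy, built on `collections.OrderedDict` (illustrative, not any framework's actual implementation; an LFU variant would track access counts instead):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU: evicts the least recently used key when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()       # insertion/usage order is the recency order

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)      # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the least recently used entry
```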
29. Know the Data : Eviction
When LRU and LFU are not enough, which element can improve the situation?
Time!
Applying time-based policies, or time windows that age the data or restrict its validity, helps achieve a better degree of adaptability ... but there is more!
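A minimal sketch of such a time-based policy: each entry carries its storage time and is treated as stale after a configurable TTL (names and the injectable `clock` are illustrative):

```python
import time

class TTLCache:
    """Entries older than ttl_seconds are treated as stale and dropped."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self._clock = clock              # injectable for testing
        self._data = {}                  # key -> (value, stored_at)

    def put(self, key, value):
        self._data[key] = (value, self._clock())

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if self._clock() - stored_at > self.ttl:
            del self._data[key]          # lazily evict the expired entry
            return None
        return value
```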
31. Know the Data : Problems
● Cache Thrashing
the pattern of data usage is such that the cache is useless
● Cold Cache
an empty cache takes time to be useful!
● Cache Security
like any system there are privacy and security issues (what about
GDPR?):
● Data anonymization
● Cache Penetration
● Cache Avalanche
33. Access Patterns
Reading data from or putting data into a cache also means choosing its role and the trade-off between reads and writes...
● Cache-Aside
● Cache-Through
● Write-Around
● Refresh-Ahead
● Write-Back ( Write-Behind)
34. Access Patterns
Cache-Aside : the application is responsible for reads and writes to the storage, as well as to the cache, which sits alongside the storage
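A minimal Cache-Aside sketch, assuming plain dictionaries play the roles of cache and storage (illustrative names):

```python
class CacheAside:
    """The application code itself coordinates cache and storage."""

    def __init__(self, cache, storage):
        self.cache = cache
        self.storage = storage

    def read(self, key):
        value = self.cache.get(key)          # look in the cache first
        if value is None:
            value = self.storage.get(key)    # fall back to the storage
            if value is not None:
                self.cache[key] = value      # fill the cache on a miss
        return value

    def write(self, key, value):
        self.storage[key] = value            # write to the storage...
        self.cache.pop(key, None)            # ...and invalidate the cached copy
```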
35. Access Patterns
Related to Cache-Aside are:
● Look-Aside : the value is searched first in the cache and then in the storage
● Demand-Fill : on a MISS, the value is not only returned from the storage but also placed in the cache
Cache-Aside generally provides both LOOK-ASIDE and DEMAND-FILL, but it is not mandatory that both be present: in a Pub/Sub system, cache and storage can be subscribers of the same publisher, yet they materialize the data for two different reasons.
36. Access Patterns
Cache-Through : Write-Through/Read-Through
The application treats the cache as if it were the main storage; reads and writes take place through the cache and are propagated synchronously to the storage
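A minimal Cache-Through sketch, with a dictionary standing in for the main storage (illustrative names):

```python
class CacheThrough:
    """The application talks only to the cache; the cache keeps the
    storage in sync synchronously (Read-Through / Write-Through)."""

    def __init__(self, storage):
        self.storage = storage
        self._data = {}

    def get(self, key):
        if key not in self._data:            # read-through on a miss
            self._data[key] = self.storage.get(key)
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value              # write-through: cache and
        self.storage[key] = value            # storage updated together
```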
37. Access Patterns
Write-Around : the application reads from the cache, but writes bypass it. New data is written directly to the storage: the cache is filled with data only on reads. (Useful when there are many writes and few reads.)
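A minimal Write-Around sketch (illustrative names; a dictionary stands in for the storage):

```python
class WriteAround:
    """Writes go straight to storage; the cache is filled only on reads."""

    def __init__(self, storage):
        self.storage = storage
        self._data = {}

    def write(self, key, value):
        self.storage[key] = value        # bypass the cache on writes
        self._data.pop(key, None)        # drop any stale cached copy

    def read(self, key):
        if key not in self._data:        # the read path fills the cache
            self._data[key] = self.storage.get(key)
        return self._data[key]
```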
38. Access Patterns
Refresh-Ahead : the cache is updated asynchronously, possibly on a schedule, for recently accessed elements, before they expire
39. Access Patterns
Write-Back (Write-Behind) : the application writes to the cache, but propagation to the storage takes place asynchronously (generally with a configured delay; it assumes a queueing system; a trade-off between high throughput and data-consistency problems)
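A minimal Write-Behind sketch, using a queue and a background thread for the asynchronous propagation (illustrative names; a real system would add batching, delays, retries and error handling):

```python
import queue
import threading

class WriteBehindCache:
    """Writes hit the cache immediately; a background worker drains a
    queue and propagates the writes to storage asynchronously."""

    def __init__(self, storage):
        self.storage = storage
        self._data = {}
        self._queue = queue.Queue()
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def put(self, key, value):
        self._data[key] = value          # synchronous cache write
        self._queue.put((key, value))    # deferred storage write

    def get(self, key):
        return self._data.get(key)

    def _drain(self):
        while True:
            key, value = self._queue.get()
            self.storage[key] = value    # asynchronous propagation
            self._queue.task_done()

    def flush(self):
        self._queue.join()               # wait until storage has caught up
```

The `flush()` helper makes the consistency trade-off visible: between `put()` and `flush()`, cache and storage may disagree.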
40. Access Patterns : DEV
● Cache Aside
● Cache Through
● Cache Selective Bypass
● Cache Massive Load
● Cache Full Cleaning (+ Warmer)
42. The Club Rules
1) Don’t speak about cache
2) Don’t speak about cache: if you do it, made it not at the expense of
your services
3) Define the price you are willing to pay
4) If you change the rules of the game you must be aware of it
5) Design in a simple way: start local, works on definable models
6) Measure, measure, measure: the cache gives you data and hints
7) Cache tuning takes time and changes over time
8) If you are in the Club ‘cause Microservices ... You have to fight!
44. Microservices & Caching
Microservices amplify the importance of Caching Systems; among the characteristics that explain this, worth mentioning:
● Each microservice has its own data, and there are many microservices
● Microservices need to communicate!
● Different microservices have different needs
● Caching becomes part of the Resilience policies
● Caching to support a different persistence vision
● Microservices involve a more complex and powerful infrastructure
46. Microservices
Microservices, from the infrastructural perspective, have highlighted the need for a layer of mediation and coordination of communications, today called a Service Mesh.
Among the patterns deriving from the Service-Mesh vision, the Sidecar is particularly relevant: a container that assists a given Microservice.
47. Microservices
Service Meshes define, for Caching Systems, new possible topologies:
1) In-Process Cache for Microservice
2) Remote Cache (partitioned) external to the Service-Mesh
3) Remote Cache (partitioned) with Cache Client inside the Service-Mesh
(Sidecar)
4) Remote Cache (partitioned) with Caching System inside the Service-Mesh
( using Operators/Agent/Sidecar)
48. Microservices
In these scenarios the concepts of Eventual Consistency and Idempotency are strengthened. The importance of Streaming and CDC (Change Data Capture) systems emerges in their collaboration with the Caching System, for important aspects including:
● The persistence of Save Points / Critical Operations
● An alternative to 2PC transactions
● Data propagation to suitable listener subsystems
https://debezium.io/blog/2018/12/05/automating-cache-invalidation-with-change-data-capture/
https://medium.com/trabe/cache-invalidation-using-mqtt-e3bd8f6c2cf5