The seminar presentation discusses dynamic cache management techniques. It begins with an introduction to cache memory and how stored data is transferred to the CPU, then describes the mapping functions and the idea of dynamic cache management. The rest of the presentation covers dynamic techniques for managing the L0-cache: the simple method, the static method, the dynamic confidence estimation method (and its restrictive variant), and the dynamic distance estimation method. For each technique it explains how the machine decides whether to fetch instructions from the L0-cache or the I-cache, based on factors such as branch prediction confidence and accuracy.
1. A
Seminar Presentation On
“Dynamic Cache Management
Technique”
Presented By:
Ajay Singh Lamba
(IT, Final Year)
2. Content
Introduction to cache memory
How stored data is transferred to the CPU
Mapping functions
Dynamic Cache Management
Dynamic Techniques For L0-cache
Management
3. Introduction to cache memory
A cache, in computer terms, is a place to store
information that is faster than the place where the
information is usually stored.
Cache memory is fast memory that holds the most
recently accessed data.
Only frequently accessed data stays in the cache, which
allows the CPU to access it more quickly.
It is placed on the processor chip, which allows it to 'talk'
to the processor directly at a much higher speed than
standard RAM.
5. Mapping functions
Since M >> C (there are far more memory blocks than
cache lines), how are blocks mapped to specific lines
in the cache?
1. Direct mapping
2. Associative mapping
3. Set associative mapping
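As a rough illustration (not from the slides; the function names and parameters are ours), the placement rules behind these mapping functions can be sketched in Python. Direct mapping fixes each block to exactly one line; set-associative mapping fixes each block to one set of lines; fully associative mapping is the special case of a single set covering the whole cache.

```python
def direct_map(block_number, num_lines):
    """Direct mapping: memory block i always maps to cache line
    i mod num_lines, so two blocks that share an index conflict."""
    return block_number % num_lines


def set_associative_map(block_number, num_sets):
    """Set-associative mapping: block i may occupy any way within
    set i mod num_sets. With num_sets == 1 this degenerates to
    fully associative mapping (any block in any line)."""
    return block_number % num_sets
```

For example, with 128 cache lines, blocks 1 and 129 collide under direct mapping because both map to line 1.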
6. Dynamic Cache Management
It is a resizing strategy for cache memory.
Dynamic caching allows resizing both across and
within an application's execution.
The basic idea is that only the most frequently executed
portion of the code should be stored in the L0-cache
9. SIMPLE METHOD
If a branch is predicted correctly, the machine will access
the L0-cache.
If a branch is mispredicted, the machine will start fetching
the instructions from the correct address by accessing the
I-cache.
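The simple method's decision rule is just the outcome of the last branch prediction; a minimal sketch (our naming, not from the slides):

```python
def simple_fetch_source(prediction_correct):
    """Simple method: fetch the next basic block from the L0-cache
    when the branch was predicted correctly; on a misprediction,
    fetch from the I-cache at the correct address."""
    return "L0-cache" if prediction_correct else "I-cache"
```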
10. STATIC METHOD
If a ‘high confidence’ branch was predicted incorrectly,
the I-cache is accessed for the subsequent basic blocks.
If more than n 'low confidence' branches have been
decoded in a row, the I-cache is accessed. The L0-cache
is therefore bypassed when either of the two conditions
is satisfied.
In any other case the machine will access the L0-cache.
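The two bypass conditions above can be sketched as a pass over a branch trace (a sketch under our own naming; `events` and the tuple encoding are illustrative, not from the slides):

```python
def static_sources(events, n):
    """Static method. events: list of (confidence, predicted_correctly)
    tuples, confidence in {"high", "low"}. Returns the fetch source
    chosen after each branch: I-cache if a 'high confidence' branch
    was mispredicted or more than n 'low confidence' branches were
    decoded in a row; L0-cache in any other case."""
    sources = []
    low_run = 0  # successive low-confidence branches seen so far
    for confidence, correct in events:
        low_run = low_run + 1 if confidence == "low" else 0
        if (confidence == "high" and not correct) or low_run > n:
            sources.append("I-cache")
        else:
            sources.append("L0-cache")
    return sources
```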
11. DYNAMIC CONFIDENCE ESTIMATION
METHOD
It is a dynamic version of the static method.
The I-cache is accessed if
1. A 'high confidence' branch is mispredicted.
2. More than n successive 'low confidence' branches
are encountered.
Dynamic estimation is more accurate in characterizing a
branch and, hence, in regulating access to the L0-cache.
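One common way to estimate confidence dynamically is a per-branch saturating counter; the sketch below assumes that design (the class, threshold, and counter widths are our illustration, not something the slides specify):

```python
class ConfidenceEstimator:
    """Per-branch saturating counter: a correct prediction increments
    the counter, a misprediction resets it. A branch is rated 'high
    confidence' once its counter reaches the threshold."""

    def __init__(self, threshold=4, max_count=7):
        self.threshold = threshold
        self.max_count = max_count
        self.counters = {}  # branch address -> saturating counter

    def confidence(self, branch_pc):
        count = self.counters.get(branch_pc, 0)
        return "high" if count >= self.threshold else "low"

    def update(self, branch_pc, predicted_correctly):
        if predicted_correctly:
            count = self.counters.get(branch_pc, 0)
            self.counters[branch_pc] = min(count + 1, self.max_count)
        else:
            self.counters[branch_pc] = 0  # reset on misprediction
```

The estimator's output feeds the same two bypass conditions as the static method, but the 'high'/'low' rating now tracks each branch's recent behavior instead of being fixed in advance.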
12. RESTRICTIVE DYNAMIC CONFIDENCE
ESTIMATION METHOD
The restrictive dynamic scheme is more selective: only
the really important basic blocks are selected for the
L0-cache.
The L0-cache is accessed only if a ‘high confidence’
branch is predicted correctly. The I-cache is accessed in
any other case.
This method selects some of the most frequently
executed basic blocks, yet it misses some others.
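The restrictive rule inverts the default: the L0-cache must be earned rather than bypassed. A minimal sketch (our naming):

```python
def restrictive_fetch_source(confidence, predicted_correctly):
    """Restrictive dynamic scheme: access the L0-cache only when a
    'high confidence' branch is predicted correctly; access the
    I-cache in any other case."""
    if confidence == "high" and predicted_correctly:
        return "L0-cache"
    return "I-cache"
```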
13. Dynamic Distance Estimation Method
All n branches after a mispredicted branch are tagged as
'low confidence'; all others as 'high confidence'.
The basic blocks after a 'low confidence' branch are
fetched from the I-cache.
The net effect is that a branch misprediction causes a
series of fetches from the I-cache.
A counter is used to measure the distance of a branch
from the previous mispredicted branch.
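The counter-based rule above can be sketched as a pass over a misprediction trace (a sketch; the function name and the `True`/`False` trace encoding are ours):

```python
def distance_sources(mispredict_trace, n):
    """Dynamic distance estimation: a counter tracks the distance
    from the previous mispredicted branch. Branches within distance
    n are tagged 'low confidence', so the following basic blocks are
    fetched from the I-cache; beyond distance n, the L0-cache is
    used again."""
    distance = n + 1  # assume we start far from any misprediction
    sources = []
    for mispredicted in mispredict_trace:
        if mispredicted:
            distance = 0  # reset the distance counter
        else:
            distance += 1
        sources.append("I-cache" if distance <= n else "L0-cache")
    return sources
```

The test below shows the "series of fetches from the I-cache" that a single misprediction triggers.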