Memory Sub-System
Memory Subsystem
• Memory Hierarchy
• Types of memory
• Memory organization
• Memory Hierarchy Design
• Cache
Memory Hierarchy
• Registers
– In CPU
• Internal or Main memory
– May include one or more levels of cache
– “RAM”
• External memory
– Backing store
Memory Hierarchy - Diagram
Internal Memory Types
(Memory type – category; erasure; write mechanism; volatility)
• Random-access memory (RAM) – read-write memory; erased electrically, byte-level; written electrically; volatile
• Read-only memory (ROM) – read-only memory; erasure not possible; written by masks; nonvolatile
• Programmable ROM (PROM) – read-only memory; erasure not possible; written electrically; nonvolatile
• Erasable PROM (EPROM) – read-mostly memory; erased by UV light, chip-level; written electrically; nonvolatile
• Electrically Erasable PROM (EEPROM) – read-mostly memory; erased electrically, byte-level; written electrically; nonvolatile
• Flash memory – read-mostly memory; erased electrically, block-level; written electrically; nonvolatile
External Memory Types
• HDD (Hard Disk Drive)
– Magnetic disk(s)
• SSD (Solid-State Drive)
• Optical
– CD-ROM
– CD-Recordable (CD-R)
– CD-R/W
– DVD
• Magnetic Tape
Random Access Memory (RAM)
• Misnamed as all semiconductor memory is random
access
• Read/Write
• Volatile
• Temporary storage
• Static or dynamic
Types of RAM
• Dynamic RAM (DRAM) – behaves like a leaky capacitor: data is stored by
charging the memory cells to their maximum values. The charge slowly leaks
away and would eventually become too low to represent valid data; before this
happens, refresh circuitry reads the contents of the DRAM and rewrites the
data to its original locations, restoring the memory cells to their full charge
• Static RAM (SRAM) – is more like a register: once the data has been written,
it stays valid and does not have to be refreshed. SRAM is faster than DRAM
but also more expensive; cache memory in PCs is built from SRAM.
Dynamic RAM
• Bits stored as charge in capacitors
– Charges leak
– Need refreshing even when powered
• Simpler construction
• Smaller per bit
– Less expensive
• Need refresh circuits
• Slower
• Used for main memory in computing systems
• Essentially analogue
– Level of charge determines value
Dynamic RAM Structure
DRAM Operation
• Address line active when bit read or written
– Transistor switch closed (current flows)
• Write
– Voltage to bit line
• High for 1 low for 0
– Then signal address line
• Transfers charge to capacitor
• Read
– Address line selected
• transistor turns on
– Charge from capacitor fed via bit line to sense amplifier
• Compares with reference value to determine 0 or 1
– Capacitor charge must be restored
DRAM Refreshing
• Refresh circuit included on chip
• Disable chip
• Count through rows
• Read & Write back
• Takes time
• Slows down apparent performance
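In spirit, the refresh sequence is a loop over the rows (purely illustrative Python; in reality this is dedicated on-chip hardware, and read_row/write_row are assumed stand-ins for the row-buffer operations):

def refresh_all_rows(num_rows, read_row, write_row):
    # Step through every row: read it out and write it back,
    # restoring full charge before leakage corrupts the stored bits.
    for row in range(num_rows):
        write_row(row, read_row(row))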
Static RAM
• Bits stored as on/off switches
• No charges to leak
• No refreshing needed when powered
• More complex construction
• Larger per bit
– More expensive
• Does not need refresh circuits
• Faster
– Cache
• Digital
– Uses flip-flops
Static RAM Structure
Static RAM Operation
• Transistor arrangement gives a stable logic state
• State 1
– C1 high, C2 low
– T1, T4 off; T2, T3 on
• State 0
– C2 high, C1 low
– T2, T3 off; T1, T4 on
• Address line transistors T5 and T6 act as switches
• Write – apply the value to bit line B and its complement to line B̄
• Read – the value is on line B
SRAM v DRAM
• Both volatile
– Power needed to preserve data
• Dynamic cell
– Simpler to build, smaller
– More dense
– Less expensive
– Needs refresh
– Larger memory units
• Static
– Faster
– Cache
Read Only Memory (ROM)
• Permanent storage
– Nonvolatile
• Microprogramming
• Library subroutines (code) and constant data
• Systems programs (BIOS for PC or entire
application + OS for certain embedded systems)
Types of ROM
• Written during manufacture
– Very expensive for small runs
• Programmable (once)
– PROM
– Needs special equipment to program
• Read “mostly”
– Erasable Programmable (EPROM)
• Erased by UV
– Electrically Erasable (EEPROM)
• Takes much longer to write than read
– Flash memory
• Erase whole memory electrically
Internal linear organization
• 8X2 ROM chip
• As the number of locations increases, the size of the address decoder needed becomes very large
• Multiple dimensions of decoding can be used to overcome this problem
Internal two-dimensional organization
• High order address bits (A2A1) select one of the rows
• The low order address bit selects one of the two locations in
the row
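To make the two-dimensional decoding concrete, here is a small Python sketch of an 8X2 ROM organized as four rows of two locations; the ROM contents are made up for illustration:

# 8x2 ROM laid out as 4 rows of 2 two-bit locations (contents are arbitrary)
ROM_ROWS = [
    [0b01, 0b10],
    [0b11, 0b00],
    [0b10, 0b10],
    [0b01, 0b11],
]

def rom_read(addr):
    # addr is 3 bits: A2 A1 A0
    row = (addr >> 1) & 0b11   # high-order bits A2A1 select one of the four rows
    col = addr & 0b1           # low-order bit A0 selects one of the two locations in the row
    return ROM_ROWS[row][col]

print(rom_read(0b101))   # row 2, location 1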
Memory Subsystems Organization (1)
• Two or more memory chips can be combined to create memory with more bits per location (two 8X2 chips can create an 8X4 memory)
Memory Subsystems Organization (2)
• Two or more memory chips can be combined to create more locations (two 8X2 chips can create a 16X2 memory)
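A minimal Python sketch of both combinations, modelling each 8X2 chip as a list of 2-bit values (an assumed model, not a circuit description):

chip_a = [0] * 8   # one 8x2 chip: 8 locations, 2 bits each
chip_b = [0] * 8   # a second identical chip

def read_8x4(addr):
    # More bits per location: both chips see the same 3-bit address;
    # chip_b supplies the high 2 bits and chip_a the low 2 bits.
    return (chip_b[addr] << 2) | chip_a[addr]

def read_16x2(addr):
    # More locations: the extra high-order address bit acts as a chip select.
    chip = chip_b if addr & 0b1000 else chip_a
    return chip[addr & 0b0111]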
Memory Hierarchy Design (1)
• Microprocessor performance improved about 35% per year until 1987 and about 55% per year since then
• The figure plots CPU performance against memory access time improvements over the years
– Clearly there is a processor-memory performance gap that computer architects must take care of
Memory Hierarchy Design (2)
• It is a tradeoff between size, speed and cost and exploits the principle
of locality.
• Register
– Fastest memory element; but small storage; very expensive
• Cache
– Fast and small compared to main memory; acts as a buffer between the CPU
and main memory: it holds the most recently used memory locations (both
addresses and contents are recorded here)
• Main memory is the RAM of the system
• Disk storage - HDD
Memory Hierarchy Design (3)
• Comparison between different types of memory (left to right: larger, slower, cheaper)
– Register: size 32 - 256 B; speed 1 - 2 ns
– Cache: size 32 KB - 4 MB; speed 2 - 4 ns; cost $20/MB
– Main memory: size 1000 MB; speed 60 ns; cost $0.2/MB
– HDD: size 200 GB; speed 8 ms; cost $0.001/MB
Memory Hierarchy Design (4)
• Design questions about any level of the memory
hierarchy:
– Where can a block be placed in the upper level?
• BLOCK PLACEMENT
– How is a block found if it is in the upper level?
• BLOCK IDENTIFICATION
– Which block should be replaced on a miss?
• BLOCK REPLACEMENT
– What happens on a write?
• WRITE STRATEGY
Cache (1)
• The cache is the first level of the memory hierarchy encountered once the address leaves the CPU
– Since the principle of locality applies, and taking
advantage of locality to improve performance is so
popular, the term cache is now applied whenever
buffering is employed to reuse commonly occurring
items
• We will study caches by trying to answer the four
questions for the first level of the memory hierarchy
Cache (2)
• Every address reference goes first to the cache;
– if the desired address is not here, then we have a cache miss;
• The contents are fetched from main memory into the indicated CPU register and the
content is also saved into the cache memory
– If the desired data is in the cache, then we have a cache hit
• The desired data is brought from the cache, at very high speed (low access time)
• Most software exhibits temporal locality of access, meaning that the same
address is likely to be used again soon; if so, the address will be found in
the cache
• Transfers between main memory and cache occur at the granularity of cache
lines or cache blocks, typically 32 or 64 bytes (rather than individual bytes
or processor words). Burst transfers of this kind receive hardware support and
exploit spatial locality of access to the cache (future accesses are often to
addresses near the previous one)
Cache Organization
Cache/Main Memory Structure
Where can a block be placed in Cache? (1)
• Our cache has eight block frames and the main
memory has 32 blocks
Where can a block be placed in Cache? (2)
• Direct mapped Cache
– Each block has only one place where it can appear in the cache
– (Block Address) MOD (Number of blocks in cache)
• Fully associative Cache
– A block can be placed anywhere in the cache
• Set associative Cache
– A block can be placed in a restricted set of places in the cache
– A set is a group of blocks in the cache
– (Block Address) MOD (Number of sets in the cache)
• If there are n blocks in each set, the placement is said to be n-way set associative
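The three placement rules reduce to modulo arithmetic on the block address. A short Python sketch using the figures from the previous slide (8 block frames; 4 sets when 2-way set associative), matching the editor's note at the end:

def direct_mapped_frame(block_addr, num_frames):
    return block_addr % num_frames

def set_index(block_addr, num_sets):
    return block_addr % num_sets

print(direct_mapped_frame(12, 8))   # 4 -> block 12 may only go in frame 4
print(set_index(12, 4))             # 0 -> block 12 may go in either frame of set 0
# Fully associative: block 12 may go into any of the 8 frames.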
How is a Block Found in the Cache?
• Caches have an address tag on each block frame that gives the block address. The
tag is checked against the address coming from CPU
– All tags are searched in parallel since speed is critical
– A valid bit is appended to every tag to say whether this entry contains a valid address or
not
• Address fields:
– Block address
• Tag – compared against the tags stored in the cache to detect a hit
• Index – selects the set
– Block offset – selects the desired data from the block
• Set associative cache
– A larger index means more sets, each with fewer blocks per set
– With a smaller index, the associativity increases
• Fully associative cache – there is no index field
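A hedged sketch of the tag/index/offset split described above, with the cache geometry passed in as parameters (field widths depend on the particular cache; block size and set count are assumed to be powers of two):

def split_address(addr, block_size, num_sets):
    offset_bits = block_size.bit_length() - 1
    index_bits = num_sets.bit_length() - 1
    offset = addr & (block_size - 1)                  # selects the word/byte within the block
    index = (addr >> offset_bits) & (num_sets - 1)    # selects the set
    tag = addr >> (offset_bits + index_bits)          # compared against the stored tags
    return tag, index, offset

For a fully associative cache num_sets is 1, so the index field vanishes, as the last bullet notes.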
Which Block should be Replaced on a Cache Miss?
• When a miss occurs, the cache controller must select a
block to be replaced with the desired data
– Benefit of direct mapping is that the hardware decision is much
simplified
• Two primary strategies for full and set associative caches
– Random – candidate blocks are randomly selected
• Some systems generate pseudo random block numbers, to get reproducible
behavior useful for debugging
– LRU (Least Recently Used) – to reduce the chance of discarding information
that will soon be needed again, the block replaced is the one that has been
unused for the longest time
• Accesses to blocks are recorded to be able to implement LRU
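A minimal sketch of LRU bookkeeping for a single set (illustrative only; hardware keeps a few recency bits per block rather than a Python OrderedDict, and fetch_from_memory is an assumed callback that returns the block on a miss):

from collections import OrderedDict

class LRUSet:
    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.blocks = OrderedDict()            # tag -> data, least recently used first

    def access(self, tag, fetch_from_memory):
        if tag in self.blocks:                 # hit: mark block as most recently used
            self.blocks.move_to_end(tag)
            return self.blocks[tag]
        if len(self.blocks) >= self.num_ways:  # miss with full set: evict the LRU block
            self.blocks.popitem(last=False)
        self.blocks[tag] = fetch_from_memory(tag)
        return self.blocks[tag]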
What Happens on a Write?
• Two basic options when writing to the cache:
– Write through – the information is written to both the block in
the cache and the block in the lower-level memory
– Write back – the information is written only to the cache
• The modified block of cache is written back into the lower-level memory
only when it is replaced
• To reduce the frequency of writing back blocks on
replacement, an implementation feature called dirty bit is
commonly used.
– This bit indicates whether a block is dirty (has been modified since
loaded) or clean (not modified). If clean, no write back is involved
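A rough sketch of the two policies and the dirty bit (not a full cache model; lower_level is an assumed dict standing in for the lower-level memory, keyed by tag for simplicity):

class CacheLine:
    def __init__(self):
        self.valid = self.dirty = False
        self.tag = self.data = None

def write(line, tag, value, lower_level, write_through):
    line.valid, line.tag, line.data = True, tag, value
    if write_through:
        lower_level[tag] = value           # write through: update both levels immediately
    else:
        line.dirty = True                  # write back: only mark the cache line dirty

def evict(line, lower_level):
    if line.valid and line.dirty:          # only dirty lines are written back on replacement
        lower_level[line.tag] = line.data
    line.valid = line.dirty = False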
Alpha Processor Cache Example
1 – the address comes from the CPU and is divided into a 29-bit block
address and a 5-bit offset; the block address is further divided into a
21-bit tag and an 8-bit index
2 – the cache index selects the tag to be tested to see if the
desired block is in the cache. The size of the index depends
on the cache size, block size and the set associativity
3 – after reading the tag from the cache, it is compared with
the tag from the address from the CPU. The valid bit must be
set, otherwise, the result of comparison is ignored.
4 – assuming the tag does match, the final step is to
signal the CPU to load the data from the cache.
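Given the 29-bit block address plus 5-bit offset above (and the editor's note that this is an 8 KB direct-mapped cache with 32-byte blocks), step 1 amounts to the following field split, sketched with the widths taken from the slide:

OFFSET_BITS, INDEX_BITS = 5, 8                    # 32-byte blocks, 256 lines -> 21-bit tag

def split_alpha_address(addr):
    offset = addr & 0x1F                          # 5-bit block offset
    index = (addr >> OFFSET_BITS) & 0xFF          # 8-bit index selects one of 256 lines
    tag = addr >> (OFFSET_BITS + INDEX_BITS)      # 21-bit tag
    return tag, index, offset

A hit (steps 2-4) requires the tag stored at that index to equal tag and the line's valid bit to be set.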
Detailed Direct Mapping Example
• Cache of 64kByte
• Cache block of 4 bytes
– i.e. cache is 16K (2^14) lines of 4 bytes
• 16 MByte main memory
– 24-bit address (2^24 = 16M)
• Address is in two parts
– Least Significant w bits identify unique word
– Most Significant s bits specify one memory block
– The MSBs are split into a cache line field r and a tag of s-r bits (most significant)
Direct Mapping Example - Address Structure
Tag (s-r): 8 bits | Line/Index (r): 14 bits | Word (w): 2 bits
• 24 bit address
– 2 bit word identifier (4 byte block)
– 22 bit block identifier
• 8 bit tag (=22-14)
• 14 bit slot or line
• No two blocks in the same line have the same Tag field
• Check contents of cache by finding line and checking Tag
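A quick check of the 8/14/2 split in Python (a sketch mirroring the address structure above):

WORD_BITS, LINE_BITS = 2, 14              # 4-byte blocks, 16K lines, 24-bit address

def split_direct(addr):
    word = addr & 0x3
    line = (addr >> WORD_BITS) & 0x3FFF
    tag = addr >> (WORD_BITS + LINE_BITS)
    return tag, line, word

print([hex(f) for f in split_direct(0xFFFFFC)])   # ['0xff', '0x3fff', '0x0']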
Direct Mapping Cache Organization
Mapping function: i = j mod m
(i = cache line number, j = main memory block number, m = number of lines in the cache)
Direct Mapping Example
Detailed Fully Associative Mapping Example
• Cache of 64kByte
– Cache block of 4 bytes
– i.e. cache is 16K (2^14) lines of 4 bytes
• 16 MByte main memory
– 24-bit address (2^24 = 16M)
• A main memory block can load into any line of cache
• Memory address is interpreted as tag and word
– Tag uniquely identifies block of memory
– Every line’s tag is examined for a match
• Cache searching gets expensive
Fully Associative Mapping Example - Address Structure
Tag: 22 bits | Word: 2 bits
• 22 bit tag stored with each 32 bit block of data
• Compare tag field with tag entry in cache to check for hit
• Least significant 2 bits of address identify which word is
required from 32 bit data block
• e.g.
– Address Tag Data Cache line
– FFFFFC 3FFFFF 0x24682468 3FFF
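Here the tag is just the address with the 2-bit word field stripped off; a one-line sketch confirms the example above:

def assoc_tag(addr):
    return addr >> 2               # 22-bit tag = 24-bit address without the word bits

print(hex(assoc_tag(0xFFFFFC)))    # 0x3fffff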
Fully Associative Cache Organization
Associative Mapping Example
Detailed Set Associative Mapping Example
• Cache of 64kByte
– Cache block of 4 bytes
– i.e. cache is 16K (2^14) lines of 4 bytes
• 16 MByte main memory
– 24-bit address (2^24 = 16M)
• Cache is divided into a number of sets (v)
– Each set contains a number of lines (k)
• A given block maps to any line in a given set
– e.g. Block B can be in any line of set i
• Mapping function
– i = j mod v (where the total number of lines in the cache is m = v * k)
• j – main memory block number
• i – cache set number
• e.g. 2 lines per set
– 2 way associative mapping (k = 2)
– A given block can be in one of 2 lines in only one set
Example Set Associative Mapping - Address Structure
• Use set field to determine cache set to look in
• Compare tag field to see if we have a hit
• Address fields: Tag: 9 bits | Set (Index): 13 bits | Word: 2 bits
• e.g.
– Address Tag Data Set
– 1FF 7FFC 1FF 12345678 1FFF
– 001 7FFC 001 11223344 1FFF
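Checking the two example addresses with the 9/13/2 split (a sketch; written out in full, the addresses are 0xFFFFFC and 0x00FFFC):

WORD_BITS, SET_BITS = 2, 13

def split_set_assoc(addr):
    word = addr & 0x3
    set_index = (addr >> WORD_BITS) & 0x1FFF
    tag = addr >> (WORD_BITS + SET_BITS)
    return tag, set_index, word

print([hex(f) for f in split_set_assoc(0xFFFFFC)])   # ['0x1ff', '0x1fff', '0x0']
print([hex(f) for f in split_set_assoc(0x00FFFC)])   # ['0x1', '0x1fff', '0x0']
# Both blocks map to set 1FFF but carry different tags, so they can coexist in a 2-way set.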
K-Way Set Associative Cache Organization
Two Way Set Associative Mapping Example
References
• “Computer Architecture – A Quantitative Approach”, John L. Hennessy & David A. Patterson, ISBN 1-55860-329-8
• “Computer Systems Organization & Architecture”, John D. Carpinelli, ISBN 0-201-61253-4
• “Computer Organization and Architecture”, William Stallings, 8th Edition
Editor's Notes
  1. Real caches contain hundreds of block frames and real memories contain millions of blocks; these numbers are chosen for simplicity. Assume the cache is empty and the address accessed by the processor falls within block number 12 of main memory. From a block-placement point of view there are then three types of caches: Fully associative – block 12 from the lower-level memory can go into any of the 8 block frames of the cache. Direct mapped – block 12 can go only into block frame 4 (12 mod 8). Set associative – block 12 can go anywhere into set 0 (12 mod 4, if our cache has four sets); with two blocks per set, that means block 12 can go into block frame 0 or block frame 1 of the cache.
  2. The comparison can be made on the full address, but there is no need because of the following: Checking the index would be redundant, since it was used to select the set to be checked. For instance, an address stored in set 0, must have 0 in the index field or it couldn’t have been stored in set 0 The offset is unnecessary in the comparison since the entire block is present or not in the cache, so all the block offsets should match.
  3. 8 KB cache, direct mapped, with 32-byte blocks. 1 – the address comes from the CPU and is divided into a 29-bit block address and a 5-bit offset; the block address is further divided into a 21-bit tag and an 8-bit index. 2 – the cache index selects the tag to be tested to see if the desired block is in the cache; the size of the index depends on the cache size (8 KB in our case), the block size (32-byte blocks) and the set associativity (direct mapped = 1). 3 – after reading the tag from the cache, it is compared with the tag from the CPU address; the valid bit must be set, otherwise the result of the comparison is ignored. 4 – assuming the tags match, the final step is to signal the CPU to load the data from the cache. The Alpha processor uses a write-through technique: the first three steps are the same, and on a match the processor writes to both the cache and the write buffer. The write buffer batches multiple writes so that writing is more efficient.
  4. i = cache line number; m = number of lines in the cache; j = main memory block number