Part III
Storage Management
Chapter 9: Virtual Memory
Observations
- The entire program does not have to be in memory, because
  - error-handling code is not frequently executed
  - arrays, tables, and large data structures are often allocated more memory than necessary, and many parts are not used at the same time
  - some options and cases may be used rarely
- If these parts are not needed, why must they be in memory?
Benefits
- Program length is not restricted to real memory size. That is, the virtual address space can be larger than the physical address space.
- More programs can run, because the space originally allocated to the unloaded parts can be used by other programs.
- Load/swap I/O time is saved, because we do not have to load/swap a complete program.
Virtual Memory
- Virtual memory is the separation of user logical memory from physical memory.
- This permits an extremely large virtual memory, which makes programming large systems much easier.
- Because pages can be shared, this further improves performance and saves time.
- Virtual memory is commonly implemented with demand paging, demand segmentation, or demand paging plus segmentation.
Demand Paging
[Figure: process A's virtual memory, its page table, physical memory, and the page frame table. Each page table entry carries a valid/invalid (present/absent) bit: page 0 i, page 1 v -> frame 7, page 2 i, page 3 i, page 4 v -> frame 1, page 5 i, page 6 i. The page frame table records that frame 1 holds proc A, page 4 and frame 7 holds proc A, page 1.]
Address Translation
- The translation from a virtual address to a physical address is the same as what a paging system does.
- However, there is an additional check. If the page is not in physical memory (i.e., the valid bit is not set), a page fault (i.e., a trap) occurs. A small sketch of this check follows below.
- If a page fault occurs, we need to do the following:
  - Find an unused page frame. If no such page frame exists, a victim must be found and evicted.
  - Write the old page out and load the new page in.
  - Update both page tables.
  - Resume the interrupted instruction.
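To make the extra check concrete, here is a minimal sketch (not taken from the slides) of a translation routine that does the ordinary page-table lookup and raises a trap when the valid bit is not set. The dictionary-based page table, the 4 KB page size, and the PageFault exception are illustrative assumptions.

class PageFault(Exception):
    pass

PAGE_SIZE = 4096   # assuming 4 KB pages, as in the later slide on page table size

def translate(virtual_addr, page_table):
    # same lookup as an ordinary paging system...
    vpage, offset = divmod(virtual_addr, PAGE_SIZE)
    entry = page_table.get(vpage)            # hypothetical entry: vpage -> (valid, frame)
    if entry is None or not entry[0]:
        raise PageFault(vpage)               # ...plus the extra valid-bit check (trap)
    valid, frame = entry
    return frame * PAGE_SIZE + offset

page_table = {1: (True, 7), 4: (True, 1)}    # the two resident pages of process A above
print(translate(1 * PAGE_SIZE + 100, page_table))   # page 1 -> frame 7: 7*4096 + 100 = 28772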
Details of Handling a Page Fault

Trap to the OS                                      // a context switch occurs
Make sure it is a page fault
If the address is not a legal one, then address error; return
Find an unused page frame                           // page replacement algorithm
Write the victim page back to disk                  // page out
Load the new page from disk                         // page in
Update both page tables                             // two pages are involved!
Resume the execution of the interrupted instruction

(A small sketch of these steps follows below.)
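Below is a minimal, self-contained sketch of these steps, assuming FIFO replacement and omitting the actual disk I/O; the names (PTE, DemandPagedMemory, load_order) are invented for illustration, not taken from the slides.

from dataclasses import dataclass

@dataclass
class PTE:                     # one page table entry
    valid: bool = False
    frame: int = -1
    dirty: bool = False

class DemandPagedMemory:
    def __init__(self, n_pages, n_frames):
        self.page_table = [PTE() for _ in range(n_pages)]
        self.frame_table = [None] * n_frames     # frame number -> virtual page number
        self.free_frames = list(range(n_frames))
        self.load_order = []                     # FIFO order used to pick a victim
        self.faults = 0

    def access(self, vpage, write=False):
        if not (0 <= vpage < len(self.page_table)):
            raise MemoryError("illegal address")         # address error, return
        if not self.page_table[vpage].valid:
            self.handle_page_fault(vpage)                # trap to the OS
        if write:
            self.page_table[vpage].dirty = True
        return self.page_table[vpage].frame

    def handle_page_fault(self, vpage):
        self.faults += 1
        if self.free_frames:
            frame = self.free_frames.pop()               # find an unused page frame
        else:
            victim = self.load_order.pop(0)              # page replacement (FIFO here)
            vpte = self.page_table[victim]
            frame = vpte.frame
            if vpte.dirty:
                pass          # a real OS would write the victim back to disk here (page out)
            vpte.valid, vpte.frame = False, -1           # update the victim's page table entry
        # page in: load the new page from disk (omitted), then update both tables
        self.page_table[vpage] = PTE(valid=True, frame=frame)
        self.frame_table[frame] = vpage
        self.load_order.append(vpage)
        # the interrupted instruction is restarted by re-running access()

mem = DemandPagedMemory(n_pages=8, n_frames=3)
for vp in [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]:
    mem.access(vp)
print(mem.faults)    # 9 page faults, matching the FIFO example later in this chapter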
Hardware Support
- Page Table Base Register, Page Table Length Register, and a page table.
- Each entry of the page table must have a valid/invalid bit. Valid means that the page is in physical memory. The address translation hardware must recognize this bit and generate a page fault if the valid bit is not set.
- Secondary memory: use a disk.
- Other hardware components may be needed and will be discussed later in this chapter.
Too Many Memory Accesses?!
- Each address reference may cause at least two memory accesses: one for the page table lookup and the other for accessing the item. It may be worse! See below.
[Figure: the instruction ADD A, B, C, whose operands A, B, and C lie on different pages. How many memory accesses are there? May be more than eight!]
Performance Issue: 1/2
- Let p be the probability of a page fault (the page fault rate), 0 ≤ p ≤ 1.
- The effective access time is
    (1-p)*memory access time + p*page fault time
- The page fault rate p should be small, and memory access time is usually between 10 and 200 nanoseconds.
- To complete a page fault, three components are important:
  - serving the page-fault trap
  - reading in the page (the bottleneck)
  - restarting the process
Performance Issue: 2/2
- Suppose the memory access time is 100 nanoseconds and servicing a page fault takes 25 milliseconds (software and hardware). Then, the effective access time is
    (1-p)*100 + p*(25 milliseconds)
      = (1-p)*100 + p*25,000,000 nanoseconds
      = 100 + 24,999,900*p nanoseconds
- If the page fault rate is 1/1000, the effective access time is about 25,100 nanoseconds ≈ 25 microseconds. It is roughly 250 times slower!
- If we want it to be only 10% slower, the effective access time can be no more than 110 nanoseconds, which requires p ≤ 0.0000004. (A small calculation follows below.)
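The arithmetic above can be checked with a few lines. The numbers are taken from this slide; the function name eat is just for illustration.

def eat(p, mem_ns=100, fault_ns=25_000_000):
    # effective access time = (1 - p) * memory access time + p * page-fault time
    return (1 - p) * mem_ns + p * fault_ns

print(eat(1 / 1000))                       # 25099.9 ns, about 250 times slower than 100 ns
p_max = (110 - 100) / (25_000_000 - 100)   # largest p that keeps eat(p) <= 110 ns
print(p_max)                               # about 4.0e-07, i.e. one fault per 2.5 million accesses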
Three Important Issues in V.M.
- A page table can be very large. If an address has 32 bits and the page size is 4K, then there are 2^32/2^12 = 2^20 = (2^10)^2 = 1M entries in a page table per process!
- Virtual-to-physical address translation must be fast. This is done with a TLB.
- Page replacement. When a page fault occurs and there is no free page frame, a victim page must be selected. If the victim is not selected properly, system degradation may be high.
Page Table Size
[Figure: a three-level page table. The virtual address is divided into index 1 (8 bits), index 2 (6 bits), index 3 (6 bits), and an offset (12 bits). The page table base register points to the level-1 page table, which points to level-2 page tables, which in turn point to level-3 page tables. Only level-1 page tables will reside in physical memory; the other page tables may be in virtual memory and are paged in when they are referred to. (See the sketch below.)]
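A minimal sketch of how such a virtual address would be split, assuming the 8/6/6/12 bit layout shown in the figure; the masks and the example address are illustrative.

def split_virtual_address(va):
    offset = va & 0xFFF           # low 12 bits: offset within the page
    index3 = (va >> 12) & 0x3F    # next 6 bits: index into a level-3 page table
    index2 = (va >> 18) & 0x3F    # next 6 bits: index into a level-2 page table
    index1 = (va >> 24) & 0xFF    # top 8 bits: index into the level-1 page table
    return index1, index2, index3, offset

i1, i2, i3, off = split_virtual_address(0x12345678)
print(hex(i1), hex(i2), hex(i3), hex(off))   # 0x12 0xd 0x5 0x678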
Page Replacement: 1/2
- The following is the basic scheme:
  - Find the desired page on the disk.
  - Find a free page frame in physical memory:
    - If there is a free page frame, use it.
    - If there is no free page frame, use a page-replacement algorithm to find a victim page.
    - Write this victim page back to disk and change the page table and page frame table.
  - Read the desired page into the frame and update the page table and page frame table.
  - Restart the interrupted instruction.
Page Replacement: 2/2
- If there is no free page frame, two page transfers (i.e., a page-out and a page-in) are required.
- A modify bit may be added to each page table entry. The modify bit is set when the page is modified (i.e., written to). It is initialized to 0 when the page is brought into memory.
- Thus, if a page has not been modified (i.e., modify bit = 0), it does not have to be written back to disk.
- Some systems may also have a reference bit. When a page is referenced (i.e., read or written), its reference bit is set. It is initialized to 0 when the page is brought in.
- Both bits are set by hardware automatically.
Page Replacement Algorithms
- We shall discuss the following page replacement algorithms:
  - First-In-First-Out (FIFO)
  - The Least Recently Used (LRU)
  - The Optimal Algorithm
  - The Second-Chance Algorithm
  - The Clock Algorithm
- The fewer page faults an algorithm generates, the better the algorithm is.
- Page replacement algorithms work on page numbers. A string of such page numbers is referred to as a page reference string.
The FIFO Algorithm
- The FIFO algorithm always selects the “oldest” page to be the victim. (A simulation sketch follows the tables.)

reference:  0  1  2  3  0  1  4  0  1  2  3  4
3 frames:   0  0  0  3  3  3  4  4  4  4  4  4
               1  1  1  0  0  0  0  0  2  2  2
                  2  2  2  1  1  1  1  1  3  3
page faults = 9    miss ratio = 9/12 = 75%    hit ratio = 25%

reference:  0  1  2  3  0  1  4  0  1  2  3  4
4 frames:   0  0  0  0  0  0  4  4  4  4  3  3
               1  1  1  1  1  1  0  0  0  0  4
                  2  2  2  2  2  2  1  1  1  1
                     3  3  3  3  3  3  2  2  2
page faults = 10   miss ratio = 10/12 = 83.3%   hit ratio = 16.7%
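A minimal FIFO simulation that reproduces the two fault counts above; the reference string is the one used on this slide, and the helper name fifo_faults is illustrative.

from collections import deque

def fifo_faults(refs, n_frames):
    resident = set()
    order = deque()              # arrival order; the head is the "oldest" page
    faults = 0
    for page in refs:
        if page in resident:
            continue             # hit: FIFO order does not change
        faults += 1
        if len(resident) == n_frames:
            victim = order.popleft()
            resident.remove(victim)
        resident.add(page)
        order.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 -- more frames, more faults (Belady anomaly, next slide)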
Belady Anomaly
- Intuitively, increasing the number of page frames should reduce page faults.
- However, some page replacement algorithms do not satisfy this “intuition.” The FIFO algorithm is an example.
- Belady Anomaly: page faults increase as the number of page frames increases.
- FIFO was used in the DEC VAX-78xx series because it is easy to implement: append the new page to the tail and select the head to be the victim!
The LRU Algorithm: 1/2
- The LRU algorithm always selects the page that has not been used for the longest period of time.

reference:  0  1  2  3  0  1  4  0  1  2  3  4
3 frames:   0  0  0  3  3  3  4  4  4  2  2  2
               1  1  1  0  0  0  0  0  0  3  3
                  2  2  2  1  1  1  1  1  1  4
page faults = 10   miss ratio = 10/12 = 83.3%   hit ratio = 16.7%

reference:  0  1  2  3  0  1  4  0  1  2  3  4
4 frames:   0  0  0  0  0  0  0  0  0  0  0  4
               1  1  1  1  1  1  1  1  1  1  1
                  2  2  2  2  4  4  4  4  3  3
                     3  3  3  3  3  3  2  2  2
page faults = 8    miss ratio = 8/12 = 66.7%    hit ratio = 33.3%
The LRU Algorithm: 2/2
- The memory content with 3 frames is a subset of the memory content with 4 frames (one extra frame). This is the inclusion property. With this property, the Belady anomaly never occurs. (A simulation sketch follows the tables.)

reference:  0  1  2  3  0  1  4  0  1  2  3  4
3 frames:   0  0  0  3  3  3  4  4  4  2  2  2
               1  1  1  0  0  0  0  0  0  3  3
                  2  2  2  1  1  1  1  1  1  4

reference:  0  1  2  3  0  1  4  0  1  2  3  4
4 frames:   0  0  0  0  0  0  0  0  0  0  0  4
               1  1  1  1  1  1  1  1  1  1  1
                  2  2  2  2  4  4  4  4  3  3
                     3  3  3  3  3  3  2  2  2
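A minimal LRU simulation for the same reference string; an OrderedDict keeps the recency order, and the names are illustrative.

from collections import OrderedDict

def lru_faults(refs, n_frames):
    resident = OrderedDict()     # the most recently used page is kept at the end
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)       # hit: refresh its recency
            continue
        faults += 1
        if len(resident) == n_frames:
            resident.popitem(last=False)     # evict the least recently used page
        resident[page] = True
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(lru_faults(refs, 3))   # 10
print(lru_faults(refs, 4))   # 8 -- more frames never hurt LRU (inclusion property)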
The Optimal Algorithm: 1/2
- The optimal algorithm always selects the page that will not be used for the longest period of time.

reference:  0  1  2  3  0  1  4  0  1  2  3  4
3 frames:   0  0  0  0  0  0  0  0  0  2  2  2
               1  1  1  1  1  1  1  1  1  3  3
                  2  3  3  3  4  4  4  4  4  4
page faults = 7    miss ratio = 7/12 = 58.3%    hit ratio = 41.7%

reference:  0  1  2  3  0  1  4  0  1  2  3  4
4 frames:   0  0  0  0  0  0  0  0  0  0  3  3
               1  1  1  1  1  1  1  1  1  1  1
                  2  2  2  2  2  2  2  2  2  2
                     3  3  3  4  4  4  4  4  4
page faults = 6    miss ratio = 6/12 = 50%    hit ratio = 50%
The Optimal Algorithm: 2/2
- The optimal algorithm always delivers the fewest page faults, if it can be implemented. It also satisfies the inclusion property (i.e., no Belady anomaly). (A simulation sketch follows the tables.)

reference:  0  1  2  3  0  1  4  0  1  2  3  4
3 frames:   0  0  0  0  0  0  0  0  0  2  2  2
               1  1  1  1  1  1  1  1  1  3  3
                  2  3  3  3  4  4  4  4  4  4

reference:  0  1  2  3  0  1  4  0  1  2  3  4
4 frames:   0  0  0  0  0  0  0  0  0  0  3  3
               1  1  1  1  1  1  1  1  1  1  1
                  2  2  2  2  2  2  2  2  2  2
                     3  3  3  4  4  4  4  4  4
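A minimal sketch of the optimal policy for this reference string, possible here only because the whole string is known in advance; the names are illustrative.

def opt_faults(refs, n_frames):
    resident = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in resident:
            continue
        faults += 1
        if len(resident) == n_frames:
            future = refs[i + 1:]
            # evict the resident page whose next use is farthest away (or never comes)
            victim = max(resident,
                         key=lambda p: future.index(p) if p in future else float("inf"))
            resident.remove(victim)
        resident.add(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(opt_faults(refs, 3))   # 7
print(opt_faults(refs, 4))   # 6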
LRU Approximation Algorithms
- FIFO suffers from the Belady anomaly, the optimal algorithm requires knowledge of the future, and the LRU algorithm requires accurate information about the past.
- The optimal and LRU algorithms are difficult to implement, especially the optimal algorithm. Thus, LRU approximation algorithms are needed. We will discuss three:
  - The Second-Chance Algorithm
  - The Clock Algorithm
  - The Enhanced Second-Chance Algorithm
The Second-Chance Algorithm: 1/3
- The second-chance algorithm is basically a FIFO algorithm. It uses the reference bit of each page.
- When a page frame is needed, check the oldest one:
  - If its reference bit is 0, take this one.
  - Otherwise, clear the reference bit, move the page to the tail, and (perhaps) set the current time. This gives it a second chance.
- Repeat this procedure until a page with a 0 reference bit is found. Do the page-out and page-in if necessary, and move it to the tail.
- Problem: page frames are moved too frequently.
The Second-Chance Algorithm: 2/3
[Figure: example, new page = X]
The Second-Chance Algorithm: 3/3
[Figure: example continued, new page = X]
The Clock Algorithm: 1/2
- If the second-chance algorithm is implemented with a circular list, we have the clock algorithm.
- We need a “next” pointer.
- When a page frame is needed, we examine the page under the “next” pointer:
  - If its reference bit is 0, take it.
  - Otherwise, clear the reference bit and advance the “next” pointer.
- Repeat this until a frame with a 0 reference bit is found.
- Do the page-in and page-out, if necessary. (A sketch of the victim-selection loop follows.)
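A minimal sketch of that victim-selection loop; the Frame class and the reference-bit values in the demo are invented for illustration.

class Frame:
    def __init__(self, page):
        self.page = page
        self.ref = 0            # reference bit, set by hardware on each access

def clock_victim(frames, hand):
    """Scan the circular frame list from 'hand'; return (victim index, new hand)."""
    while True:
        f = frames[hand]
        if f.ref == 0:
            return hand, (hand + 1) % len(frames)   # take this frame
        f.ref = 0                                    # clear the bit: second chance
        hand = (hand + 1) % len(frames)              # advance the "next" pointer

frames = [Frame(3), Frame(8), Frame(5), Frame(1)]
frames[0].ref = frames[1].ref = 1
victim, hand = clock_victim(frames, 0)
print(frames[victim].page)   # 5: the first frame reached whose reference bit is 0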
The Clock Algorithm: 2/2
[Figure: example, new page = X]
Enhanced Second-Chance Algorithm: 1/5

- We keep four page lists based on the reference and modify bits (r, c):
  - Q00 - pages that were not recently referenced and not modified. The best candidates!
  - Q01 - pages that were modified but not recently referenced. Need a page-out.
  - Q10 - pages that were recently referenced but are clean.
  - Q11 - pages that were recently referenced and modified. Need a page-out.
Enhanced Second-Chance Algorithm: 2/5

- We still need a “next” pointer.
- When a page frame is needed:
  - Does the “next” frame have the 00 combination? If yes, the victim is found. Otherwise, reset the reference bit and move the page to the corresponding list.
  - If Q00 becomes empty, check Q01. If there is a frame with the 01 combination, it is the victim. Otherwise, reset the reference bit and move the frame to the corresponding list.
  - If Q01 becomes empty, move Q10 to Q00 and Q11 to Q01, and restart the scanning process.
(A simplified sketch of the preference order follows below.)
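A simplified sketch of the preference order among the four lists only; it omits the reference-bit resetting and rescanning described above, and the frame representation is invented for illustration.

def classify(frames):
    """Partition resident frames into Q00, Q01, Q10, Q11 by their (reference, modify) bits."""
    queues = {"Q00": [], "Q01": [], "Q10": [], "Q11": []}
    for f in frames:
        queues["Q%d%d" % (f["ref"], f["mod"])].append(f)
    return queues

def preferred_victim(frames):
    queues = classify(frames)
    for name in ("Q00", "Q01", "Q10", "Q11"):   # clean, unreferenced pages first
        if queues[name]:
            return queues[name][0], name

frames = [{"page": 3, "ref": 1, "mod": 0},
          {"page": 8, "ref": 0, "mod": 1},
          {"page": 5, "ref": 0, "mod": 0}]
print(preferred_victim(frames))   # page 5 from Q00: neither referenced nor modified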
Enhanced Second-Chance Algorithm: 3/5

[Figure: a worked example showing frames and their reference-modify bits in the four lists Q00, Q01, Q10, and Q11 at two successive steps of the scan.]
Enhanced Second-Chance Algorithm: 4/5
[Figure: the example continued; the lists at the next two steps of the scan.]
Enhanced Second-Chance Algorithm: 5/5
[Figure: the example concluded; the final contents of the four lists.]
This algorithm was used in IBM DOS/VS and MacOS!
Other Important Issues
- Global vs. Local Allocation
- Locality of Reference
- Thrashing
- The Working Set Model
- The Working Set Clock Algorithm
- Page-Fault Frequency Replacement Algorithm
Global vs. Local Replacement
- Global replacement allows a process to select a victim from the set of all page frames, even if the page frame is currently allocated to another process.
- Local replacement requires that each process select a victim from its own set of allocated frames.
- With global replacement, the number of frames allocated to a process may change over time. With a local replacement algorithm, it is likely to stay constant.
Global vs. Local: A Comparison
- With a global replacement algorithm, a process cannot control its own page fault rate, because its behavior depends on the behavior of other processes. The same process running on a different system may behave totally differently.
- With a local replacement algorithm, the set of pages of a process in memory is affected by the paging behavior of that process only. A process does not have the opportunity to use other, less-used frames, so performance may be lower.
- With a global strategy, throughput is usually higher, which is why global replacement is commonly used.
Locality of Reference
- During any phase of execution, the process references only a relatively small fraction of pages.
Thrashing
- High paging activity is called thrashing. It means a process is spending more time paging than executing (i.e., CPU utilization is low).
- If CPU utilization is too low, the medium-term scheduler is invoked to swap in one or more swapped-out processes or bring in one or more new jobs. The number of processes in memory is referred to as the degree of multiprogramming.
Degree of Multiprogramming: 1/3
- We cannot increase the degree of multiprogramming arbitrarily, because throughput will drop at a certain point and thrashing sets in.
- Therefore, the medium-term scheduler must maintain the optimal degree of multiprogramming.
Degree of Multiprogramming: 2/3
1. Suppose we use a global strategy and CPU utilization is low. The medium-term scheduler will add a new process.
2. Suppose this new process requires more pages. It starts to have more page faults, and page frames of other processes are taken by this process.
3. The other processes also need those page frames. Thus, they start to have more page faults.
4. Because pages must be paged in and out, these processes must wait, and the number of processes in the ready queue drops. CPU utilization becomes lower.
Degree of Multiprogramming: 3/3
5. Consequently, the medium-term scheduler brings more processes into memory. These new processes also need page frames to run, causing more page faults.
6. Thus, CPU utilization drops further, causing the medium-term scheduler to bring in even more processes.
7. If this cycle continues, the page fault rate increases dramatically and the effective memory access time increases. Eventually the system is paralyzed, because the processes spend almost all of their time paging!
The Working Set Model: 1/4
- The working set of a process at virtual time t, written W(t, θ), is the set of pages that were referenced in the interval (t − θ, t], where θ is the window size.
- With θ = 3, the result is identical to that of LRU:

reference:  0  1  2  3  0  1  4  0  1  2  3  4
            0  0  0  3  3  3  4  4  4  2  2  2
               1  1  1  0  0  0  0  0  0  3  3
                  2  2  2  1  1  1  1  1  1  4
page faults = 10   miss ratio = 10/12 = 83.3%   hit ratio = 16.7%
The Working Set Model: 2/4
- However, the result with θ = 4 differs from that of LRU. (A sketch of the window computation follows below.)

reference:  0  1  2  3  0  1  4  0  1  2  3  4
            0  0  0  0  0  0  0  0  0  0  0  4
               1  1  1  1  1  1  1  1  1  1  1
                  2  2  2  2  4  4  4  4  3  3
                     3  3  3  3        2  2  2
page faults = 8    miss ratio = 8/12 = 66.7%    hit ratio = 33.3%
(only three pages are resident at the two blank positions)
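A minimal sketch of W(t, θ) and of fault counting under the working-set policy for this reference string; virtual time is 1-based and the names are illustrative.

def working_set(refs, t, theta):
    """W(t, theta): the pages referenced in the interval (t - theta, t]."""
    return set(refs[max(0, t - theta):t])

def ws_faults(refs, theta):
    faults = 0
    for t, page in enumerate(refs, start=1):
        if page not in working_set(refs, t - 1, theta):   # resident set just before time t
            faults += 1
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(sorted(working_set(refs, 8, 4)))   # [0, 1, 4]: only three pages at t = 8
print(ws_faults(refs, 4))                # 8 page faults, as in the table above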
The Working Set Model: 3/4
- The working set policy: find a good θ, and keep W(t, θ) in memory for every t.
- What is the best value of θ? This is a system tuning issue. The value can be changed as needed from time to time.
The Working Set Model: 4/4
- Unfortunately, like LRU, the working set policy cannot be implemented directly, and an approximation is necessary.
- A commonly used algorithm is the Working Set Clock algorithm, WSClock. It is a good and efficient approximation.
The WSClock Algorithm
The Page-Fault Frequency Algorithm: 1/2

- Since thrashing is due to a high page-fault rate, we can control thrashing by controlling the page-fault rate.
- If the page-fault rate of a process is too high, the process needs more page frames. On the other hand, if the page-fault rate is too low, the process may have too many page frames.
- Therefore, by keeping the page-fault rate of each process at a certain level, we control the number of page frames that process can have.
The Page-Fault Frequency Algorithm: 2/2
- We establish an upper bound and a lower bound, and monitor the page-fault rate periodically.
- If the rate rises above the upper bound, an additional page frame is allocated to the process; if it falls below the lower bound, a page frame is taken away from the process. (A small sketch follows.)
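A minimal sketch of this control step; the bound values and the function name are illustrative assumptions, not taken from the slides.

def adjust_allocation(fault_rate, n_frames, lower=0.02, upper=0.10):
    """Page-fault-frequency control: grow above the upper bound, shrink below the lower bound."""
    if fault_rate > upper:
        return n_frames + 1          # give this process one more page frame
    if fault_rate < lower and n_frames > 1:
        return n_frames - 1          # reclaim one page frame from this process
    return n_frames                  # rate is within the bounds: leave the allocation alone

print(adjust_allocation(0.15, 8))    # 9
print(adjust_allocation(0.01, 8))    # 7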

Más contenido relacionado

La actualidad más candente

Page replacement
Page replacementPage replacement
Page replacementsashi799
 
Virtualmemorypre final-formatting-161019022904
Virtualmemorypre final-formatting-161019022904Virtualmemorypre final-formatting-161019022904
Virtualmemorypre final-formatting-161019022904marangburu42
 
Virtual memory and page replacement algorithm
Virtual memory and page replacement algorithmVirtual memory and page replacement algorithm
Virtual memory and page replacement algorithmMuhammad Mansoor Ul Haq
 
41 page replacement fifo
41 page replacement fifo41 page replacement fifo
41 page replacement fifomyrajendra
 
Virtual memory - Demand Paging
Virtual memory - Demand PagingVirtual memory - Demand Paging
Virtual memory - Demand Pagingjeyaperumal
 
Chosse a best algorithm for page replacement to reduce page fault and analysi...
Chosse a best algorithm for page replacement to reduce page fault and analysi...Chosse a best algorithm for page replacement to reduce page fault and analysi...
Chosse a best algorithm for page replacement to reduce page fault and analysi...MdAlAmin187
 
42 lru optimal
42 lru optimal42 lru optimal
42 lru optimalmyrajendra
 
Page replacement algorithms
Page replacement algorithmsPage replacement algorithms
Page replacement algorithmsPiyush Rochwani
 
Set model and page fault.44
Set model and page fault.44Set model and page fault.44
Set model and page fault.44myrajendra
 
Computer architecture page replacement algorithms
Computer architecture page replacement algorithmsComputer architecture page replacement algorithms
Computer architecture page replacement algorithmsMazin Alwaaly
 
Transparent Hugepages in RHEL 6
Transparent Hugepages in RHEL 6 Transparent Hugepages in RHEL 6
Transparent Hugepages in RHEL 6 Raghu Udiyar
 
The life and times
The life and timesThe life and times
The life and timesAbeer Naskar
 

La actualidad más candente (19)

OSCh10
OSCh10OSCh10
OSCh10
 
Page replacement
Page replacementPage replacement
Page replacement
 
Virtualmemorypre final-formatting-161019022904
Virtualmemorypre final-formatting-161019022904Virtualmemorypre final-formatting-161019022904
Virtualmemorypre final-formatting-161019022904
 
Operating System
Operating SystemOperating System
Operating System
 
Virtual memory and page replacement algorithm
Virtual memory and page replacement algorithmVirtual memory and page replacement algorithm
Virtual memory and page replacement algorithm
 
41 page replacement fifo
41 page replacement fifo41 page replacement fifo
41 page replacement fifo
 
Mem mgt
Mem mgtMem mgt
Mem mgt
 
Virtual memory - Demand Paging
Virtual memory - Demand PagingVirtual memory - Demand Paging
Virtual memory - Demand Paging
 
Chosse a best algorithm for page replacement to reduce page fault and analysi...
Chosse a best algorithm for page replacement to reduce page fault and analysi...Chosse a best algorithm for page replacement to reduce page fault and analysi...
Chosse a best algorithm for page replacement to reduce page fault and analysi...
 
O ssvv82014
O ssvv82014O ssvv82014
O ssvv82014
 
Virtual memory ,Allocaton of frame & Trashing
Virtual memory  ,Allocaton of frame & TrashingVirtual memory  ,Allocaton of frame & Trashing
Virtual memory ,Allocaton of frame & Trashing
 
42 lru optimal
42 lru optimal42 lru optimal
42 lru optimal
 
Page replacement algorithms
Page replacement algorithmsPage replacement algorithms
Page replacement algorithms
 
Virtual memory
Virtual memoryVirtual memory
Virtual memory
 
Set model and page fault.44
Set model and page fault.44Set model and page fault.44
Set model and page fault.44
 
Ch9
Ch9Ch9
Ch9
 
Computer architecture page replacement algorithms
Computer architecture page replacement algorithmsComputer architecture page replacement algorithms
Computer architecture page replacement algorithms
 
Transparent Hugepages in RHEL 6
Transparent Hugepages in RHEL 6 Transparent Hugepages in RHEL 6
Transparent Hugepages in RHEL 6
 
The life and times
The life and timesThe life and times
The life and times
 

Destacado (20)

Virtual memory
Virtual memoryVirtual memory
Virtual memory
 
Virtual memory ppt
Virtual memory pptVirtual memory ppt
Virtual memory ppt
 
Chapter 9 - Virtual Memory
Chapter 9 - Virtual MemoryChapter 9 - Virtual Memory
Chapter 9 - Virtual Memory
 
08 operating system support
08 operating system support08 operating system support
08 operating system support
 
Virtual Memory and Paging
Virtual Memory and PagingVirtual Memory and Paging
Virtual Memory and Paging
 
virtual memory
virtual memoryvirtual memory
virtual memory
 
Virtual memory
Virtual memoryVirtual memory
Virtual memory
 
VIRTUAL MEMORY
VIRTUAL MEMORYVIRTUAL MEMORY
VIRTUAL MEMORY
 
06 external memory
06 external memory06 external memory
06 external memory
 
Operating Systems - Virtual Memory
Operating Systems - Virtual MemoryOperating Systems - Virtual Memory
Operating Systems - Virtual Memory
 
9 virtual memory management
9 virtual memory management9 virtual memory management
9 virtual memory management
 
Virtual Memory - Part1
Virtual Memory - Part1Virtual Memory - Part1
Virtual Memory - Part1
 
Virtual Memory - Part2
Virtual Memory - Part2Virtual Memory - Part2
Virtual Memory - Part2
 
virtual memory
virtual memoryvirtual memory
virtual memory
 
Ch10: Virtual Memory
Ch10: Virtual MemoryCh10: Virtual Memory
Ch10: Virtual Memory
 
Os8
Os8Os8
Os8
 
Virtual Memory (Making a Process)
Virtual Memory (Making a Process)Virtual Memory (Making a Process)
Virtual Memory (Making a Process)
 
Virtual Memory
Virtual MemoryVirtual Memory
Virtual Memory
 
Distributed Operating System_3
Distributed Operating System_3Distributed Operating System_3
Distributed Operating System_3
 
Virtual memory
Virtual memoryVirtual memory
Virtual memory
 

Similar a Virtual memory

Virtual memory management in Operating System
Virtual memory management in Operating SystemVirtual memory management in Operating System
Virtual memory management in Operating SystemRashmi Bhat
 
Galvin-operating System(Ch10)
Galvin-operating System(Ch10)Galvin-operating System(Ch10)
Galvin-operating System(Ch10)dsuyal1
 
Virtualmemoryfinal 161019175858
Virtualmemoryfinal 161019175858Virtualmemoryfinal 161019175858
Virtualmemoryfinal 161019175858marangburu42
 
Process mining chapter_06_advanced_process_discovery_techniques
Process mining chapter_06_advanced_process_discovery_techniquesProcess mining chapter_06_advanced_process_discovery_techniques
Process mining chapter_06_advanced_process_discovery_techniquesMuhammad Ajmal
 
Process Mining - Chapter 6 - Advanced Process Discovery_techniques
Process Mining - Chapter 6 - Advanced Process Discovery_techniquesProcess Mining - Chapter 6 - Advanced Process Discovery_techniques
Process Mining - Chapter 6 - Advanced Process Discovery_techniquesWil van der Aalst
 
Virtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdfVirtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdfHarika Pudugosula
 
381 ccs chapter7_updated(1)
381 ccs chapter7_updated(1)381 ccs chapter7_updated(1)
381 ccs chapter7_updated(1)Rabie Masoud
 
page replacement.pptx
page replacement.pptxpage replacement.pptx
page replacement.pptxhomipeh
 
Understanding Autovacuum
Understanding AutovacuumUnderstanding Autovacuum
Understanding AutovacuumDan Robinson
 
virtual memory Operating system
virtual memory Operating system virtual memory Operating system
virtual memory Operating system Shaheen kousar
 
Unit 2chapter 2 memory mgmt complete
Unit 2chapter 2  memory mgmt completeUnit 2chapter 2  memory mgmt complete
Unit 2chapter 2 memory mgmt completeKalai Selvi
 
Pagereplacement algorithm(computional concept)
Pagereplacement algorithm(computional concept)Pagereplacement algorithm(computional concept)
Pagereplacement algorithm(computional concept)Siddhi Viradiya
 

Similar a Virtual memory (20)

Virtual memory management in Operating System
Virtual memory management in Operating SystemVirtual memory management in Operating System
Virtual memory management in Operating System
 
Homework solutionsch8
Homework solutionsch8Homework solutionsch8
Homework solutionsch8
 
Galvin-operating System(Ch10)
Galvin-operating System(Ch10)Galvin-operating System(Ch10)
Galvin-operating System(Ch10)
 
Virtual memory
Virtual memory Virtual memory
Virtual memory
 
Virtualmemoryfinal 161019175858
Virtualmemoryfinal 161019175858Virtualmemoryfinal 161019175858
Virtualmemoryfinal 161019175858
 
Process mining chapter_06_advanced_process_discovery_techniques
Process mining chapter_06_advanced_process_discovery_techniquesProcess mining chapter_06_advanced_process_discovery_techniques
Process mining chapter_06_advanced_process_discovery_techniques
 
Process Mining - Chapter 6 - Advanced Process Discovery_techniques
Process Mining - Chapter 6 - Advanced Process Discovery_techniquesProcess Mining - Chapter 6 - Advanced Process Discovery_techniques
Process Mining - Chapter 6 - Advanced Process Discovery_techniques
 
Virtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdfVirtual Memory Management Part - II.pdf
Virtual Memory Management Part - II.pdf
 
381 ccs chapter7_updated(1)
381 ccs chapter7_updated(1)381 ccs chapter7_updated(1)
381 ccs chapter7_updated(1)
 
운영체제론 Ch10
운영체제론 Ch10운영체제론 Ch10
운영체제론 Ch10
 
page replacement.pptx
page replacement.pptxpage replacement.pptx
page replacement.pptx
 
9.Virtual Memory
9.Virtual Memory9.Virtual Memory
9.Virtual Memory
 
Understanding Autovacuum
Understanding AutovacuumUnderstanding Autovacuum
Understanding Autovacuum
 
virtual memory Operating system
virtual memory Operating system virtual memory Operating system
virtual memory Operating system
 
Unit 2chapter 2 memory mgmt complete
Unit 2chapter 2  memory mgmt completeUnit 2chapter 2  memory mgmt complete
Unit 2chapter 2 memory mgmt complete
 
Pagereplacement algorithm(computional concept)
Pagereplacement algorithm(computional concept)Pagereplacement algorithm(computional concept)
Pagereplacement algorithm(computional concept)
 
Ch9 Virtual Memory
Ch9 Virtual MemoryCh9 Virtual Memory
Ch9 Virtual Memory
 
Page Replacement
Page ReplacementPage Replacement
Page Replacement
 
S09mid2sol
S09mid2solS09mid2sol
S09mid2sol
 
Memory
MemoryMemory
Memory
 

Más de Mohd Arif

Bootp and dhcp
Bootp and dhcpBootp and dhcp
Bootp and dhcpMohd Arif
 
Arp and rarp
Arp and rarpArp and rarp
Arp and rarpMohd Arif
 
User datagram protocol
User datagram protocolUser datagram protocol
User datagram protocolMohd Arif
 
Project identification
Project identificationProject identification
Project identificationMohd Arif
 
Project evalaution techniques
Project evalaution techniquesProject evalaution techniques
Project evalaution techniquesMohd Arif
 
Presentation
PresentationPresentation
PresentationMohd Arif
 
Pointers in c
Pointers in cPointers in c
Pointers in cMohd Arif
 
Peer to-peer
Peer to-peerPeer to-peer
Peer to-peerMohd Arif
 
Overview of current communications systems
Overview of current communications systemsOverview of current communications systems
Overview of current communications systemsMohd Arif
 
Overall 23 11_2007_hdp
Overall 23 11_2007_hdpOverall 23 11_2007_hdp
Overall 23 11_2007_hdpMohd Arif
 
Objectives of budgeting
Objectives of budgetingObjectives of budgeting
Objectives of budgetingMohd Arif
 
Network management
Network managementNetwork management
Network managementMohd Arif
 
Networing basics
Networing basicsNetworing basics
Networing basicsMohd Arif
 
Iris ngx next generation ip based switching platform
Iris ngx next generation ip based switching platformIris ngx next generation ip based switching platform
Iris ngx next generation ip based switching platformMohd Arif
 
Ip sec and ssl
Ip sec and  sslIp sec and  ssl
Ip sec and sslMohd Arif
 
Ip security in i psec
Ip security in i psecIp security in i psec
Ip security in i psecMohd Arif
 
Intro to comp. hardware
Intro to comp. hardwareIntro to comp. hardware
Intro to comp. hardwareMohd Arif
 

Más de Mohd Arif (20)

Bootp and dhcp
Bootp and dhcpBootp and dhcp
Bootp and dhcp
 
Arp and rarp
Arp and rarpArp and rarp
Arp and rarp
 
User datagram protocol
User datagram protocolUser datagram protocol
User datagram protocol
 
Project identification
Project identificationProject identification
Project identification
 
Project evalaution techniques
Project evalaution techniquesProject evalaution techniques
Project evalaution techniques
 
Presentation
PresentationPresentation
Presentation
 
Pointers in c
Pointers in cPointers in c
Pointers in c
 
Peer to-peer
Peer to-peerPeer to-peer
Peer to-peer
 
Overview of current communications systems
Overview of current communications systemsOverview of current communications systems
Overview of current communications systems
 
Overall 23 11_2007_hdp
Overall 23 11_2007_hdpOverall 23 11_2007_hdp
Overall 23 11_2007_hdp
 
Objectives of budgeting
Objectives of budgetingObjectives of budgeting
Objectives of budgeting
 
Network management
Network managementNetwork management
Network management
 
Networing basics
Networing basicsNetworing basics
Networing basics
 
Loaders
LoadersLoaders
Loaders
 
Lists
ListsLists
Lists
 
Iris ngx next generation ip based switching platform
Iris ngx next generation ip based switching platformIris ngx next generation ip based switching platform
Iris ngx next generation ip based switching platform
 
Ip sec and ssl
Ip sec and  sslIp sec and  ssl
Ip sec and ssl
 
Ip security in i psec
Ip security in i psecIp security in i psec
Ip security in i psec
 
Intro to comp. hardware
Intro to comp. hardwareIntro to comp. hardware
Intro to comp. hardware
 
Heap sort
Heap sortHeap sort
Heap sort
 

Último

Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesZilliz
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clashcharlottematthew16
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embeddingZilliz
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyAlfredo García Lavilla
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 

Último (20)

Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector Databases
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
Powerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time ClashPowerpoint exploring the locations used in television show Time Clash
Powerpoint exploring the locations used in television show Time Clash
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embedding
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
Commit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easyCommit 2024 - Secret Management made easy
Commit 2024 - Secret Management made easy
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 

Virtual memory

  • 2. Observations qThe entire program does not have to be in memory, because verror handling codes are not frequently used varrays, tables, large data structures are allocated memory more than necessary and many parts are not used at the same time vsome options and cases may be used rarely qIf they are not needed, why must they be in memory?
  • 3. Benefits qProgram length is not restricted to real memory size. That is, virtual address size can be larger than physical address size. qCan run more programs because those space originally allocated for the un-loaded parts can be used by other programs. qSave load/swap I/O time because we do not have to load/swap a complete program.
  • 4. Virtual Memory qVirtual memory is the separation of user logical memory from physical memory. qThis permits to have extremely large virtual memory, which makes programming large systems much easier to use. qBecause pages can be shared, this further improves performance and save time. qVirtual memory is commonly implemented with demand paging, demand segmentation or demand paging+segmentation.
  • 5. Demand Paging virtual memory page table physical memory page frame table 0 0 i 0 0 1 v 7 2 i 1 proc A, pg 4 1 1 3 i 2 2 4 v 1 2 3 3 5 i 3 4 6 i 4 4 5 5 5 6 7 proc A, pg 1 6 6 8 7 9 process A 8 9 valid/invalid or present/absent bit
  • 6. Address Translation qThe translation from a virtual address to a physical address is the same as what a paging system does. qHowever, there is an additional check. If the page is not in physical memory (i.e., the valid bit is not set), a page fault (i.e., a trap) occurs. qIf a page fault occurs, we need to do the following: vFind an unused page frame. If no such page frame exists, a victim must be found and evicted. vWrite the old page out and load the new page in. vUpdate both page tables. vResume the interrupted instruction.
  • 7. Details of Handling a Page Fault Trap to the OS // a context switch occurs Make sure it is a page fault; If the address is not a legal one then address error, return Find an unused page frame // page replacement algorithm Write the page back to disk // page out Load the new page from disk // page in Update both page tables // two pages are involved! Resume the execution of the interrupted instruction
  • 8. Hardware Support qPage Table Base Register, Page Table Length Register, and a Page Table. qEach entry of the page table must have a valid/invalid bit. Valid means that that page is in physical memory. The address translation hardware must recognize this bit and generate a page fault if the valid bit is not set. qSecondary Memory: use a disk. qOther hardware components may be needed and will be discussed later in this chapter.
  • 9. Too Many Memory Accesses?! qEach address reference may cause at least two memory accesses, one for page table look up and the other for accessing the item. It may be worse! See below: A C How many memory accesses are there? ADD A, B, C May be more than eight! B
  • 10. Performance Issue: 1/2 qLet p be the probability of a page fault, the page fault rate, 0 ≤ p ≤ 1. qThe effective access time is (1-p)*memory access time + p*page fault time qThe page fault rate p should be small, and memory access time is usually between 10 and 200 nanoseconds. qTo complete a page fault, three components are important: vServe the page-fault trap vRead in the page, a bottleneck vRestart the process
  • 11. Performance Issue: 2/2 qSuppose memory access time is 100 nanoseconds, paging requires 25 milliseconds (software and hardware). Then, effective access time is (1-p)*100 + p*(25 milliseconds) = (1-p)*100 + p*25,000,000 nanoseconds = 100 + 24,999,900*p nanoseconds qIf the page fault rate is 1/1000, the effective access time is 25,099 nanoseconds = 25 microseconds. It is 250 times slower! qIf we wish it is only 10% slower, effective access time is no more than 110 and p=0.0000004.
  • 12. Three Important Issues in V.M. qA page table can be very large. If an address has 32 bits and page size is 4K, then there are 232/212=220=(210)2= 1M entries in a page table per process! qVirtual to physical address translation must be fast. This is done with TLB. qPage replacement. When a page fault occurs and there is no free page frame, a victim page must be selected. If the victim is not selected properly, system degradation may be high.
  • 13. Page Table Size virtual address index 1 index 2 index 3 offset page table 8 6 6 12 base register memory Only level 1 page tables level 1 will reside in page table level 2 physical memory. page table level 3 page table Other page tables will be paged-in when they are May be in virtual memory referred to.
  • 14. Page Replacement: 1/2 qThe following is a basic scheme vFind the desired page on the disk vFind a free page frame in physical memory § If there is a free page frame, use it § If there is no free page frame, use a page- replacement algorithm to find a victim page § Write this victim page back to disk and change the page table and page frame table vRead the desired page into the frame and update page table and page frame table vRestart the interrupted instruction
  • 15. Page Replacement: 2/2 qIf there is no free page frame, two page transfers (i.e., page-in and page-out) are required. qA modify bit may be added to a page table entry. The modify bit is set if that page has been modified (i.e., storing info into it). It is initialized to 0 when a page is brought into memory. qThus, if a page is not modified (i.e., modify bit = 0), it does not have to be written back to disk. qSome systems may also have a reference bit. When a page is referenced (i.e., reading or writing), its reference bit is set. It is initialized to 0 when a page is brought in. qBoth bits are set by hardware automatically.
  • 16. Page Replacement Algorithms qWe shall discuss the following page replacement algorithms: vFirst-In-First-Out - FIFO vThe Least Recently Used – LRU vThe Optimal Algorithm vThe Second Chance Algorithm vThe Clock Algorithm qThe fewer number of page faults an algorithm generates, the better the algorithm it is. qPage replacement algorithms work on page numbers. A string of such page numbers is referred to as a page reference string.
  • 17. The FIFO Algorithm qThe FIFO algorithm always selects the “oldest” page to be the victim. 0 1 2 3 0 1 4 0 1 2 3 4 3 frames 0 0 0 3 3 3 4 4 4 4 4 4 1 1 1 0 0 0 0 0 2 2 2 2 2 2 1 1 1 1 1 3 3 page fault=9 miss ratio=9/12=75% hit ratio = 25% 0 1 2 3 0 1 4 0 1 2 3 4 4 frames 0 0 0 0 0 0 4 4 4 4 3 3 1 1 1 1 1 1 0 0 0 0 4 2 2 2 2 2 2 1 1 1 1 3 3 3 3 3 3 2 2 2 page fault=10 miss ratio=10/12=83.3% hit ratio = 16.7%
  • 18. Belady Anomaly qIntuitively, increasing the number of page frames should reduce page faults. qHowever, some page replacement algorithms do not satisfy this “intuition.” The FIFO algorithm is an example. qBelady Anomaly: Page faults increase as the number of page frames increases. qFIFO was used in DEC VAX-78xx series because it is easy to implement: append the new page to the tail and select the head to be a victim!
  • 19. The LRU Algorithm: 1/2 qThe LRU algorithm always selects the page that has not been used for the longest period of time. 0 1 2 3 0 1 4 0 1 2 3 4 3 frames 0 0 0 3 3 3 4 4 4 2 2 2 1 1 1 0 0 0 0 0 0 3 3 2 2 2 1 1 1 1 1 1 4 page fault=10 miss ratio=10/12=83.3% hit ratio = 16.7% 0 1 2 3 0 1 4 0 1 2 3 4 4 frames 0 0 0 0 0 0 0 0 0 0 0 4 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 4 4 4 4 3 3 3 3 3 3 3 3 2 2 2 page fault=8 miss ratio=8/12=66.7% hit ratio = 33.3%
  • 20. The LRU Algorithm: 2/2 q The memory content of 3-frames is a subset of the memory content of 4-frames. This is the inclusion property. With this property, Belady anomaly never occurs. 0 1 2 3 0 1 4 0 1 2 3 4 0 0 0 3 3 3 4 4 4 2 2 2 1 1 1 0 0 0 0 0 0 3 3 2 2 2 1 1 1 1 1 1 4 0 1 2 3 0 1 4 0 1 2 3 4 extra 0 0 0 0 0 0 0 0 0 0 0 4 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 4 4 4 4 3 3 3 3 3 3 3 3 2 2 2
  • 21. The Optimal Algorithm: 1/2 qThe optimal algorithm always selects the page that will not be used for the longest period of time. 0 1 2 3 0 1 4 0 1 2 3 4 3 frames 0 0 0 0 0 0 0 0 0 2 2 2 1 1 1 1 1 1 1 1 1 3 3 2 3 3 3 4 4 4 4 4 4 page fault=7 miss ratio=7/12=58.3% hit ratio = 41.7% 0 1 2 3 0 1 4 0 1 2 3 4 4 frames 0 0 0 0 0 0 0 0 0 0 3 3 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 4 4 4 4 4 4 page fault=6 miss ratio=6/12=50% hit ratio = 50%
  • 22. The Optimal Algorithm: 2/2 qThe optimal algorithm always delivers the fewest page faults, if it can be implemented. It also satisfies the inclusion property (i.e., no Belady anomaly). 0 1 2 3 0 1 4 0 1 2 3 4 0 0 0 0 0 0 0 0 0 2 2 2 1 1 1 1 1 1 1 1 1 3 3 2 3 3 3 4 4 4 4 4 4 0 1 2 3 0 1 4 0 1 2 3 4 extra 0 0 0 0 0 0 0 0 0 0 3 3 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 4 4 4 4 4 4
  • 23. LRU Approximation Algorithms qFIFO has Belady anomaly, the Optimal algorithm requires the knowledge in the future, and the LRU algorithm requires accurate info of the past. qThe optimal and LRU algorithms are difficult to implement, especially the optimal algorithm. Thus, LRU approximation algorithms are needed. We will discuss three: vThe Second-Chance Algorithm vThe Clock Algorithm vThe Enhanced Second-Chance Algorithm
  • 24. The Second-Chance Algorithm: 1/3 qThe second chance algorithm is basically a FIFO algorithm. It uses the reference bit of each page. qWhen a page frame is needed, check the oldest one: vIf its reference bit is 0, take this one vOtherwise, clear the reference bit, move it to the tail, and (perhaps) set the current time. This gives it a second chance. qRepeat this procedure until a 0 reference bit page is found. Do page-out and page-in if necessary, and move it to the tail. qProblem: Page frames are moved too frequently.
• 25. The Second-Chance Algorithm: 2/3 (figure: example with new page = X)
• 26. The Second-Chance Algorithm: 3/3 (figure: example with new page = X, continued)
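A minimal sketch of the second-chance algorithm described above. The simulation sets the reference bit itself on every access; in a real system the hardware does this, and the operating system only reads and clears the bit.

from collections import deque

def second_chance_faults(refs, nframes):
    # FIFO with a reference bit: a referenced page gets a second chance.
    queue = deque()               # (page, ref_bit) pairs, oldest at the left
    faults = 0
    for page in refs:
        for i, (p, _) in enumerate(queue):
            if p == page:
                queue[i] = (p, 1)             # hit: set the reference bit
                break
        else:
            faults += 1                       # page fault
            if len(queue) == nframes:
                while True:                   # find a victim with reference bit 0
                    victim, ref = queue.popleft()
                    if ref == 0:
                        break                 # evict this one (page-out if dirty)
                    queue.append((victim, 0)) # clear the bit, move to the tail
            queue.append((page, 1))           # the new page enters at the tail
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(second_chance_faults(refs, 3))

Whether a newly loaded page starts with its reference bit set is a modeling choice; here it does, because loading the page is itself an access.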
• 27. The Clock Algorithm: 1/2
qIf the second-chance algorithm is implemented with a circular list, we have the clock algorithm.
qWe need a “next” pointer.
qWhen a page frame is needed, we examine the page under the “next” pointer:
  vIf its reference bit is 0, take it.
  vOtherwise, clear the reference bit and advance the “next” pointer.
qRepeat this until a frame with a 0 reference bit is found.
qDo the page-in and page-out, if necessary. A sketch follows the figure below.
• 28. The Clock Algorithm: 2/2 (figure: example with new page = X)
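The clock variant keeps the frames in a fixed circular arrangement and only advances a hand, avoiding the list manipulation of the second-chance version; a sketch under the same assumptions as above:

def clock_faults(refs, nframes):
    # Second chance over a circular frame table with a "next" hand.
    frames = [None] * nframes     # page held by each frame (None = free)
    ref_bit = [0] * nframes
    hand = 0                      # the "next" pointer
    faults = 0
    for page in refs:
        if page in frames:
            ref_bit[frames.index(page)] = 1   # hit: set the reference bit
            continue
        faults += 1                           # page fault
        # Advance the hand until a frame with reference bit 0 is found,
        # clearing reference bits along the way.
        while frames[hand] is not None and ref_bit[hand] == 1:
            ref_bit[hand] = 0
            hand = (hand + 1) % nframes
        frames[hand] = page                   # replace the victim (or use a free frame)
        ref_bit[hand] = 1
        hand = (hand + 1) % nframes
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(clock_faults(refs, 3))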
• 29. Enhanced Second-Chance Algorithm: 1/5
qWe maintain four page lists based on the reference–modify bits (r, c) of the pages:
  vQ00 – pages that were not recently referenced and not modified. The best victim candidates!
  vQ01 – pages that were modified but not recently referenced. A page-out is needed.
  vQ10 – pages that were recently referenced but are clean.
  vQ11 – pages that were recently referenced and modified. A page-out is needed.
• 30. Enhanced Second-Chance Algorithm: 2/5
qWe still need a “next” pointer.
qWhen a page frame is needed:
  vDoes the “next” frame have the 00 combination? If yes, the victim is found. Otherwise, reset its reference bit and move the page to the corresponding list.
  vIf Q00 becomes empty, check Q01. If there is a frame with the 01 combination, it is the victim. Otherwise, reset its reference bit and move the frame to the corresponding list.
  vIf Q01 also becomes empty, move Q10 to Q00 and Q11 to Q01, and restart the scanning process.
qA sketch appears after the example below.
• 31. Enhanced Second-Chance Algorithm: 3/5 (figure: example contents of the four queues Q00, Q01, Q10 and Q11)
• 32. Enhanced Second-Chance Algorithm: 4/5 (figure: example contents of the four queues, continued)
• 33. Enhanced Second-Chance Algorithm: 5/5 (figure: example contents of the four queues, continued) This algorithm was used in IBM DOS/VS and MacOS!
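A sketch of the enhanced second-chance idea. For brevity it rescans the frame table from the beginning on every fault, preferring a (0, 0) page, then a (0, 1) page, and clearing reference bits while searching for the second class; the four-queue bookkeeping shown on the slides is one way to make this search cheaper. The read/write trace at the bottom is made up.

def pick_victim(frames):
    # Each frame is a dict with 'page', 'r' (reference) and 'm' (modify) bits.
    while True:
        for i, f in enumerate(frames):        # first: not referenced, not modified
            if f['r'] == 0 and f['m'] == 0:
                return i
        for i, f in enumerate(frames):        # then: not referenced, but modified
            if f['r'] == 0 and f['m'] == 1:
                return i                      # needs a page-out
            f['r'] = 0                        # clear reference bits as we scan

def esc_faults(refs, nframes):
    # refs is a list of (page, is_write) pairs.
    frames = []
    faults = 0
    for page, is_write in refs:
        hit = next((f for f in frames if f['page'] == page), None)
        if hit:
            hit['r'] = 1                      # referenced
            hit['m'] |= int(is_write)         # modified if written to
            continue
        faults += 1                           # page fault
        if len(frames) == nframes:
            frames.pop(pick_victim(frames))   # a dirty victim would be paged out here
        frames.append({'page': page, 'r': 1, 'm': int(is_write)})
    return faults

trace = [(0, False), (1, True), (2, False), (3, False), (0, True),
         (1, False), (4, False), (0, False), (1, False), (2, True)]
print(esc_faults(trace, 3))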
• 34. Other Important Issues
qGlobal vs. Local Allocation
qLocality of Reference
qThrashing
qThe Working Set Model
qThe Working Set Clock Algorithm
qThe Page-Fault Frequency Replacement Algorithm
• 35. Global vs. Local Replacement
qGlobal replacement allows a process to select a victim from the set of all page frames, even if the frame is currently allocated to another process.
qLocal replacement requires that each process select a victim only from its own set of allocated frames.
qWith global replacement, the number of frames allocated to a process may change over time. With local replacement, it typically stays constant.
• 36. Global vs. Local: A Comparison
qWith a global replacement algorithm, a process cannot control its own page-fault rate, because its behavior depends on the behavior of other processes. The same process may behave very differently when run on a different system or with a different mix of processes.
qWith a local replacement algorithm, the set of pages of a process in memory is affected by the paging behavior of that process only. However, the process cannot make use of less-used frames belonging to other processes, so performance may be lower.
qWith a global strategy, throughput is usually higher; hence global replacement is commonly used.
  • 37. Locality of Reference qDuring any phase of execution, the process references only a relatively small fraction of pages.
• 38. Thrashing
qA very high level of paging activity is called thrashing. It means a process is spending more time paging than executing (i.e., CPU utilization is low).
qIf CPU utilization is too low, the medium-term scheduler is invoked to swap in one or more swapped-out processes or to bring in one or more new jobs. The number of processes in memory is referred to as the degree of multiprogramming.
• 39. Degree of Multiprogramming: 1/3
qWe cannot increase the degree of multiprogramming arbitrarily, because at a certain point throughput drops and thrashing occurs.
qTherefore, the medium-term scheduler must maintain the optimal degree of multiprogramming.
• 40. Degree of Multiprogramming: 2/3
1. Suppose we use a global strategy and CPU utilization is low. The medium-term scheduler adds a new process.
2. Suppose this new process requires more pages. It starts to generate more page faults, and page frames of other processes are taken by this process.
3. The other processes also need those page frames, so they too start to generate more page faults.
4. Because pages must be paged in and out, these processes must wait; the number of processes in the ready queue drops and CPU utilization falls.
• 41. Degree of Multiprogramming: 3/3
5. Consequently, the medium-term scheduler brings more processes into memory. These new processes also need page frames to run, causing even more page faults.
6. Thus, CPU utilization drops further, causing the medium-term scheduler to bring in even more processes.
7. If this cycle continues, the page-fault rate increases dramatically and the effective memory access time increases. Eventually the system is paralyzed, because the processes spend almost all of their time paging!
• 42. The Working Set Model: 1/4
qThe working set of a process at virtual time t, written W(t, θ), is the set of pages that were referenced in the interval (t − θ, t], where θ is the window size.
qWith θ = 3, the result is identical to that of LRU with 3 frames:
Reference string: 0 1 2 3 0 1 4 0 1 2 3 4
  frame 1: 0 0 0 3 3 3 4 4 4 2 2 2
  frame 2:   1 1 1 0 0 0 0 0 0 3 3
  frame 3:     2 2 2 1 1 1 1 1 1 4
  page faults = 10, miss ratio = 10/12 = 83.3%, hit ratio = 16.7%
• 43. The Working Set Model: 2/4
qHowever, the result for θ = 4 is different from that of LRU with 4 frames: at some points the working set contains only three pages.
Reference string: 0 1 2 3 0 1 4 0 1 2 3 4
  frame 1: 0 0 0 0 0 0 0 0 0 0 0 4
  frame 2:   1 1 1 1 1 1 1 1 1 1 1
  frame 3:     2 2 2 2 4 4 4 4 3 3
  frame 4:       3 3 3 3 3 3 2 2 2
  page faults = 8, miss ratio = 8/12 = 66.7%, hit ratio = 33.3%
  (after the 8th and 9th references, only three pages are in the working set)
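W(t, θ) can be computed directly from the reference string. The sketch below assumes that exactly the working set is kept resident and reproduces the θ = 4 example; the window size and reference string are the ones from the slides.

def working_set_trace(refs, theta):
    # Print W(t, theta) for every t and count faults, assuming exactly
    # the working set is resident before each reference.
    resident = set()              # W(t - 1, theta)
    faults = 0
    for t in range(1, len(refs) + 1):
        page = refs[t - 1]
        if page not in resident:
            faults += 1                       # page fault
        resident = set(refs[max(0, t - theta):t])   # references in (t - theta, t]
        print(f"t={t:2d} ref={page} W={sorted(resident)} |W|={len(resident)}")
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print("page faults:", working_set_trace(refs, 4))   # 8 faults

With θ = 4 the working set shrinks to three pages after the 8th and 9th references, which is the point the slide marks as “only three pages here.”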
• 44. The Working Set Model: 3/4
qThe working set policy: find a good θ, and keep W(t, θ) in memory for every t.
qWhat is the best value of θ? This is a system tuning issue; the value may be changed as needed from time to time.
• 45. The Working Set Model: 4/4
qUnfortunately, like LRU, the working set policy cannot be implemented directly, and an approximation is necessary.
qA commonly used approximation is the Working Set Clock algorithm, WSClock, which is both effective and efficient. A sketch appears below.
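A much-simplified sketch of the WSClock idea: every frame carries a reference bit and a time of last use; on a fault the hand sweeps the circle, refreshing recently referenced frames and reclaiming the first unreferenced frame that is older than the window τ. Dirty-page handling and the scheduling of write-backs, which a real WSClock must deal with, are omitted here.

def wsclock_faults(refs, nframes, tau):
    frames = [None] * nframes     # each entry: [page, ref_bit, last_use_time]
    hand = 0
    faults = 0
    for t, page in enumerate(refs, start=1):  # t acts as the virtual time
        slot = next((f for f in frames if f and f[0] == page), None)
        if slot:
            slot[1], slot[2] = 1, t           # hit: set R and refresh the time
            continue
        faults += 1                           # page fault
        free = next((i for i, f in enumerate(frames) if f is None), None)
        if free is not None:
            frames[free] = [page, 1, t]
            continue
        victim, oldest = None, hand
        for _ in range(nframes):              # sweep the hand at most once around
            f = frames[hand]
            if f[1] == 1:
                f[1], f[2] = 0, t             # referenced: clear R, refresh time
            elif t - f[2] > tau:              # unreferenced and outside the window
                victim = hand
                break
            if frames[hand][2] < frames[oldest][2]:
                oldest = hand                 # remember the oldest frame seen
            hand = (hand + 1) % nframes
        if victim is None:
            victim = oldest                   # nothing old enough: evict the oldest
        frames[victim] = [page, 1, t]
        hand = (victim + 1) % nframes
    return faults

refs = [0, 1, 2, 3, 0, 1, 4, 0, 1, 2, 3, 4]
print(wsclock_faults(refs, nframes=3, tau=3))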
• 47. The Page-Fault Frequency Algorithm: 1/2
qSince thrashing is due to a high page-fault rate, we can control thrashing by controlling the page-fault rate.
qIf the page-fault rate of a process is too high, the process needs more page frames. On the other hand, if the page-fault rate is too low, the process may have too many page frames.
qTherefore, if we keep the page-fault rate of each process near a certain level, we control the number of page frames that process can have.
• 48. The Page-Fault Frequency Algorithm: 2/2
qWe establish an upper bound and a lower bound on the page-fault rate and monitor the rate periodically.
qIf the rate rises above the upper bound, an additional page frame is allocated to the process; if it falls below the lower bound, a page frame is taken away from the process. A sketch of such a controller follows.
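A sketch of the page-fault-frequency mechanism as a periodic controller. The bounds, the measurement interval, and the observed fault counts are made-up tuning values; a real system would also have to find a frame to take, or a process to suspend, when no free frame is available.

LOWER_BOUND = 2     # faults per measurement interval (illustrative values)
UPPER_BOUND = 8

def adjust_allocation(frames_allocated, faults_in_interval,
                      lower=LOWER_BOUND, upper=UPPER_BOUND):
    # Return the new number of frames for the process.
    if faults_in_interval > upper:
        return frames_allocated + 1           # faulting too often: give it a frame
    if faults_in_interval < lower and frames_allocated > 1:
        return frames_allocated - 1           # faulting rarely: take a frame away
    return frames_allocated                   # within bounds: leave it alone

frames = 4
for faults in [12, 10, 9, 5, 1, 1, 6]:        # made-up per-interval fault counts
    frames = adjust_allocation(frames, faults)
    print(f"faults={faults:2d} -> frames={frames}")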