3. Operating System
• An operating system (OS) is a collection of software that manages computer hardware resources and provides common services for computer programs.
• The operating system is a vital component of the system software in a computer system.
• Examples: Android, iOS, Linux, OS X, QNX, Microsoft Windows, Windows Phone, and IBM z/OS.
prepared by Visakh V, Assistant Professor, LBSITW
6. Memory management
• The memory management portion of the operating system is responsible for the efficient usage of main memory, especially in a multiprogramming environment where processes contend for memory.
• It must also protect one process's address space from another (including protecting the system address space from user processes).
9. Sharing Memory
Issues:
• Allocation schemes
• Protection of processes from each other
• Protecting OS code
• Translating logical addresses to physical addresses
• Swapping programs
• What if physical memory is small: virtual memory
10. Memory management schemes
• Single contiguous memory allocation
• Fixed partition memory allocation
• Variable partition memory allocation
11. Single contiguous memory allocation
• The user's job is given complete control of the CPU until the job completes or an error occurs.
• During this time, the user's job is the only program residing in memory apart from the operating system.
12. • Case 1: Denotes a scenario wherein the user's job occupies the complete available memory (the upper part of memory is, however, reserved for the OS program).
13. • Case 2: 30K of memory is free, but the new job cannot be placed in memory because of the single contiguous allocation technique. From this figure and the related discussion, the following advantages and disadvantages of the single contiguous allocation technique can be seen.
14. Advantages:
• It is simple to implement.
Disadvantages:
• It leads to wastage of memory, which is called fragmentation.
• This memory management technique leads to uniprogramming; hence it cannot be used for multiprogramming.
• It leads to wastage of CPU time: when the current job in memory is waiting for an input or output operation, the CPU is left idle.
15. Fixed partition memory allocation
• The memory is divided into several partitions, each of fixed size.
• This allows several user jobs to reside in memory.
16. Case 1: There are three jobs residing in memory, each of which fits exactly into its respective partition. One more memory partition is available for a user job. This type of fixed partition allocation supports multiprogramming.
17. Case 2: Suppose that a new job of size 40K arrives for execution. The total amount of free memory is 40K, but the new job cannot fit into memory for execution because of the lack of contiguous free space.
Case 2 leads to external fragmentation, wherein there is enough free memory for a new job but it is not contiguous.
18. Case 3: This depicts a scenario where job 4 is allocated a memory partition of 20K but occupies only 10K of it; the remaining 10K is unused.
Case 3 leads to internal fragmentation, wherein there is an unused part of memory internal to a memory partition.
19. Advantages:
• Provides multiprogramming.
Disadvantages:
• Internal and external fragmentation of memory.
20. Variable partition memory allocation
• There is no pre-determined (fixed) partitioning of memory.
• This technique allocates the exact amount of memory required for a job.
31. History
• According to Donald Knuth, the buddy system was invented in 1963 by Harry Markowitz, who won the 1990 Nobel Memorial Prize in Economics.
• It was first described by Kenneth C. Knowlton (published 1965).
• Nowadays Linux uses the buddy system to manage the allocation of memory, possibly because it allocates many structures that are already powers of two in size, such as page frames.
32. INTRODUCTION
• The buddy memory allocation technique is a memory allocation algorithm that divides memory into partitions to try to satisfy a memory request as suitably as possible.
• This system splits memory into halves to try to give a best fit.
• Compared to the more complex memory allocation techniques that some modern operating systems use, buddy memory allocation is relatively easy to implement.
• It supports limited but efficient splitting and coalescing of memory blocks.
33. Why Buddy System?
• A fixed partitioning scheme limits the number of active processes and may use space inefficiently if there is a poor match between available partition sizes and process sizes.
• A dynamic partitioning scheme is more complex to maintain and includes the overhead of compaction.
• An interesting compromise between fixed and dynamic partitioning is the buddy system.
34. What are Buddies?
• The (binary) buddy system allows a single allocation block to be split to form two blocks, each half the size of the parent block. These two blocks are known as 'buddies'.
• Part of the definition of a 'buddy' is that the buddy of block B must be the same size as B and must be adjacent in memory (so that it is possible to merge them later).
• The other important property of buddies stems from the fact that in the buddy system, every block is at an address in memory that is exactly divisible by its size.
• So all the 16-byte blocks are at addresses that are multiples of 16; all the 64K blocks are at addresses that are multiples of 64K; and so on.
35. TYPES OF BUDDY SYSTEM
• A number of buddy systems have been proposed by researchers, aimed at reducing execution time and increasing memory utilization.
Four types of buddy system:
• Binary buddy system
• Fibonacci buddy system
• Weighted buddy system
• Tertiary buddy system
36. How Do They Differ?
• These buddy systems are similar in the design of the algorithm; the major difference is in the sizes of the memory blocks.
• They also differ in memory utilization and execution time.
• A buddy system that looks good in one situation may not be good in another.
• It depends on the requests for memory, which can make external or internal fragmentation higher in some situations.
37. BINARY BUDDY SYSTEM
• In the binary buddy system a memory block of size 2^m is split into two equal parts of size 2^(m-1).
• It satisfies the following recurrence relation:
L_i = L_(i-1) + L_(i-1)
• For example, a block of size 8 splits into 4 + 4, and each 4 can split into 2 + 2.
38. Binary Buddy System
• The memory consists of a collection of blocks of consecutive memory, each of which is a power of two in size.
• Each block is marked either occupied or free, depending on whether it is allocated to the user.
• For each block we also know its size.
• The system provides two operations for supporting dynamic memory allocation:
1. Allocate(2^k): finds a free block of size 2^k, marks it as occupied, and returns a pointer to it.
2. Deallocate(B): marks the previously allocated block B as free and may merge it with others to form a larger free block.
39. Allocation in the Binary Buddy System
• The buddy system maintains a list of the free blocks of each size (called a free list), so that it is easy to find a block of the desired size, if one is available.
• If no block of the requested size is available, Allocate searches the free lists for the first nonempty list of blocks of at least the requested size.
• In either case, a block is removed from the free list.
• This process of finding a large enough free block is the most difficult operation to perform quickly.
40. • If the found block is larger than the requested size, say 2^k instead of the desired 2^i, then the block is split in half, making two blocks of size 2^(k-1).
• If this is still too large (k − 1 > i), then one of the blocks of size 2^(k-1) is split in half.
• This process is repeated until we have blocks of size 2^(k-1), 2^(k-2), ..., 2^(i+1), 2^i, and 2^i.
• Then one of the blocks of size 2^i is marked as occupied and returned to the user.
• The others are added to the appropriate free lists.
• Each split creates two halves B1 and B2; B1 is the buddy of B2, and B2 is the buddy of B1.
41. Deallocation in the Binary Buddy System
• When a block is deallocated, the buddy system checks whether the block can be merged with any others, or more precisely whether we can undo any splits that were performed to make this block.
• The merging process checks whether the buddy of a deallocated block is also free, in which case the two blocks are merged;
• then it checks whether the buddy of the resulting block is also free, in which case they are merged; and so on.
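The allocation and merging steps above can be sketched as a small binary buddy allocator. This is an illustrative toy under our own naming (class and field names are not from the slides, and addresses are plain integers standing in for real memory):

```python
# Minimal binary buddy allocator over a 2^max_order byte arena (a sketch).
class BuddyAllocator:
    def __init__(self, max_order):
        self.max_order = max_order
        # free_lists[k] holds start addresses of free blocks of size 2^k
        self.free_lists = {k: [] for k in range(max_order + 1)}
        self.free_lists[max_order].append(0)   # one big free block at start
        self.allocated = {}  # address -> order; stands in for block headers

    def allocate(self, size):
        order = max(size - 1, 0).bit_length()  # round up to a power of two
        # search for the first nonempty free list of at least that size
        for k in range(order, self.max_order + 1):
            if self.free_lists[k]:
                addr = self.free_lists[k].pop()
                # split repeatedly until the block has the desired size,
                # putting the unused upper halves (buddies) on free lists
                while k > order:
                    k -= 1
                    self.free_lists[k].append(addr + (1 << k))
                self.allocated[addr] = order
                return addr
        raise MemoryError("no block large enough")

    def free(self, addr):
        order = self.allocated.pop(addr)
        # merge with the buddy as long as the buddy is also free
        while order < self.max_order:
            buddy = addr ^ (1 << order)  # buddy address differs in one bit
            if buddy not in self.free_lists[order]:
                break
            self.free_lists[order].remove(buddy)
            addr = min(addr, buddy)      # merged block starts at lower half
            order += 1
        self.free_lists[order].append(addr)
```

For instance, with a 1024-byte arena (`BuddyAllocator(10)`), allocating 100 bytes returns address 0 as a 128-byte block, and freeing everything coalesces back to the single original block.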
42. Block Header in the Buddy System
• It is crucial for performance to know, given a block address, the size of the block and whether it is occupied.
• This is usually done by storing a block header in the first few bits of the block.
• More precisely, we use headers in which the first bit is the occupied bit, and the remaining bits specify the size of the block.
• E.g., to determine whether the buddy of a block is free, we compute the buddy's address, look at the first bit at this address, and also check that the two sizes match.
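Computing the buddy's address is cheap: because every block's address is a multiple of its (power-of-two) size, a block and its buddy differ in exactly one address bit. A one-line sketch (function name is ours):

```python
# The buddy of a block is found by flipping the address bit equal to the
# block size: buddies of size s sit at addresses addr and addr XOR s.
def buddy_address(addr, size):
    return addr ^ size

print(buddy_address(0, 16))    # 16
print(buddy_address(16, 16))   # 0
print(buddy_address(192, 64))  # 128
```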
43. Example:
Let us consider 1 Mbyte of memory allocated using the buddy system. Show the binary tree form and list form for the following sequence:
Request 100K (A)
Request 240K (B)
Request 64K (C)
Request 256K (D)
Release B
Release A
Request 75K (E)
Release C
Release E
Release D
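As a quick check before drawing the tree, each request in the exercise is rounded up to the next power of two, which gives the partition sizes involved (helper name is ours):

```python
# Round each request in the exercise up to its buddy block size (in K).
def block_size(request_k):
    return 1 << max(request_k - 1, 0).bit_length()

for name, req in [("A", 100), ("B", 240), ("C", 64), ("D", 256), ("E", 75)]:
    print(name, req, "->", block_size(req))
# A 100 -> 128, B 240 -> 256, C 64 -> 64, D 256 -> 256, E 75 -> 128
```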
45. [Figure: memory layout during the sequence — A = 128K | C = 64K | 64K | 256K | D = 256K | 256K, with used and unused memory shaded.]
46. FIBONACCI BUDDY SYSTEM
• Hirschberg, taking up Knuth's suggestion, designed a Fibonacci buddy system with block sizes that are Fibonacci numbers.
• It satisfies the following recurrence relation:
L_i = L_(i-1) + L_(i-2)
• 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, ...
• For example, a block of size 610 splits into buddies 233 + 377; 377 splits into 144 + 233; 233 splits into 89 + 144.
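The available block sizes can be generated directly from the recurrence (a sketch; starting sizes and function name are our choice):

```python
# Block sizes in a Fibonacci buddy system satisfy L_i = L_(i-1) + L_(i-2),
# so each block splits into two unequal buddies that are themselves valid
# block sizes (e.g. 610 = 233 + 377).
def fibonacci_sizes(n):
    sizes = [1, 2]
    while len(sizes) < n:
        sizes.append(sizes[-1] + sizes[-2])
    return sizes[:n]

print(fibonacci_sizes(12))  # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]
```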
47. Advantages of the Buddy System
• Less external fragmentation.
• Searching for a block of the right size is cheaper than in best fit, because we need only find the first available block on the free list for blocks of size 2^k.
• Merging adjacent free blocks is easy.
• In buddy systems, the cost to allocate and free a block of memory is low compared to that of best-fit or first-fit algorithms.
48. Disadvantages of the Buddy System
• It allows internal fragmentation.
• For example, a request for 515K requires a block of size 1024K, wasting 509K.
• Splitting and merging adjacent areas is a recurrent operation and thus can be unpredictable and inefficient.
• Another drawback of the buddy system is the time required to split and merge blocks.
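The 515K example can be checked by rounding the request up to the next power of two and subtracting (helper name is ours):

```python
# Internal fragmentation in a binary buddy system: a request is rounded up
# to the next power of two, and the difference is wasted inside the block.
def buddy_block_size(request):
    return 1 << max(request - 1, 0).bit_length()

req = 515  # request size in K
block = buddy_block_size(req)
print(block, block - req)  # 1024 509
```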
50. Freeing Memory
• Whenever a memory block is freed, it has to be added to the free list*.
• Sometimes there will be many small free blocks on the free list, and it won't be possible to fulfil a request for a large memory block.
*A free list is a data structure used in a scheme for dynamic memory allocation. It operates by connecting unallocated regions of memory together in a linked list, using the first word of each unallocated region as a pointer to the next.
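The free-list idea can be sketched as a linked list of (address, size) regions. In the sketch below a dict's "next" field plays the role of the pointer that a real allocator stores in the region's first word (all names are ours):

```python
# Toy free list: unallocated regions threaded together in a linked list.
class FreeList:
    def __init__(self):
        self.head = None

    def add(self, addr, size):
        # push the freed region on the front of the list; "next" stands in
        # for the pointer kept in the region's first word
        self.head = {"addr": addr, "size": size, "next": self.head}

    def blocks(self):
        node, out = self.head, []
        while node:
            out.append((node["addr"], node["size"]))
            node = node["next"]
        return out

fl = FreeList()
fl.add(0, 64)
fl.add(128, 32)
print(fl.blocks())  # [(128, 32), (0, 64)]
```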
51. • At allocation time bigger blocks are split into smaller blocks, so we need some combining procedure that joins free blocks into larger blocks.
• If any neighbor of the freed block is free, we can remove it from the free list, combine the contiguous free blocks into a larger free block, and put this larger free block on the free list.
• Usually the free list is kept in order of increasing memory address.
53. Boundary Tag Method
• With this method, there is no need to traverse the free list to find the address and free status of adjacent free blocks.
• To achieve this, we need to store some extra information in every block.
• When a block is freed we need to locate its left and right neighbors and find out whether they are free or not.
54. In approach (a), each block is bracketed with the size and status of that block, thus allowing one end of any block to be found from the other, and allowing the status of a block to be inspected from either end.
In approach (b), each block is prefixed with its status and with the address of each of its neighbors.
55. Tag: descriptive information associated with a block of data is called a tag.
Boundary tag algorithms: these tags are stored in the boundaries between adjacent blocks, and such approaches are called boundary tag algorithms.
56. Free-block header fields:
• Status bit: 0 if the block is free, 1 otherwise
• Address of the previous free block
• Address of the next free block
• Block size
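The header fields above can be sketched as a small record. Field names are our own; a real allocator packs these values into the first (and last) words of the block itself:

```python
# Sketch of the boundary-tag free-block header described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FreeBlockHeader:
    status: int                # 0 if the block is free, 1 otherwise
    size: int                  # block size
    prev_free: Optional[int]   # address of the previous free block
    next_free: Optional[int]   # address of the next free block

hdr = FreeBlockHeader(status=0, size=256, prev_free=None, next_free=1024)
print(hdr.status, hdr.size)  # 0 256
```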
61. Compaction
• After repeated allocation and de-allocation of blocks, the memory becomes fragmented.
• Compaction is a technique that joins the non-contiguous free memory blocks to form one large block, so that the total free memory becomes contiguous.
• All the memory blocks that are in use are moved towards the beginning of memory.
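The effect of compaction can be sketched on blocks modelled as (address, size) pairs: every in-use block slides toward address 0, leaving one contiguous free region at the end (function name is ours; a real compactor would also copy the blocks' contents and update pointers):

```python
# Toy compaction: pack all in-use blocks at the start of memory and
# return the new block positions plus the single resulting free region.
def compact(used_blocks, memory_size):
    next_addr, relocated = 0, []
    for addr, size in sorted(used_blocks):
        relocated.append((next_addr, size))  # block moves to next_addr
        next_addr += size
    free_block = (next_addr, memory_size - next_addr)
    return relocated, free_block

used = [(0, 100), (300, 50), (600, 200)]
print(compact(used, 1000))
# ([(0, 100), (100, 50), (150, 200)], (350, 650))
```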
64. Garbage Collection
• Garbage refers to those memory blocks that are allocated but no longer in use.
• Garbage collection techniques are used to recognize garbage blocks and automatically free them.
• It is also known as automatic memory management.
65. • The main work of a garbage collector is to differentiate between garbage and non-garbage blocks and return the garbage blocks to the free list.
• Two common approaches to garbage collection are:
i. Reference counting
ii. Mark and sweep
66. i. Reference Counting
• Each allocated block contains a reference count.
• Reference count: the number of pointers that point to this block.
• The count is incremented each time we create or copy a pointer to the block, and decremented when a pointer to the block is destroyed.
• When the reference count of an object becomes zero, it is unreachable and is considered garbage.
• The garbage block is immediately made reusable by placing it on the free list.
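The counting rules above can be sketched as a toy class (names are ours; real collectors adjust the counts automatically on every pointer assignment):

```python
# Toy reference counting: the block returns itself to the free list the
# moment its count of incoming pointers drops to zero.
class RefCounted:
    def __init__(self, free_list):
        self.count = 0
        self.free_list = free_list

    def add_ref(self):       # a pointer to the block is created or copied
        self.count += 1

    def release(self):       # a pointer to the block is destroyed
        self.count -= 1
        if self.count == 0:  # unreachable: reclaim immediately
            self.free_list.append(self)

free_list = []
block = RefCounted(free_list)
block.add_ref()
block.add_ref()
block.release()
print(len(free_list))  # 0: one pointer still exists
block.release()
print(len(free_list))  # 1: block reclaimed
```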
67. Advantage: a memory block is freed as soon as it becomes garbage.
Disadvantage: it cannot handle cyclic references correctly.
68. ii. Mark and Sweep
• The mark-and-sweep garbage collector is run when the system is very low on memory and it is not possible to allocate any space for the user.
• All application programs come to a halt temporarily while this garbage collector runs.
69. • Collection takes place in two phases:
• Mark phase: all the non-garbage blocks are marked.
• Sweep phase: the collector sweeps over the memory and returns all the unmarked (garbage) blocks to the free list. [No movement of blocks here!]
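The two phases can be sketched over a toy heap of objects holding pointers to each other (names are ours; a real collector walks actual memory blocks rather than a dict):

```python
# Toy mark-and-sweep pass: mark everything reachable from the roots,
# then sweep up whatever is unmarked.
def mark_sweep(heap, roots):
    marked = set()
    stack = list(roots)
    while stack:                      # mark phase: trace from the roots
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap[obj])   # follow outgoing pointers
    # sweep phase: unmarked objects are garbage (no blocks are moved)
    return [obj for obj in heap if obj not in marked]

# 'a' -> 'b' -> 'c' is reachable; 'd' and 'e' only reference each other
# (a cycle), so unlike reference counting they are correctly collected.
heap = {"a": ["b"], "b": ["c"], "c": [], "d": ["e"], "e": ["d"]}
print(mark_sweep(heap, roots=["a"]))  # ['d', 'e'] are garbage
```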
70. Advantages:
a) Can handle cyclic references.
b) No overhead of maintaining reference counts.
Disadvantages:
a) It uses a stop-the-world approach.
b) Thrashing occurs when most of the memory is in use.