3. Making the Execution of
Programs Faster
Use faster circuit technology to build the
processor and the main memory.
Arrange the hardware so that more than one
operation can be performed at the same time.
With the latter approach, the number of
operations performed per second is increased
even though the elapsed time needed to
perform any one operation is not changed.
4. Traditional Pipeline Concept
Laundry Example
Ann, Brian, Cathy, Dave
each have one load of clothes
to wash, dry, and fold
Washer takes 30 minutes
Dryer takes 40 minutes
“Folder” takes 20 minutes
5. Traditional Pipeline Concept
Sequential laundry takes 6
hours for 4 loads
If they learned pipelining,
how long would laundry
take?
[Figure: sequential laundry timeline from 6 PM to midnight; loads A-D each
take 30 + 40 + 20 minutes, one load starting only after the previous finishes.]
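As a quick sanity check (a minimal sketch, not from the slides; the stage times are the ones given above), the two totals can be computed directly:

```python
# Sketch: compare sequential vs. pipelined laundry time for the
# example above (4 loads; wash 30 min, dry 40 min, fold 20 min).
WASH, DRY, FOLD = 30, 40, 20
LOADS = 4

sequential = LOADS * (WASH + DRY + FOLD)   # each load runs start to finish alone

# Pipelined: the first load takes the full 90 minutes; after that the
# 40-minute dryer (the slowest stage) paces the line, so each later
# load finishes 40 minutes behind the previous one.
pipelined = (WASH + DRY + FOLD) + (LOADS - 1) * max(WASH, DRY, FOLD)

print(sequential)  # 360 minutes = 6 hours
print(pipelined)   # 210 minutes = 3.5 hours
```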
7. Instruction Pipelining
First stage fetches the instruction and
buffers it.
When the second stage is free, the first
stage passes it the buffered instruction.
While the second stage is executing the
instruction, the first stage takes
advantage of any unused memory cycles
to fetch and buffer the next instruction.
This is called instruction prefetch or
fetch overlap.
8. Inefficiency in two-stage
instruction pipelining
There are two reasons:
• The execution time will generally be longer than the
fetch time, so the fetch stage may have to wait for
some time before it can empty its buffer.
• When a conditional branch occurs, the address of the
next instruction to be fetched becomes unknown, and
the execution stage has to wait while the correct next
instruction is fetched.
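The first inefficiency can be sketched numerically. The stage times below are illustrative assumptions, not figures from the slides:

```python
# Sketch: a two-stage (fetch, execute) pipeline where execution takes
# longer than fetching, so the fetch stage sits idle part of the time.
FETCH_TIME, EXEC_TIME = 1, 2   # assumed: execute is the bottleneck
N = 5                          # number of instructions

# Once the pipeline fills, a new instruction can leave the fetch buffer
# only every EXEC_TIME units, so execute paces the whole pipeline.
total_time = FETCH_TIME + N * EXEC_TIME
busy_fetch = N * FETCH_TIME
idle_fetch = total_time - busy_fetch
print(total_time, idle_fetch)  # fetch is idle more than half the time
```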
9. Two-stage instruction pipelining
[Figure: simplified and expanded views of the two-stage (Fetch, Execute)
pipeline. Fetch passes each instruction to Execute and receives the result
or a new address; on a branch, the already-fetched instruction is discarded
and each stage may have to wait on the other.]
10. Use the Idea of Pipelining in a
Computer
[Figure: (a) sequential execution of I1-I3 as F1 E1 F2 E2 F3 E3;
(b) hardware organization: an instruction fetch unit and an execution unit
connected by interstage buffer B1; (c) pipelined execution, where the fetch
of one instruction overlaps the execution of the previous one in clock
cycles 1-4 (Fetch + Execution).]
Figure 8.1. Basic idea of instruction pipelining.
11. Decomposition of instruction
processing
To gain further speedup, the pipeline can have
more stages (six stages):
Fetch instruction (FI)
Decode instruction (DI)
Calculate operands, i.e. effective addresses (CO)
Fetch operands (FO)
Execute instruction (EI)
Write operand (WO)
12. Use the Idea of Pipelining in a
Computer
[Figure: (a) instruction execution divided into four steps - F: fetch
instruction, D: decode instruction and fetch operands, E: execute
operation, W: write results; instructions I1-I4 flow through the F, D, E, W
stages across clock cycles 1-7 (Fetch + Decode + Execution + Write).
(b) Hardware organization with interstage buffers B1, B2, B3.]
Figure 8.2. A 4-stage pipeline.
Textbook page: 457
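The timing in Figure 8.2 can be sketched as a small schedule generator (an illustrative model; the stage names and the no-stall assumption follow the figure):

```python
# Sketch of the 4-stage pipeline in Figure 8.2: each instruction passes
# through F, D, E, W, one stage per clock cycle, with no stalls.
STAGES = ["F", "D", "E", "W"]

def schedule(n_instructions):
    """Return {instruction index: {stage: clock cycle}} for an ideal pipeline."""
    return {
        i: {s: i + j for j, s in enumerate(STAGES, start=1)}
        for i in range(n_instructions)
    }

sched = schedule(4)
for i, stages in sched.items():
    print(f"I{i + 1}:", " ".join(f"{s}@c{c}" for s, c in stages.items()))

# The last instruction's W lands in cycle n + k - 1 = 4 + 4 - 1 = 7,
# matching the 7 clock cycles shown in the figure.
```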
13. SIX STAGE OF INSTRUCTION PIPELINING
Fetch Instruction (FI)
Read the next expected instruction into a buffer.
Decode Instruction (DI)
Determine the opcode and the operand specifiers.
Calculate Operands (CO)
Calculate the effective address of each source operand.
Fetch Operands (FO)
Fetch each operand from memory. Operands in registers
need not be fetched.
Execute Instruction (EI)
Perform the indicated operation and store the result.
Write Operand (WO)
Store the result in memory.
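Under ideal conditions (equal stage times, no conflicts), a k-stage pipeline finishes n instructions in n + k - 1 time units. A minimal sketch for these six stages:

```python
# Sketch: ideal 6-stage pipeline (FI, DI, CO, FO, EI, WO) timing.
STAGES = ["FI", "DI", "CO", "FO", "EI", "WO"]

def occupancy_unit(instr_index, stage_index):
    """Time unit (1-based) in which instruction i occupies stage s
    when instruction i is fetched in time unit i + 1."""
    return instr_index + stage_index + 1

n = 9                                  # instructions
k = len(STAGES)                        # stages
total = occupancy_unit(n - 1, k - 1)   # last instruction leaves WO
print(total)   # 14 time units, versus 9 * 6 = 54 with no overlap
```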
15. High efficiency of instruction pipelining
Assume all of the following in the diagram:
• All stages are of equal duration.
• Each instruction goes through all six stages
of the pipeline.
• All the stages can be performed in parallel.
• No memory conflicts.
• All the accesses occur simultaneously.
Under these assumptions the instruction
pipelining works very efficiently and gives high
performance.
16. Limits to performance enhancement
The factors affecting the performance are
1. If the six stages are not of equal duration, there will be
some waiting time at various stages.
2. Conditional branch instructions, which can invalidate
several instruction fetches.
3. Interrupts, which are unpredictable events.
4. Register and memory conflicts.
5. The CO stage may depend on the contents of a register
that could be altered by a previous instruction that is still
in the pipeline.
18. Conditional branch instructions
Assume that instruction 3 is a conditional branch
to instruction 15.
Until the branch is executed, there is no way of
knowing which instruction will come next.
The pipeline simply loads the next instructions in
sequence and executes them.
The branch is not determined until the end of time unit 7.
During time unit 8, instruction 15 enters the
pipeline.
No instruction completes during time units 9 through
12.
This is the performance penalty incurred because we
could not anticipate the branch.
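The schedule above can be checked with a small calculation (a sketch assuming a six-stage pipeline in which instruction i is fetched in time unit i and completes five units later):

```python
# Sketch: completion (WO) times for the branch scenario above.
K = 6  # pipeline depth (FI, DI, CO, FO, EI, WO)

def completes(fetch_unit):
    # An instruction fetched in unit t finishes WO in unit t + K - 1.
    return fetch_unit + K - 1

completions = [completes(i) for i in (1, 2, 3)]   # I1-I3 complete normally
# I3 is resolved as a branch at the end of unit 7; instructions 4-7,
# already in the pipe, are discarded, and I15 is fetched in unit 8.
completions.append(completes(8))                   # instruction 15
print(completions)  # [6, 7, 8, 13]: nothing completes in units 9-12
```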
20. Role of Cache Memory
Each pipeline stage is expected to complete in one
clock cycle.
The clock period should be long enough to let the
slowest pipeline stage complete.
Faster stages must wait for the slowest one to
complete.
Since main memory is very slow compared to
execution, if each instruction needed to be fetched
from main memory, the pipeline would be almost useless.
Fortunately, we have cache.
21. Pipeline Performance
The potential increase in performance
resulting from pipelining is proportional to the
number of pipeline stages.
However, this increase would be achieved
only if all pipeline stages require the same
time to complete, and there is no interruption
throughout program execution.
Unfortunately, this is not true.
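The ideal gain can be stated as a formula: for n instructions and k stages of equal duration with no stalls, pipelined execution takes k + n - 1 cycles instead of n * k, for a speedup of n * k / (k + n - 1). A quick sketch:

```python
# Sketch: ideal speedup of a k-stage pipeline over purely sequential
# execution for n instructions (equal stage times, no stalls assumed).
def speedup(n, k):
    return (n * k) / (k + n - 1)

print(speedup(100, 6))   # ~5.7: approaches k as n grows
print(speedup(4, 6))     # ~2.7: short instruction streams gain much less
```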
23. Quiz
Four instructions; I2 takes two clock
cycles for execution. Please draw the figure for a 4-
stage pipeline, and work out the total number of cycles
needed for the four instructions to complete.
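One way to check an answer to the quiz is a tiny in-order pipeline model (a sketch; the one-cycle latencies assumed for F, D, and W follow the 4-stage figure):

```python
# Sketch: in-order 4-stage pipeline (F, D, E, W) where a stage may take
# more than one cycle. Each stage of an instruction waits both for the
# instruction's previous stage and for the stage hardware to free up.
def total_cycles(exec_latencies):
    stage_free = [0, 0, 0, 0]          # cycle at which each stage frees up
    finish = 0
    for e_lat in exec_latencies:
        latencies = [1, 1, e_lat, 1]   # F, D, E, W
        t = 0
        for s, lat in enumerate(latencies):
            start = max(t, stage_free[s])
            t = start + lat
            stage_free[s] = t
        finish = t
    return finish

print(total_cycles([1, 2, 1, 1]))   # I2's E takes 2 cycles -> 8 cycles total
print(total_cycles([1, 1, 1, 1]))   # no stall -> 7 cycles
```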
24. Pipeline Performance
The previous pipeline is said to have been stalled for two clock
cycles.
Any condition that causes a pipeline to stall is called a hazard.
Data hazard –
any condition in which either the source or the destination operands of an
instruction are not available at the time expected in the pipeline, so some
operation has to be delayed and the pipeline stalls.
Instruction (control) hazard –
a delay in the availability of an instruction causes the pipeline to stall.
Structural hazard –
the situation when two instructions require the use of a given hardware
resource at the same time.
27. Pipeline Performance
Again, pipelining does not result in individual
instructions being executed faster; rather, it is the
throughput that increases.
Throughput is measured by the rate at which
instruction execution is completed.
Pipeline stall causes degradation in pipeline
performance.
We need to identify all hazards that may cause the
pipeline to stall and to find ways to minimize their
impact.
28. Data Hazards Example
We must ensure that the results obtained when instructions are
executed in a pipelined processor are identical to those obtained
when the same instructions are executed sequentially.
Hazard occurs:
A ← 3 + A
B ← 4 × A
No hazard:
A ← 5 × C
B ← 20 + C
When two operations depend on each other, they must be
executed sequentially in the correct order.
Another example:
Mul R2, R3, R4
Add R5, R4, R6
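The A/B pairs above can be checked directly (a sketch with made-up initial values for A and C):

```python
# Sketch: why the first pair above is a hazard and the second is not.
A, C = 10, 2   # assumed initial values, for illustration only

# Hazard pair: B depends on the NEW value of A, so order matters.
A1 = 3 + A
B1 = 4 * A1          # correct order: uses the updated A
B1_wrong = 4 * A     # what B would be if computed from the stale A

# No-hazard pair: both results depend only on C, so they can overlap
# in the pipeline without changing the outcome.
A2 = 5 * C
B2 = 20 + C

assert B1 != B1_wrong   # reordering the hazard pair changes the result
print(B1, B1_wrong, A2, B2)
```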
30. Handling Data Hazards in
Software
Let the compiler detect and handle the
hazard:
I1: Mul R2, R3, R4
NOP
NOP
I2: Add R5, R4, R6
The compiler can reorder the instructions to
perform some useful work during the NOP
slots.
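For instance (a hypothetical reordering; the Load and Sub instructions are invented for illustration), instructions that do not touch R4 could fill the two slots:

```
I1: Mul R2, R3, R4
I3: Load R7, X        ; independent of R4, moved into the first slot
I4: Sub R8, R9, R10   ; independent of R4, moved into the second slot
I2: Add R5, R4, R6    ; now starts late enough to see the Mul result
```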