Iris Hui-Ru Jiang Fall 2014
CHAPTER 2
ALGORITHM ANALYSIS
IRIS H.-R. JIANG
Outline
- Content:
  - Computational tractability
  - Asymptotic order of growth
  - Implementing the Gale-Shapley algorithm
  - Survey of common running times (self-study)
  - Priority queues (postponed to Section 4.5)
- Reading:
  - Chapter 2
Complexity
2
Computational Efficiency
- Q: What is a good algorithm?
- A:
  - Correct: proofs
  - Efficient: runs quickly and consumes little memory
    - Efficiency ⇔ resource requirements ⇔ complexity
- Q: How do the resource requirements scale with increasing input size?
  - Time: running time
  - Space: memory usage
Running Time

Analytical Engine
"As soon as an Analytical Engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise — by what course of calculation can these results be arrived at by the machine in the shortest time?"
-- Charles Babbage (1864)
[Figure: portrait of Charles Babbage, captioned "how many times do you have to turn the crank?"]
Defining efficiency
Computational Tractability
Proposed Definition 1 (1/2)
- Q: How should we turn the fuzzy notion of an "efficient" algorithm into something more concrete?
- A: Proposed definition 1: An algorithm is efficient if, when implemented, it runs quickly on real input instances.
- Q: What's wrong?
- A: Crucial things are missing...
  - Where and how well do we implement the algorithm?
    - Bad algorithms can run quickly on small cases on fast machines.
  - What is a real input instance?
    - We don't know the full range of input instances in practice.
  - How well, or badly, does the algorithm scale as problem sizes grow to unexpected levels?
    - Two algorithms may perform comparably on inputs of size 100; multiply the input size tenfold, and one still runs quickly while the other consumes a huge amount of time.
Proposed Definition 1 (2/2)
- Q: What are we asking for?
- A: A concrete definition of efficiency:
  - Platform-independent
  - Instance-independent
  - Of predictive value w.r.t. increasing input sizes
- E.g., the stable matching problem:
  - Input size: N = the total size of the preference lists
    - n men and n women: 2n lists, each of length n
    - N = 2n²
  - Describe an algorithm at a high level
  - Analyze its running time mathematically as a function of N
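The count N = 2n² above can be sanity-checked with a short sketch; the `input_size` helper here is hypothetical, introduced only for illustration:

```python
def input_size(preference_lists):
    """Total input size N: the combined length of all preference lists."""
    return sum(len(lst) for lst in preference_lists)

n = 4
men_prefs = [list(range(n)) for _ in range(n)]    # n lists of length n
women_prefs = [list(range(n)) for _ in range(n)]  # n more lists of length n

N = input_size(men_prefs + women_prefs)
assert N == 2 * n * n  # 2n lists of length n, so N = 2n^2
```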
Proposed Definition 2 (1/2)
- We will focus on the worst-case running time:
  - A bound on the largest possible running time the algorithm could have over all inputs of a given size N.
- Q: How about average-case analysis?
  - The performance averaged over random instances
- A: But how do we generate a random input?
  - We don't know the full range of input instances.
    - Good on one; poor on another.
- Q: Based on what can we say a running-time bound is good?
- A: Compare with brute-force search over the solution space:
  - Try all possibilities and see if any one of them works.
  - One thing in common: a compact representation implicitly specifies a giant search space ⇒ brute force is too slow!
    - It typically takes 2^N time or worse for inputs of size N.
    - Too naive to be acceptable in practice.
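For stable matching, the brute-force baseline described above can be sketched as follows: enumerate all n! perfect matchings and test each for a blocking pair. This is an illustrative sketch with my own helper names, not the book's algorithm; it shows how a compact input of size N = 2n² implicitly defines a factorially large search space:

```python
from itertools import permutations

def is_stable(matching, men_prefs, women_prefs):
    """matching[m] = woman paired with man m; return False iff a blocking pair exists."""
    n = len(matching)
    man_of = {w: m for m, w in enumerate(matching)}
    for m in range(n):
        for w in range(n):
            if w == matching[m]:
                continue
            # Would m and w both rather be with each other than with their partners?
            m_prefers = men_prefs[m].index(w) < men_prefs[m].index(matching[m])
            w_prefers = women_prefs[w].index(m) < women_prefs[w].index(man_of[w])
            if m_prefers and w_prefers:
                return False
    return True

def brute_force_stable_matching(men_prefs, women_prefs):
    """Try all n! perfect matchings -- exponential (worse) in the input size."""
    n = len(men_prefs)
    for perm in permutations(range(n)):
        if is_stable(list(perm), men_prefs, women_prefs):
            return list(perm)
    return None
```

Even at n = 12 this already examines up to 12! ≈ 4.8×10⁸ matchings, while Gale-Shapley handles the same instance in O(n²) proposals.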
Proposed Definition 2 (2/2)
- Proposed definition 2: An algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search.
- Q: What is qualitatively better performance?
- A: We consider the actual running time of algorithms more carefully, and try to quantify what a reasonable running time would be.
Proposed Definition 3 (1/3)
- We expect a good algorithm to have a good scaling property.
  - E.g., when the input size doubles, the algorithm should only slow down by some constant factor C.
- Polynomial running time: there exist constants c > 0 and d > 0 such that on every input of size N, the running time is bounded by cN^d primitive computational steps.
  - Proportional to N^d; the smaller d, the better the scalability.
  - The algorithm has a polynomial running time if this bound holds for some c and d.
- Q: What is a primitive computational step?
- A: Each step corresponds to...
  - A single assembly-language instruction on a standard processor
  - One line of a standard programming language
  - Simple enough, and independent of the input size N
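The scaling property can be checked numerically: for a bound of the form c·N^d, doubling the input always multiplies the bound by 2^d, a constant independent of N. A minimal sketch, where the constants c and d are illustrative:

```python
def steps(n, c=1.62, d=2):
    """Hypothetical polynomial running-time bound c * n**d."""
    return c * n**d

# Doubling the input multiplies a polynomial bound by exactly 2**d,
# regardless of n -- the constant-factor slowdown the definition asks for.
for n in (100, 1_000, 10_000):
    factor = steps(2 * n) / steps(n)
    assert abs(factor - 2**2) < 1e-9  # slowdown factor is always 2^d = 4
```

By contrast, for an exponential bound like 2^n, doubling n squares the step count, so no constant factor C can work.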
Proposed Definition 3 (2/3)
- Proposed definition 3: An algorithm is efficient if it has a polynomial running time.
- Justification: it really works in practice!
  - In practice, the polynomial-time algorithms that people develop almost always have low constants and low exponents.
  - Breaking through the exponential barrier of brute force typically exposes some crucial structure of the problem.
- Exceptions:
  - Some polynomial-time algorithms do have high constants and/or exponents, and are useless in practice.
    - Although 6.02×10²³ × N²⁰ is technically polynomial time, it would be useless in practice.
  - Some exponential-time (or worse) algorithms are widely used because worst-case instances seem to be rare (e.g., the simplex method, Unix grep).
    - Such algorithms may run quickly on average-case inputs.
Proposed Definition 3 (3/3)
- An algorithm is efficient if it has a polynomial running time.
  - It really works in practice.
  - The gulf between the growth rates of polynomial and exponential functions is enormous.
  - It allows us to ask about the existence or nonexistence of efficient algorithms as a well-defined question.

                  n         n log₂ n   n²         n³          1.5ⁿ         2ⁿ          n!
  n = 10          < 1 sec   < 1 sec    < 1 sec    < 1 sec     < 1 sec      < 1 sec     4 sec
  n = 30          < 1 sec   < 1 sec    < 1 sec    < 1 sec     < 1 sec      18 min      10²⁵ yrs
  n = 50          < 1 sec   < 1 sec    < 1 sec    < 1 sec     11 min       36 yrs      very long
  n = 100         < 1 sec   < 1 sec    < 1 sec    1 sec       12,892 yrs   10¹⁷ yrs    very long
  n = 1,000       < 1 sec   < 1 sec    1 sec      18 min      very long    very long   very long
  n = 10,000      < 1 sec   < 1 sec    2 min      12 days     very long    very long   very long
  n = 100,000     < 1 sec   2 sec      3 hrs      32 yrs      very long    very long   very long
  n = 1,000,000   1 sec     20 sec     12 days    31,710 yrs  very long    very long   very long

The running times (rounded up) of different algorithms on inputs of increasing sizes, for a processor performing a million high-level instructions per second. ("very long": > 10²⁵ yrs. The first four columns are polynomial time; the last three are non-polynomial.)
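A few entries of the table can be reproduced by dividing the step count by the assumed rate of one million steps per second. A quick sketch:

```python
RATE = 10**6          # high-level instructions per second, as in the table
YEAR = 86_400 * 365   # seconds in a (non-leap) year

def seconds(steps):
    """Wall-clock time to execute `steps` primitive steps at RATE."""
    return steps / RATE

# n^2 at n = 100,000: ~10^10 steps, i.e. about 3 hours
assert round(seconds(100_000**2) / 3600) == 3
# n^3 at n = 10,000: ~10^12 steps, i.e. about 12 days
assert round(seconds(10_000**3) / 86_400) == 12
# 2^n at n = 50: ~1.1 * 10^15 steps, i.e. about 36 years
assert round(seconds(2**50) / YEAR) == 36
```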
Intrinsic computational tractability
O, Ω, Θ
Asymptotic Order of Growth
Intrinsic Computational Tractability
- Intrinsic computational tractability: an algorithm's worst-case running time on inputs of size n grows at a rate that is at most proportional to some function f(n).
  - f(n): an upper bound on the running time of the algorithm
- Q: What's wrong with saying "1.62n² + 3.5n + 8 steps"?
- A: We'd like to say simply that it grows like n², up to constant factors (insensitive to constant factors and low-order terms). The exact expression is:
  - Too detailed
  - Meaningless
  - Hard to classify by efficiency
    - Our ultimate goal is to identify broad classes of algorithms that have similar behavior.
    - We'd actually like to classify running times at a coarser level of granularity, so that similarities among different algorithms, and among different problems, show up more clearly.
O, Ω, Θ
- Let T(n) be a function describing the worst-case running time of a certain algorithm on an input of size n.
- Asymptotic upper bound: T(n) = O(f(n)) if there exist constants c > 0 and n₀ ≥ 0 such that for all n ≥ n₀ we have T(n) ≤ c·f(n).
- Asymptotic lower bound: T(n) = Ω(f(n)) if there exist constants c > 0 and n₀ ≥ 0 such that for all n ≥ n₀ we have T(n) ≥ c·f(n).
- Asymptotic tight bound: T(n) = Θ(f(n)) if T(n) is both O(f(n)) and Ω(f(n)).
  - Also, we can prove: let f and g be two functions such that lim_{n→∞} f(n)/g(n) exists and is equal to some number c > 0. Then f(n) = Θ(g(n)).
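The limit criterion can be illustrated numerically on the example T(n) = 1.62n² + 3.5n + 8 with g(n) = n²: the ratio T(n)/g(n) tends to the constant 1.62 > 0, so T(n) = Θ(n²). A short sketch:

```python
def T(n):
    """Example worst-case step count from the text."""
    return 1.62 * n**2 + 3.5 * n + 8

# The ratio T(n) / n^2 approaches 1.62 as n grows, so by the limit
# criterion T(n) = Theta(n^2): low-order terms and constants wash out.
ratios = [T(10**k) / (10**k) ** 2 for k in range(1, 7)]
assert all(r > ratios[-1] for r in ratios[:-1])  # ratios decrease toward the limit
assert abs(ratios[-1] - 1.62) < 1e-4
```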
Example: O, Ω, Θ
- Q: T(n) = 1.62n² + 3.5n + 8: true or false?
  1. T(n) = O(n)
  2. T(n) = O(n²)
  3. T(n) = O(n³)
  4. T(n) = Ω(n)
  5. T(n) = Ω(n²)
  6. T(n) = Ω(n³)
  7. T(n) = Θ(n)
  8. T(n) = Θ(n²)
  9. T(n) = Θ(n³)
- A: True for 2, 3, 4, 5, and 8; false for 1, 6, 7, and 9. T(n) is Θ(n²), hence also O(n³) and Ω(n); it grows too fast to be O(n) or Θ(n), and too slowly to be Ω(n³) or Θ(n³).
  - In practice, we want an O(·) bound as tight as possible, because we seek an intrinsic running time.
Intrinsic computational tractability
O, W, Q
Asymptotic Order of Growth
Complexity
13
IRIS H.-R. JIANG
Intrinsic Computational Tractability
¨ Intrinsic computational tractability: An algorithm’s worst-case
running time on inputs of size n grows at a rate that is at most
proportional to some function f(n).
¤ f(n): an upper bound of the running time of the algorithm
¨ Q: What’s wrong with 1.62n2 + 3.5n + 8 steps?
¨ A: We’d like to say it grows like n2 up to constant factors
¤ Too detailed
¤ Meaningless
¤ Hard to classify its efficiency
n Our ultimate goal is to identify broad classes of algorithms that
have similar behavior.
n We’d actually like to classify running times at a coarser level of
granularity so that similarities among different algorithms, and
among different problems, show up more clearly.
Complexity
14
Insensitive to constant factors
and low-order terms
IRIS H.-R. JIANG
O, W, Q
¨ Let T(n) be a function to describe the worst-case running time
of a certain algorithm on an input of size n.
¨ Asymptotic upper bound: T(n) = O(f(n)) if there exist constants
c > 0 and n0 ³ 0 such that for all n ³ n0 we have T(n) £ cf(n).
¨ Asymptotic lower bound: T(n) = W(f(n)) if there exist constants
c > 0 and n0 ³ 0 such that for all n ³ n0 we have T(n) ³ cf(n).
¨ Asymptotic tight bound: T(n) = Q(f(n)) if T(n) is both O(f(n)) and
W(f(n)).
¤ Also, we can prove:
Let f and g be two functions that
exists and is equal to some number c > 0.
Then f(n) = Q(g(n)).
Complexity
15
)
(
)
(
lim
n
g
n
f
n ¥
®
IRIS H.-R. JIANG
Example: O, W, Q
¨ Q: T(n) = 1.62n2 + 3.5n + 8, true or false?
1. T(n) = O(n)
2. T(n) = O(n2)
3. T(n) = O(n3)
4. T(n) = W(n)
5. T(n) = W(n2)
6. T(n) = W(n3)
7. T(n) = Q(n)
8. T(n) = Q(n2)
9. T(n) = Q(n3)
¨ A:
Complexity
16
In practice, we
expect an O(.) as
tight as possible
because we seek an
intrinsic running time.
Intrinsic computational tractability
O, W, Q
Asymptotic Order of Growth
Complexity
13
IRIS H.-R. JIANG
Intrinsic Computational Tractability
¨ Intrinsic computational tractability: An algorithm’s worst-case
running time on inputs of size n grows at a rate that is at most
proportional to some function f(n).
¤ f(n): an upper bound of the running time of the algorithm
¨ Q: What’s wrong with 1.62n2 + 3.5n + 8 steps?
¨ A: We’d like to say it grows like n2 up to constant factors
¤ Too detailed
¤ Meaningless
¤ Hard to classify its efficiency
n Our ultimate goal is to identify broad classes of algorithms that
have similar behavior.
n We’d actually like to classify running times at a coarser level of
granularity so that similarities among different algorithms, and
among different problems, show up more clearly.
Complexity
14
Insensitive to constant factors
and low-order terms
IRIS H.-R. JIANG
O, W, Q
¨ Let T(n) be a function to describe the worst-case running time
of a certain algorithm on an input of size n.
¨ Asymptotic upper bound: T(n) = O(f(n)) if there exist constants
c > 0 and n0 ³ 0 such that for all n ³ n0 we have T(n) £ cf(n).
¨ Asymptotic lower bound: T(n) = W(f(n)) if there exist constants
c > 0 and n0 ³ 0 such that for all n ³ n0 we have T(n) ³ cf(n).
¨ Asymptotic tight bound: T(n) = Q(f(n)) if T(n) is both O(f(n)) and
W(f(n)).
¤ Also, we can prove:
Let f and g be two functions that
exists and is equal to some number c > 0.
Then f(n) = Q(g(n)).
Complexity
15
)
(
)
(
lim
n
g
n
f
n ¥
®
IRIS H.-R. JIANG
Example: O, W, Q
¨ Q: T(n) = 1.62n2 + 3.5n + 8, true or false?
1. T(n) = O(n)
2. T(n) = O(n2)
3. T(n) = O(n3)
4. T(n) = W(n)
5. T(n) = W(n2)
6. T(n) = W(n3)
7. T(n) = Q(n)
8. T(n) = Q(n2)
9. T(n) = Q(n3)
¨ A:
Complexity
16
In practice, we
expect an O(.) as
tight as possible
because we seek an
intrinsic running time.
Intrinsic computational tractability
O, W, Q
Asymptotic Order of Growth
Complexity
13
IRIS H.-R. JIANG
Intrinsic Computational Tractability
¨ Intrinsic computational tractability: An algorithm’s worst-case
running time on inputs of size n grows at a rate that is at most
proportional to some function f(n).
¤ f(n): an upper bound of the running time of the algorithm
¨ Q: What’s wrong with 1.62n2 + 3.5n + 8 steps?
¨ A: We’d like to say it grows like n2 up to constant factors
¤ Too detailed
¤ Meaningless
¤ Hard to classify its efficiency
n Our ultimate goal is to identify broad classes of algorithms that
have similar behavior.
n We’d actually like to classify running times at a coarser level of
granularity so that similarities among different algorithms, and
among different problems, show up more clearly.
Complexity
14
Insensitive to constant factors
and low-order terms
IRIS H.-R. JIANG
O, W, Q
¨ Let T(n) be a function describing the worst-case running time
of a certain algorithm on an input of size n.
¨ Asymptotic upper bound: T(n) = O(f(n)) if there exist constants
c > 0 and n0 ≥ 0 such that for all n ≥ n0 we have T(n) ≤ c·f(n).
¨ Asymptotic lower bound: T(n) = Ω(f(n)) if there exist constants
c > 0 and n0 ≥ 0 such that for all n ≥ n0 we have T(n) ≥ c·f(n).
¨ Asymptotic tight bound: T(n) = Θ(f(n)) if T(n) is both O(f(n)) and
Ω(f(n)).
¤ Also, we can prove:
Let f and g be two functions such that lim_{n→∞} f(n)/g(n)
exists and is equal to some number c > 0.
Then f(n) = Θ(g(n)).
Complexity
15
IRIS H.-R. JIANG
Example: O, Ω, Θ
¨ Q: T(n) = 1.62n^2 + 3.5n + 8, true or false?
1. T(n) = O(n)
2. T(n) = O(n^2)
3. T(n) = O(n^3)
4. T(n) = Ω(n)
5. T(n) = Ω(n^2)
6. T(n) = Ω(n^3)
7. T(n) = Θ(n)
8. T(n) = Θ(n^2)
9. T(n) = Θ(n^3)
¨ A: 2, 3, 4, 5, and 8 are true; the rest are false, since T(n) = Θ(n^2).
Complexity
16
In practice, we expect an O(·) bound that is as tight as possible, because we seek the intrinsic running time.
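The definition above can be exercised numerically. The following is a minimal Python sketch (mine, not from the slides) checking one concrete witness pair for T(n) = 1.62n^2 + 3.5n + 8 = O(n^2): since 1.62n^2 + 3.5n + 8 ≤ (1.62 + 3.5 + 8)n^2 = 13.12n^2 for all n ≥ 1, the constants c = 14 and n0 = 1 certify the bound.

```python
# Checking the upper-bound definition with one concrete witness pair:
# for n >= 1, 1.62 n^2 + 3.5 n + 8 <= (1.62 + 3.5 + 8) n^2 = 13.12 n^2,
# so c = 14 and n0 = 1 certify T(n) = O(n^2).
def T(n):
    return 1.62 * n * n + 3.5 * n + 8

c, n0 = 14, 1
assert all(T(n) <= c * n * n for n in range(n0, 10_000))
```

Of course, a finite check is not a proof; the inequality chain in the lead-in is what actually establishes the bound.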
IRIS H.-R. JIANG
Abuse of Notation
¨ Q: Why use equality in T(n) = O(f(n))?
¤ It is asymmetric:
n f(n) = 5n^3; g(n) = 3n^2
n f(n) = O(n^3) = g(n)
n but f(n) ≠ g(n).
¤ Better notation: T(n) ∈ O(f(n))
n O(f(n)) forms a set
¨ Cf. "is" in English
¤ Aristotle is a man, but a man isn't necessarily Aristotle.
Complexity
17
IRIS H.-R. JIANG
Properties
¨ Transitivity:
If f(n) = P(g(n)) and g(n) = P(h(n)), then f(n) = P(h(n)),
where P = O, Ω, or Θ.
¨ Rule of sums:
f(n) + g(n) = P(max{f(n), g(n)}), where P = O, Ω, or Θ.
¨ Rule of products:
If f1(n) = P(g1(n)) and f2(n) = P(g2(n)),
then f1(n)f2(n) = P(g1(n)g2(n)), where P = O, Ω, or Θ.
¨ Transpose symmetry:
f(n) = O(g(n)) iff g(n) = Ω(f(n)).
¨ Reflexivity:
f(n) = P(f(n)), where P = O, Ω, or Θ.
¨ Symmetry:
f(n) = Θ(g(n)) iff g(n) = Θ(f(n)).
Complexity
18
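These rules can be sanity-checked numerically. Below is a small Python sketch (mine, not from the slides) illustrating the rule of sums with f(n) = n^2 and g(n) = n lg n: since f dominates, f(n) + g(n) = Θ(max{f(n), g(n)}) = Θ(n^2), and the ratio stays between the constants 1 and 2.

```python
import math

# Rule of sums, checked numerically: for f(n) = n^2 and g(n) = n * lg n,
# f(n) + g(n) = Theta(max{f(n), g(n)}) = Theta(n^2); the ratio
# (f + g) / max(f, g) stays between the constants 1 and 2 for n >= 2.
def sum_ratio(n):
    f, g = n * n, n * math.log2(n)
    return (f + g) / max(f, g)

assert all(1 <= sum_ratio(n) <= 2 for n in range(2, 10_000))
```

Again, the finite check only illustrates the rule; the proof is that g(n)/f(n) = lg n / n ≤ 1 for n ≥ 2.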
IRIS H.-R. JIANG
Summary: Polynomial-Time Complexity
¨ Polynomial running time: O(p(n))
¤ p(n) is a polynomial function of the input size n (p(n) = n^O(1))
¤ a.k.a. polynomial-time complexity
¨ Order
¤ O(1): constant
¤ O(log n): logarithmic
¤ O(n^0.5): sublinear
¤ O(n): linear
¤ O(n log n): loglinear
¤ O(n^2): quadratic
¤ O(n^3): cubic
¤ O(n^4): quartic
¤ O(2^n): exponential
¤ O(n!): factorial
¤ O(n^n)
Complexity
19
(running times listed from faster to slower)
Arrays and lists
Implementing Gale-Shapley Algorithm
Complexity
20
IRIS H.-R. JIANG
Recap: Gale-Shapley Algorithm
¨ Correctness:
¤ Termination: G-S terminates after at most n^2 iterations.
¤ Perfection: Everyone gets married.
¤ Stability: The marriages are stable.
¨ Male-optimal and female-pessimal
¨ All executions yield the same matching
Complexity
21
O(n^2)?
Gale-Shapley
1. initialize each person to be free
2. while (some man m is free and hasn't proposed to every woman) do
3. w = highest ranked woman in m's list to whom m has not yet proposed
4. if (w is free) then
5. (m, w) become engaged
6. else if (w prefers m to her fiancé m') then
7. (m, w) become engaged
8. m' becomes free
9. return the set S of engaged pairs
IRIS H.-R. JIANG
What to Do?
¨ Goal: Each iteration takes O(1) time, giving O(n^2) in total.
¤ Line 2: Identify a free man.
¤ Line 3: For a man m, identify the highest ranked woman to whom he
has not yet proposed.
¤ Line 4: For a woman w, decide if w is currently engaged, and if
so, identify her current partner.
¤ Line 6: For a woman w and two men m and m', decide which of m
and m' is preferred by w.
Complexity
22
Gale-Shapley
1. initialize each person to be free
2. while (some man m is free and hasn't proposed to every woman) do
3. w = highest ranked woman in m's list to whom m has not yet proposed
4. if (w is free) then
5. (m, w) become engaged
6. else if (w prefers m to her fiancé m') then
7. (m, w) become engaged
8. m' becomes free
9. return the set S of engaged pairs
IRIS H.-R. JIANG
Representing Men and Women
¨ Q: How to represent men and women?
¨ A: Assume the sets of men and women are both {1, …, n}.
¤ Record them in two arrays
¨ Q: How to store men's preference lists?
¨ A: Record them in an n×n array (or n arrays, each of length n)
Complexity
23
[Figure: men and women each numbered 1-3 (Xavier, Yancey, Zeus; Amy, Bertha, Clare); the men's and women's preference profiles are stored in n×n arrays. ManPref[m, i]: the ith woman on m's list.]
IRIS H.-R. JIANG
Identifying a Free Man
¨ Q: Line 2: How to identify a free man in O(1) time?
¨ A:
¤ Since the set of free men is dynamic, a static array is not good for
insertion/deletion.
¤ How about a linked list? Its first element can be accessed in O(1)
time, and the list grows and shrinks dynamically.
n Read/insert/delete from first
Complexity
24
[Figure: a singly linked list 1 → 2 → 3 → 0 (null), each node holding data and a link (pointer), with a head pointer first; element 1 is deleted from and inserted at the front.]
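As a concrete sketch of the free-man pool, Python's collections.deque offers the same O(1) operations at the front as the linked list on the slide (this is my illustration, not the course's reference code):

```python
from collections import deque

# A deque behaves like the slide's linked list: O(1) to read, insert,
# or delete at the front ("first").
free_men = deque([1, 2, 3])
m = free_men.popleft()   # delete from first: m is now man 1
free_men.appendleft(m)   # insert at first: the list is 1, 2, 3 again
```

A plain Python list would also work but costs O(n) per insertion/deletion at the front, which is exactly why the slide rejects a static array.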
IRIS H.-R. JIANG
Whom to Propose?
¨ Q: Line 3: For a man m, how to identify in O(1) time the highest
ranked woman to whom he has not yet proposed?
¨ A:
¤ An extra array Next[] indicates for each man m the position of the
next woman he will propose to on his list.
n m will propose to w = ManPref[m, Next[m]]
n Next[m]++ after proposal, regardless of rejection/acceptance
Complexity
25
[Figure: the array Next[] for men Xavier, Yancey, Zeus (IDs 1-3), shown beside ManPref[ , ]; initially Next = 1, 1, 1, and after some proposals, e.g., Next = 2, 2, 3.]
IRIS H.-R. JIANG
Identifying Her Fiancé
¨ Q: Line 4: For a woman w, how to decide if w is currently
engaged? If so, how to identify her current fiancé in O(1) time?
¨ A:
¤ An array Current[] of length n.
n w’s current fiancé = Current[w]
n Initially, Current[w] = 0 (unmatched)
Complexity
26
[Figure: the array Current[] for women Amy, Bertha, Clare (IDs 1-3), shown beside WomanPref[ , ]; initially Current = 0, 0, 0 (all unmatched), and in progress, e.g., Current = 1, 3, 0.]
IRIS H.-R. JIANG
Women’s Preferences (1/2)
¨ Q: Line 6: For a woman w and two men m and m', how to decide
whether w prefers m or m' in O(1) time?
¨ A:
¤ An array WomanPref[w, i] is analogous to ManPref[m, i]
¨ Q: Does it work? Why?
¨ A:
¤ No: extracting a man's rank requires scanning WomanPref[ , ], which takes O(n) time
¤ e.g., when man 1 proposes to woman 1, we must scan her list to locate the ranks of man 1 and her current fiancé
Complexity
27
[Figure: woman Amy (ID 1), Current = 6; her preference row WomanPref[1, ·] = 3, 1, 4, 5, 6, 2 (1st to 6th).]
IRIS H.-R. JIANG
Women’s Preferences (2/2)
¨ Q: Can we do better?
¨ A: Yes.
¤ Create an n×n array Ranking[ , ] at the beginning.
¤ Ranking[w, m] = the rank of m in w's preference list (the inverse of WomanPref)
¤ When man 1 proposes to woman 1, compare the two ranks in O(1) time.
¤ Construct Ranking[ , ] while parsing the input, in O(n^2) total time.
Complexity
28
[Figure: woman Amy (ID 1), Current = 6; WomanPref[1, ·] = 3, 1, 4, 5, 6, 2 (1st to 6th), and the inverse array Ranking[1, ·] = 2, 6, 1, 3, 4, 5, indexed by man 1 to 6.]
for (i = 1 to n) do
Ranking[w, WomanPref[w, i]] = i
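Putting the pieces together, here is a Python sketch of the O(n^2) implementation: ManPref/WomanPref become lists of lists, Next, Current, and the inverse Ranking array are built as on the slides, and a deque stands in for the linked list of free men. This is my illustration under 0-based indexing; the preference lists in the usage example are made up, not the slides' instance.

```python
from collections import deque

def gale_shapley(man_pref, woman_pref):
    """Stable matching in O(n^2) total time. man_pref[m] / woman_pref[w]
    list the other side (IDs 0..n-1) in decreasing order of preference.
    Returns current, where current[w] is the man matched to woman w."""
    n = len(man_pref)
    # Ranking[w][m]: position of man m in w's list, built in O(n^2),
    # so each "whom does w prefer?" comparison is O(1).
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for i, m in enumerate(woman_pref[w]):
            ranking[w][m] = i
    nxt = [0] * n          # Next[m]: index of the next woman m proposes to
    current = [None] * n   # Current[w]: w's fiance (None = unmatched)
    free_men = deque(range(n))   # stands in for the linked list of free men
    while free_men:
        m = free_men[0]
        w = man_pref[m][nxt[m]]  # highest-ranked woman not yet proposed to
        nxt[m] += 1              # advance regardless of the outcome
        if current[w] is None:                        # w is free: engage
            current[w] = free_men.popleft()
        elif ranking[w][m] < ranking[w][current[w]]:  # w trades up
            free_men.popleft()
            free_men.append(current[w])  # her old fiance becomes free
            current[w] = m
        # else: w rejects m; he stays free and proposes again next round
    return current

# Hypothetical 3x3 instance (made up for illustration):
man_pref = [[0, 1, 2], [1, 0, 2], [0, 1, 2]]
woman_pref = [[1, 0, 2], [0, 1, 2], [0, 1, 2]]
match = gale_shapley(man_pref, woman_pref)   # match[w] = husband of woman w
```

Each while-iteration does O(1) work, and each man proposes at most n times, giving the O(n^2) bound from the termination argument on slide 21.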
IRIS H.-R. JIANG
What Did We Learn?
¨ The meaning of efficiency
¤ Efficiency ⇔ polynomial-time complexity
¨ The analysis of an algorithm
¤ We try to figure out the intrinsic running time before
implementing.
¤ Data structures play an important role; if necessary, preprocess
the data into a proper form.
Complexity
29
IRIS H.-R. JIANG
Running Times Comparison
¨ Q: Arrange the following list of functions in ascending order of
growth rate.
¤ f1(n) = n^(1/3)
¤ f2(n) = lg n (lg = log2)
¤ f3(n) = 2^√(lg n)
¨ A: Convert them into a common form:
¤ Take logarithms!
¤ Let z = lg n
¤ lg f1(n) = lg n^(1/3) = (1/3) lg n = (1/3)z
¤ lg f2(n) = lg(lg n) = lg z
¤ lg f3(n) = lg(2^√(lg n)) = √(lg n) = √z
¤ DIY the rest! (see Solved Exercise 1)
Complexity
30
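The take-the-logarithm trick is easy to check numerically; the sketch below (mine, not from the slides) evaluates the three log-transformed functions at a large n and shows the ascending order f2, f3, f1, since lg z < √z < z/3 for large z.

```python
import math

def log_growth(n):
    """Return (lg f1, lg f2, lg f3) for f1 = n^(1/3), f2 = lg n,
    f3 = 2^sqrt(lg n), using the substitution z = lg n."""
    z = math.log2(n)
    return (z / 3, math.log2(z), math.sqrt(z))

lg_f1, lg_f2, lg_f3 = log_growth(2 ** 64)
# z = 64 here, so lg f2 = 6 < lg f3 = 8 < lg f1 = 64/3:
# ascending order of growth is f2, f3, f1.
```

One numeric check only suggests the ordering; comparing (1/3)z, lg z, and √z as z → ∞ (Solved Exercise 1) proves it.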
Optimization of power systems - old and new tools
 
Tools for Discrete Time Control; Application to Power Systems
Tools for Discrete Time Control; Application to Power SystemsTools for Discrete Time Control; Application to Power Systems
Tools for Discrete Time Control; Application to Power Systems
 
Algorithm Analysis.pdf
Algorithm Analysis.pdfAlgorithm Analysis.pdf
Algorithm Analysis.pdf
 
daa unit 1.pptx
daa unit 1.pptxdaa unit 1.pptx
daa unit 1.pptx
 
Using SigOpt to Tune Deep Learning Models with Nervana Cloud
Using SigOpt to Tune Deep Learning Models with Nervana CloudUsing SigOpt to Tune Deep Learning Models with Nervana Cloud
Using SigOpt to Tune Deep Learning Models with Nervana Cloud
 
VCE Unit 01 (1).pptx
VCE Unit 01 (1).pptxVCE Unit 01 (1).pptx
VCE Unit 01 (1).pptx
 
DATA STRUCTURE.pdf
DATA STRUCTURE.pdfDATA STRUCTURE.pdf
DATA STRUCTURE.pdf
 
DATA STRUCTURE
DATA STRUCTUREDATA STRUCTURE
DATA STRUCTURE
 

Más de HsuChi Chen

algorithm_6dynamic_programming.pdf
algorithm_6dynamic_programming.pdfalgorithm_6dynamic_programming.pdf
algorithm_6dynamic_programming.pdfHsuChi Chen
 
algorithm_1introdunction.pdf
algorithm_1introdunction.pdfalgorithm_1introdunction.pdf
algorithm_1introdunction.pdfHsuChi Chen
 
algorithm_9linear_programming.pdf
algorithm_9linear_programming.pdfalgorithm_9linear_programming.pdf
algorithm_9linear_programming.pdfHsuChi Chen
 
algorithm_4greedy_algorithms.pdf
algorithm_4greedy_algorithms.pdfalgorithm_4greedy_algorithms.pdf
algorithm_4greedy_algorithms.pdfHsuChi Chen
 
algorithm_3graphs.pdf
algorithm_3graphs.pdfalgorithm_3graphs.pdf
algorithm_3graphs.pdfHsuChi Chen
 
algorithm_7network_flow.pdf
algorithm_7network_flow.pdfalgorithm_7network_flow.pdf
algorithm_7network_flow.pdfHsuChi Chen
 
algorithm_8beyond_polynomial_running_times.pdf
algorithm_8beyond_polynomial_running_times.pdfalgorithm_8beyond_polynomial_running_times.pdf
algorithm_8beyond_polynomial_running_times.pdfHsuChi Chen
 
algorithm_5divide_and_conquer.pdf
algorithm_5divide_and_conquer.pdfalgorithm_5divide_and_conquer.pdf
algorithm_5divide_and_conquer.pdfHsuChi Chen
 
algorithm_0introdunction.pdf
algorithm_0introdunction.pdfalgorithm_0introdunction.pdf
algorithm_0introdunction.pdfHsuChi Chen
 
期末專題報告書
期末專題報告書期末專題報告書
期末專題報告書HsuChi Chen
 
單晶片期末專題-報告二
單晶片期末專題-報告二單晶片期末專題-報告二
單晶片期末專題-報告二HsuChi Chen
 
單晶片期末專題-報告一
單晶片期末專題-報告一單晶片期末專題-報告一
單晶片期末專題-報告一HsuChi Chen
 
模組化課程生醫微製程期末報告
模組化課程生醫微製程期末報告模組化課程生醫微製程期末報告
模組化課程生醫微製程期末報告HsuChi Chen
 
單晶片期末專題-初步想法
單晶片期末專題-初步想法單晶片期末專題-初步想法
單晶片期末專題-初步想法HsuChi Chen
 

Más de HsuChi Chen (14)

algorithm_6dynamic_programming.pdf
algorithm_6dynamic_programming.pdfalgorithm_6dynamic_programming.pdf
algorithm_6dynamic_programming.pdf
 
algorithm_1introdunction.pdf
algorithm_1introdunction.pdfalgorithm_1introdunction.pdf
algorithm_1introdunction.pdf
 
algorithm_9linear_programming.pdf
algorithm_9linear_programming.pdfalgorithm_9linear_programming.pdf
algorithm_9linear_programming.pdf
 
algorithm_4greedy_algorithms.pdf
algorithm_4greedy_algorithms.pdfalgorithm_4greedy_algorithms.pdf
algorithm_4greedy_algorithms.pdf
 
algorithm_3graphs.pdf
algorithm_3graphs.pdfalgorithm_3graphs.pdf
algorithm_3graphs.pdf
 
algorithm_7network_flow.pdf
algorithm_7network_flow.pdfalgorithm_7network_flow.pdf
algorithm_7network_flow.pdf
 
algorithm_8beyond_polynomial_running_times.pdf
algorithm_8beyond_polynomial_running_times.pdfalgorithm_8beyond_polynomial_running_times.pdf
algorithm_8beyond_polynomial_running_times.pdf
 
algorithm_5divide_and_conquer.pdf
algorithm_5divide_and_conquer.pdfalgorithm_5divide_and_conquer.pdf
algorithm_5divide_and_conquer.pdf
 
algorithm_0introdunction.pdf
algorithm_0introdunction.pdfalgorithm_0introdunction.pdf
algorithm_0introdunction.pdf
 
期末專題報告書
期末專題報告書期末專題報告書
期末專題報告書
 
單晶片期末專題-報告二
單晶片期末專題-報告二單晶片期末專題-報告二
單晶片期末專題-報告二
 
單晶片期末專題-報告一
單晶片期末專題-報告一單晶片期末專題-報告一
單晶片期末專題-報告一
 
模組化課程生醫微製程期末報告
模組化課程生醫微製程期末報告模組化課程生醫微製程期末報告
模組化課程生醫微製程期末報告
 
單晶片期末專題-初步想法
單晶片期末專題-初步想法單晶片期末專題-初步想法
單晶片期末專題-初步想法
 

Último

The Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdf
The Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdfThe Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdf
The Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdfkalichargn70th171
 
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer DataAdobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer DataBradBedford3
 
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...Christina Lin
 
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...MyIntelliSource, Inc.
 
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...soniya singh
 
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASEBATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASEOrtus Solutions, Corp
 
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed DataAlluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed DataAlluxio, Inc.
 
Cloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackCloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackVICTOR MAESTRE RAMIREZ
 
Asset Management Software - Infographic
Asset Management Software - InfographicAsset Management Software - Infographic
Asset Management Software - InfographicHr365.us smith
 
Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)OPEN KNOWLEDGE GmbH
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideChristina Lin
 
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...ICS
 
Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...OnePlan Solutions
 
HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comFatema Valibhai
 
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...gurkirankumar98700
 
DNT_Corporate presentation know about us
DNT_Corporate presentation know about usDNT_Corporate presentation know about us
DNT_Corporate presentation know about usDynamic Netsoft
 
Salesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantSalesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantAxelRicardoTrocheRiq
 
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...kellynguyen01
 
What is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWhat is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWave PLM
 
Call Girls in Naraina Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Naraina Delhi 💯Call Us 🔝8264348440🔝Call Girls in Naraina Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Naraina Delhi 💯Call Us 🔝8264348440🔝soniya singh
 

Último (20)

The Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdf
The Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdfThe Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdf
The Essentials of Digital Experience Monitoring_ A Comprehensive Guide.pdf
 
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer DataAdobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
Adobe Marketo Engage Deep Dives: Using Webhooks to Transfer Data
 
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
 
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
 
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
 
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASEBATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
BATTLEFIELD ORM: TIPS, TACTICS AND STRATEGIES FOR CONQUERING YOUR DATABASE
 
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed DataAlluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
Alluxio Monthly Webinar | Cloud-Native Model Training on Distributed Data
 
Cloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackCloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStack
 
Asset Management Software - Infographic
Asset Management Software - InfographicAsset Management Software - Infographic
Asset Management Software - Infographic
 
Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)Der Spagat zwischen BIAS und FAIRNESS (2024)
Der Spagat zwischen BIAS und FAIRNESS (2024)
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
 
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
 
Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...Advancing Engineering with AI through the Next Generation of Strategic Projec...
Advancing Engineering with AI through the Next Generation of Strategic Projec...
 
HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.com
 
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
 
DNT_Corporate presentation know about us
DNT_Corporate presentation know about usDNT_Corporate presentation know about us
DNT_Corporate presentation know about us
 
Salesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantSalesforce Certified Field Service Consultant
Salesforce Certified Field Service Consultant
 
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
 
What is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWhat is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need It
 
Call Girls in Naraina Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Naraina Delhi 💯Call Us 🔝8264348440🔝Call Girls in Naraina Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Naraina Delhi 💯Call Us 🔝8264348440🔝
 

algorithm_2algorithm_analysis.pdf

  • 1. Iris Hui-Ru Jiang, Fall 2014. CHAPTER 2: ALGORITHM ANALYSIS
  • 2. Outline
    - Content: computational tractability; asymptotic order of growth; implementing the Gale-Shapley algorithm; survey of common running times (self-study); priority queues (postponed to Section 4.5)
    - Reading: Chapter 2
  • 3. Computational Efficiency
    - Q: What is a good algorithm?
    - A: One that is correct (with proofs) and efficient (runs quickly and consumes little memory). Efficiency ⇔ resource requirements ⇔ complexity.
    - Q: How do the resource requirements scale with increasing input size?
      - Time: running time
      - Space: memory usage
  • 4. Running Time
    - "As soon as an Analytical Engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise — by what course of calculation can these results be arrived at by the machine in the shortest time?" -- Charles Babbage (1864)
    - In other words: how many times do you have to turn the crank?
  • 5. Computational Tractability: defining efficiency
  • 6. Proposed Definition 1 (1/2)
    - Q: How should we turn the fuzzy notion of an "efficient" algorithm into something more concrete?
    - A: Proposed definition 1: An algorithm is efficient if, when implemented, it runs quickly on real input instances.
    - Q: What's wrong with this? A: Crucial things are missing:
      - Where and how well do we implement the algorithm? Bad algorithms can run quickly on small cases on fast machines.
      - What is a "real" input instance? We don't know the full range of input instances that arise in practice.
      - How well, or badly, does the algorithm scale as problem sizes grow to unexpected levels? Two algorithms may perform comparably on inputs of size 100, yet after multiplying the input size tenfold, one still runs quickly while the other consumes a huge amount of time.
  • 7. Proposed Definition 1 (2/2)
    - Q: What are we really asking for? A: A concrete definition of efficiency that is platform-independent, instance-independent, and of predictive value with respect to increasing input sizes.
    - E.g., the stable matching problem: the input size N is the total size of the preference lists. With n men and n women there are 2n lists, each of length n, so N = 2n². Describe an algorithm at a high level, then analyze its running time mathematically as a function of N.
  • 8. Proposed Definition 2 (1/2)
    - We will focus on the worst-case running time: a bound on the largest possible running time the algorithm could have over all inputs of a given size N.
    - Q: How about average-case analysis, i.e., performance averaged over random instances? A: But how do we generate a random input? We don't know the full range of input instances, and an algorithm can be good on one distribution and poor on another.
    - Q: Relative to what can we call a running-time bound good? A: Compare it with brute-force search over the solution space: try all possibilities and see if any one of them works.
    - One thing these problems have in common: a compact input representation implicitly specifies a giant search space ⇒ brute force is too slow! It typically takes 2^N time or worse for inputs of size N, which is too naïve to be acceptable in practice.
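The brute-force baseline from slide 8 can be made concrete for stable matching. The following is an illustrative Python sketch, not the book's algorithm: the function names (`is_stable`, `brute_force_stable`) and the tiny preference lists are invented for the example. It enumerates all n! perfect matchings, which is exactly the search-space blowup the slide warns about.

```python
from itertools import permutations

def is_stable(matching, men_pref, women_pref):
    """True iff no man m and woman w prefer each other to their
    assigned partners. matching maps each man to his woman."""
    # w_rank[w][m] = position of m in w's list (lower = preferred)
    w_rank = {w: {m: i for i, m in enumerate(prefs)}
              for w, prefs in women_pref.items()}
    partner_of = {w: m for m, w in matching.items()}
    for m, prefs in men_pref.items():
        for w in prefs:
            if w == matching[m]:
                break  # m ranks everyone below his own partner from here on
            # m prefers w to his partner; instability if w prefers m too
            if w_rank[w][m] < w_rank[w][partner_of[w]]:
                return False
    return True

def brute_force_stable(men_pref, women_pref):
    """Try all n! perfect matchings; return the first stable one."""
    men = list(men_pref)
    women = list(women_pref)
    for perm in permutations(women):
        matching = dict(zip(men, perm))
        if is_stable(matching, men_pref, women_pref):
            return matching
    return None

men_pref = {"A": ["x", "y"], "B": ["y", "x"]}
women_pref = {"x": ["B", "A"], "y": ["A", "B"]}
print(brute_force_stable(men_pref, women_pref))  # prints {'A': 'x', 'B': 'y'}
```

With 2n people of each side this loop inspects up to n! candidates, while Gale-Shapley (implemented later in this chapter) needs only O(n²) proposals.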
  • 9. Proposed Definition 2 (2/2)
    - Proposed definition 2: An algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search.
    - Q: What counts as qualitatively better performance? A: We consider the actual running time of algorithms more carefully and try to quantify what a reasonable running time would be.
  • 10. Proposed Definition 3 (1/3)
    - We expect a good algorithm to have a good scaling property: e.g., when the input size doubles, the algorithm should slow down by only some constant factor C.
    - Polynomial running time: there exist constants c > 0 and d > 0 such that on every input of size N, the running time is bounded by c·N^d primitive computational steps. The running time is then proportional to N^d, and a smaller d gives better scalability. An algorithm has a polynomial running time if this bound holds for some c and d.
    - Q: What is a primitive computational step? A: Each step corresponds to a single assembly-language instruction on a standard processor, or to one line of a standard programming language; it is simple enough to execute in time independent of the input size N.
  • 11. Proposed Definition 3 (2/3)
    - Proposed definition 3: An algorithm is efficient if it has a polynomial running time.
    - Justification: it really works in practice! The polynomial-time algorithms that people develop almost always have low constants and low exponents, and breaking through the exponential barrier of brute force typically exposes some crucial structure of the problem.
    - Exceptions:
      - Some polynomial-time algorithms have high constants and/or exponents and are useless in practice. For example, a running time of 6.02×10²³ · N²⁰ is technically polynomial but useless.
      - Some exponential-time (or worse) algorithms are widely used because worst-case instances seem to be rare, so they may run quickly on average cases (e.g., the simplex method, Unix grep).
  • 12. Proposed Definition 3 (3/3)
    - An algorithm is efficient if it has a polynomial running time: it really works in practice, the gulf between the growth rates of polynomial and exponential functions is enormous, and it allows us to ask about the existence or nonexistence of efficient algorithms as a well-defined question.
    - Running times (rounded up) of different algorithms on inputs of increasing size, for a processor performing a million high-level instructions per second; "very long" means more than 10²⁵ years. The first four columns are polynomial time, the last three non-polynomial:

          n         | n       | n log₂ n | n²      | n³         | 1.5ⁿ       | 2ⁿ        | n!
          10        | < 1 sec | < 1 sec  | < 1 sec | < 1 sec    | < 1 sec    | < 1 sec   | 4 sec
          30        | < 1 sec | < 1 sec  | < 1 sec | < 1 sec    | < 1 sec    | 18 min    | 10²⁵ yrs
          50        | < 1 sec | < 1 sec  | < 1 sec | < 1 sec    | 11 min     | 36 yrs    | very long
          100       | < 1 sec | < 1 sec  | < 1 sec | 1 sec      | 12,892 yrs | 10¹⁷ yrs  | very long
          1,000     | < 1 sec | < 1 sec  | 1 sec   | 18 min     | very long  | very long | very long
          10,000    | < 1 sec | < 1 sec  | 2 min   | 12 days    | very long  | very long | very long
          100,000   | < 1 sec | 2 sec    | 3 hrs   | 32 yrs     | very long  | very long | very long
          1,000,000 | 1 sec   | 20 sec   | 12 days | 31,710 yrs | very long  | very long | very long
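The scaling property behind polynomial time is easy to check symbolically: if T(N) = c·N^d, then T(2N)/T(N) = 2^d, a constant independent of N. A tiny illustrative Python check (the function `T` and the constants c and d are placeholders, not any particular algorithm):

```python
# If T(n) = c * n**d (polynomial), doubling the input size multiplies
# the running time by the constant factor 2**d, independent of n.
def T(n, c=1.0, d=3):
    return c * n ** d

for n in (100, 1_000, 10_000):
    print(n, T(2 * n) / T(n))  # 8.0 every time, since 2**3 == 8
```

An exponential T(n) = 2^n has no such property: doubling n squares the count 2^n, a factor that itself grows without bound.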
Asymptotic Order of Growth
- Intrinsic computational tractability
- O, Ω, Θ

Intrinsic Computational Tractability
- Intrinsic computational tractability: an algorithm's worst-case running time on inputs of size n grows at a rate that is at most proportional to some function f(n).
  - f(n): an upper bound on the running time of the algorithm
- Q: What's wrong with "1.62n^2 + 3.5n + 8 steps"?
- A: We'd like to say simply that it grows like n^2, up to constant factors; the exact expression is
  - too detailed,
  - meaningless in its precision, and
  - hard to classify by efficiency.
    - Our ultimate goal is to identify broad classes of algorithms that have similar behavior.
    - We'd like to classify running times at a coarser level of granularity, so that similarities among different algorithms, and among different problems, show up more clearly.
- Asymptotic notation is insensitive to constant factors and low-order terms.

O, Ω, Θ
- Let T(n) be a function describing the worst-case running time of a certain algorithm on an input of size n.
- Asymptotic upper bound: T(n) = O(f(n)) if there exist constants c > 0 and n0 ≥ 0 such that for all n ≥ n0 we have T(n) ≤ c·f(n).
- Asymptotic lower bound: T(n) = Ω(f(n)) if there exist constants c > 0 and n0 ≥ 0 such that for all n ≥ n0 we have T(n) ≥ c·f(n).
- Asymptotic tight bound: T(n) = Θ(f(n)) if T(n) is both O(f(n)) and Ω(f(n)).
  - Also, we can prove: let f and g be two functions such that lim_{n→∞} f(n)/g(n) exists and is equal to some number c > 0. Then f(n) = Θ(g(n)).

Example: O, Ω, Θ
- Q: T(n) = 1.62n^2 + 3.5n + 8; true or false?
  1. T(n) = O(n)      4. T(n) = Ω(n)      7. T(n) = Θ(n)
  2. T(n) = O(n^2)    5. T(n) = Ω(n^2)    8. T(n) = Θ(n^2)
  3. T(n) = O(n^3)    6. T(n) = Ω(n^3)    9. T(n) = Θ(n^3)
- A: 2, 3, 4, 5, and 8 are true; 1, 6, 7, and 9 are false.
- In practice, we want an O(·) bound that is as tight as possible, because we seek an intrinsic running time.
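The limit rule for Θ can be checked numerically for the example T(n) = 1.62n^2 + 3.5n + 8. This plain-Python sketch (for illustration only) shows the ratio T(n)/n^2 settling toward the constant 1.62, which is why T(n) = Θ(n^2):

```python
# Running-time function from the example slide.
def T(n):
    return 1.62 * n**2 + 3.5 * n + 8

# As n grows, T(n)/n^2 approaches 1.62: the low-order terms 3.5n and 8
# vanish relative to n^2, so by the limit rule T(n) = Theta(n^2).
ratios = [T(n) / n**2 for n in (10, 1_000, 100_000)]

assert ratios[0] > ratios[1] > ratios[2]      # monotonically approaching 1.62
assert abs(ratios[-1] - 1.62) < 1e-3          # already within 0.001 at n = 10^5
```

The same computation with n^1 or n^3 in the denominator diverges to infinity or collapses to zero, matching the answers above: T(n) is Ω(n) but not Θ(n), and O(n^3) but not Θ(n^3).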
Abuse of Notation
- Q: Why use equality in T(n) = O(f(n))?
  - The notation is asymmetric:
    - f(n) = 5n^3; g(n) = 3n^2
    - f(n) = O(n^3) = g(n)
    - but f(n) ≠ g(n).
  - Better notation: T(n) ∈ O(f(n))
    - O(f(n)) forms a set.
- Cf. "is" in English: Aristotle is a man, but a man isn't necessarily Aristotle.

Properties
- Transitivity: if f(n) = P(g(n)) and g(n) = P(h(n)), then f(n) = P(h(n)), where P = O, Ω, or Θ.
- Rule of sums: f(n) + g(n) = P(max{f(n), g(n)}), where P = O, Ω, or Θ.
- Rule of products: if f1(n) = P(g1(n)) and f2(n) = P(g2(n)), then f1(n)·f2(n) = P(g1(n)·g2(n)), where P = O, Ω, or Θ.
- Transpose symmetry: f(n) = O(g(n)) iff g(n) = Ω(f(n)).
- Reflexivity: f(n) = P(f(n)), where P = O, Ω, or Θ.
- Symmetry: f(n) = Θ(g(n)) iff g(n) = Θ(f(n)).

Summary: Polynomial-Time Complexity
- Polynomial running time: O(p(n))
  - p(n) is a polynomial function of the input size n (p(n) = n^O(1))
  - a.k.a. polynomial-time complexity
- Order, from faster to slower:
  - O(1): constant
  - O(log n): logarithmic
  - O(n^0.5): sublinear
  - O(n): linear
  - O(n log n): loglinear
  - O(n^2): quadratic
  - O(n^3): cubic
  - O(n^4): quartic
  - O(2^n): exponential
  - O(n!): factorial
  - O(n^n)

Implementing Gale-Shapley Algorithm
- Arrays and lists
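The faster-to-slower ordering can be illustrated (not proved) by evaluating each growth class at a single moderate input size. This Python sketch is only a numeric illustration of the list above:

```python
import math

# Growth classes from the summary, evaluated at one moderate size.
n = 20
values = [
    ("O(1)",       1),
    ("O(log n)",   math.log2(n)),
    ("O(n)",       n),
    ("O(n log n)", n * math.log2(n)),
    ("O(n^2)",     n**2),
    ("O(n^3)",     n**3),
    ("O(2^n)",     2**n),
    ("O(n!)",      math.factorial(n)),
]

# Already at n = 20, each class strictly dominates the one before it.
assert all(a[1] < b[1] for a, b in zip(values, values[1:]))
```

At n = 20 the quadratic term is 400 steps while n! exceeds 10^18, which is the same enormity the earlier table expresses in seconds and years.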
Recap: Gale-Shapley Algorithm
- Correctness:
  - Termination: G-S terminates after at most n^2 iterations. (So O(n^2) overall?)
  - Perfection: everyone gets married.
  - Stability: the marriages are stable.
- Male-optimal and female-pessimal.
- All executions yield the same matching.

Gale-Shapley
1. initialize each person to be free
2. while (some man m is free and hasn't proposed to every woman) do
3.   w = highest-ranked woman in m's list to whom m has not yet proposed
4.   if (w is free) then
5.     (m, w) become engaged
6.   else if (w prefers m to her fiancé m') then
7.     (m, w) become engaged
8.     m' becomes free
9. return the set S of engaged pairs

What to Do?
- Goal: make each iteration take O(1) time, and hence O(n^2) in total.
  - Line 2: identify a free man.
  - Line 3: for a man m, identify the highest-ranked woman to whom he hasn't yet proposed.
  - Line 4: for a woman w, decide whether w is currently engaged, and if so, identify her current partner.
  - Line 6: for a woman w and two men m and m', decide which of m and m' is preferred by w.

Representing Men and Women
- Q: How to represent men and women?
- A: Assume the sets of men and women are both {1, ..., n}; record them in two arrays.
- Q: How to store the men's preference lists?
- A: In an n×n array (or n arrays, each of length n): ManPref[m, i] is the ith woman on m's list.
  [Figure: example preference profiles for men Xavier, Yancey, Zeus (IDs 1-3) and women Amy, Bertha, Clare (IDs 1-3).]

Identifying a Free Man
- Q (line 2): How to identify a free man in O(1) time?
- A:
  - Since the set of free men is dynamic, a static array is a poor fit for insertion/deletion.
  - How about a linked list? It can be accessed in O(1) time and allows varying sizes.
    - Read/insert/delete at the head ("first") in O(1).
  [Figure: a singly linked list 1 → 2 → 3 with head pointer "first"; deleting and re-inserting node 1 at the head.]
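As a sketch of the linked-list idea, Python's collections.deque gives O(1) insertion and removal at either end and can stand in for the slide's singly linked list of free men (the variable names here are illustrative, not from the slides):

```python
from collections import deque

n = 3
# Initially every man 1..n is free; the deque plays the role of the
# linked list with head pointer "first" on the slide.
free_men = deque(range(1, n + 1))

m = free_men.popleft()      # line 2: take any free man in O(1)
assert m == 1

free_men.appendleft(m)      # a man who becomes free again re-enters in O(1)
assert list(free_men) == [1, 2, 3]
```

Any structure with O(1) head operations works here; the point is simply that checking "is some man free?" and updating the set must not cost O(n) per iteration.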
Whom to Propose?
- Q (line 3): For a man m, how to identify the highest-ranked woman to whom he hasn't yet proposed, in O(1) time?
- A: Keep an extra array Next[], where Next[m] is the position of the next woman man m will propose to on his list.
  - m will propose to w = ManPref[m, Next[m]].
  - Next[m]++ after the proposal, regardless of rejection or acceptance.

Identifying Her Fiancé
- Q (line 4): For a woman w, how to decide whether w is currently engaged, and if so identify her current fiancé, in O(1) time?
- A: Keep an array Current[] of length n.
  - w's current fiancé = Current[w].
  - Initially, Current[w] = 0 (unmatched).

Women's Preferences (1/2)
- Q (line 6): For a woman w and two men m and m', how to decide whether w prefers m or m', in O(1) time?
- A: An array WomanPref[w, i], analogous to ManPref[m, i].
- Q: Does this work? Why?
- A: No. We must scan WomanPref[w, ] to extract a man's rank, which takes O(n) time.
  - E.g., when man 1 proposes to woman 1, whose list is 5, 6, 2, 3, 1, 4 and whose current fiancé is man 6, we must scan the list to compare men 1 and 6.

Women's Preferences (2/2)
- Q: Can we do better?
- A: Yes.
  - Create an n×n array Ranking[ , ] at the beginning.
  - Ranking[w, m] = the rank of m in w's preference list (the inverse of WomanPref).
  - When man 1 proposes to woman 1, compare Ranking[1, 1] and Ranking[1, 6] in O(1) time.
  - Construct Ranking[ , ] while parsing the input, in O(n^2) time:
      for (i = 1 to n) do Ranking[w, WomanPref[w, i]] = i
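Putting the pieces together, an O(n^2) implementation of Gale-Shapley using the Next, Current, and Ranking ideas might look like the following sketch (Python with 0-based indices; the function name and the tiny example preferences are illustrative):

```python
from collections import deque

def gale_shapley(man_pref, woman_pref):
    """Sketch of the O(n^2) implementation from the slides, 0-based.
    man_pref[m][i]   = the i-th choice woman of man m
    woman_pref[w][i] = the i-th choice man of woman w
    Returns current, where current[w] is the man matched to woman w."""
    n = len(man_pref)
    # Ranking[w][m] = rank of man m in w's list: the inverse of woman_pref,
    # built once in O(n^2) so that each comparison in the loop is O(1).
    ranking = [[0] * n for _ in range(n)]
    for w in range(n):
        for i, m in enumerate(woman_pref[w]):
            ranking[w][m] = i
    nxt = [0] * n             # Next[m]: position of m's next proposal
    current = [None] * n      # Current[w]: w's fiance, None if unmatched
    free_men = deque(range(n))
    while free_men:                        # line 2: some man is free
        m = free_men[0]
        w = man_pref[m][nxt[m]]            # line 3: best woman not yet asked
        nxt[m] += 1                        # advance regardless of outcome
        if current[w] is None:             # line 4: w is free -> engage
            current[w] = m
            free_men.popleft()
        elif ranking[w][m] < ranking[w][current[w]]:   # line 6: w prefers m
            free_men.popleft()
            free_men.append(current[w])    # line 8: old fiance becomes free
            current[w] = m
        # otherwise w rejects m; m stays free and proposes again next round
    return current

# Tiny example (hypothetical preferences for 2 men and 2 women):
match = gale_shapley([[0, 1], [0, 1]], [[1, 0], [0, 1]])
assert match == [1, 0]   # woman 0 gets man 1 (her favorite), woman 1 gets man 0
```

Every bound from the slides is visible here: the deque gives the O(1) free-man operations, nxt plays the role of Next[], and the precomputed ranking table replaces the O(n) scan of WomanPref, so each of the at most n^2 iterations costs O(1).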
IRIS H.-R. JIANG
What Did We Learn?
¨ The meaning of efficiency
¤ Efficiency ⇔ polynomial-time complexity
¨ The analysis of an algorithm
¤ We try to figure out the intrinsic running time before implementing.
¤ Data structures play an important role; if necessary, preprocess the data into a proper form.
Complexity 29

IRIS H.-R. JIANG
Running Times Comparison
¨ Q: Arrange the following functions in ascending order of growth rate.
¤ f1(n) = n^(1/3)
¤ f2(n) = lg n (lg = log2)
¤ f3(n) = 2^√(lg n)
¨ A: Convert them into a common form:
¤ Take the logarithm!
¤ Let z = lg n
¤ lg f1(n) = lg n^(1/3) = (1/3) lg n = (1/3)z
¤ lg f2(n) = lg (lg n) = lg z
¤ lg f3(n) = lg (2^√(lg n)) = √(lg n) = √z
¤ DIY the rest! (see Solved Exercise 1)
Complexity 30
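Finishing the exercise: with z = lg n, the transformed values satisfy lg z < √z < (1/3)z for large z, so f2(n) < f3(n) < f1(n) asymptotically. A quick numeric sanity check (not from the slides; the sample n = 2^900 is arbitrary, chosen large enough that the asymptotic order has kicked in):

```python
import math

n = 2.0 ** 900          # large sample point, z = lg n = 900
z = math.log2(n)
lg_f1 = z / 3           # lg n^(1/3) = (1/3)z
lg_f2 = math.log2(z)    # lg (lg n)  = lg z
lg_f3 = math.sqrt(z)    # lg (2^sqrt(lg n)) = sqrt(z)

# Same order of the logs implies the same order of the functions:
assert lg_f2 < lg_f3 < lg_f1   # f2(n) < f3(n) < f1(n)
```

Taking logarithms preserves the ordering because lg is monotone increasing; comparing lg z, √z, and (1/3)z is far easier than comparing the original functions directly.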