3. Motivation
Big "Oh" notation compares the growth of functions.
Common classes are O(1), O(n), O(n log n), O(n^2), O(2^n).
How does O(n log n) fit, compared to O(n^1.5) or O(n)?
Other authors: the topic is barely addressed in texts.
4. Approximation Technique 1: Integration
Integrate the log function: F(x) = ∫ f(x) dx = ∫ log x dx = x log x − x + C.
Note that log x is still present, so the problem recurs.
Did not pursue further.
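The antiderivative on the slide can be sanity-checked numerically: differentiating x log x − x should give back log x, confirming that integration alone never eliminates the log term. A minimal sketch in Python (function name `F` is my own, not from the slides):

```python
import math

def F(x):
    # Antiderivative from the slide: x*log(x) - x (constant C omitted)
    return x * math.log(x) - x

# Central-difference check that F'(x) == log(x) at a few points
h = 1e-6
for x in [2.0, 10.0, 50.0]:
    derivative = (F(x + h) - F(x - h)) / (2 * h)
    print(x, derivative, math.log(x))
```

The derivative matches log x at every sample point, which is exactly why this technique dead-ends: the log never goes away.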
5. Approximation Technique 2: Differentiation
Differentiate the log function: f'(x) = 1/x = x^−1.
What if we twiddle the exponent by ±0.01 and integrate?
Integrating x^−0.99 gives g(x) = 100·x^0.01 − 100.
6. Approximation 2 Results
Error at x = 50 is ±4.2%.
Error grows with increasing x.
Can be reduced with more significant figures in the exponent.
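The error behavior described above is easy to reproduce. A short sketch comparing g(x) = 100·x^0.01 − 100 against the true natural log (the exact error figure depends on how error is measured, so I only show the trend, not the slide's ±4.2%):

```python
import math

def g(x):
    # Slide's approximation to log(x): integrate x**-0.99 instead of x**-1
    return 100 * x**0.01 - 100

# Relative error against the true natural log grows as x increases
for x in [10, 50, 1000, 10**6]:
    rel_err = abs(g(x) - math.log(x)) / math.log(x)
    print(x, g(x), math.log(x), rel_err)
```

g(x) consistently overestimates log x, and the relative error climbs steadily with x, matching the slide's observation.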
7. Approximation Technique 3: Taylor Series
An infinite series; a reasonable approximation truncates the series.
The argument must satisfy |x| < 1 for the series for log(1 + x) to converge.
8. Approximation 3 Results
Good approximation, even with only 3 terms.
But the approximation is only valid in a small region.
9. Approximation Technique 4: Chebyshev Polynomials
An infinite series with "minimax" properties: the peak error is minimized over an interval.
Slightly better convergence than Taylor.
10. Approximation 4 Results
Centered about 0, but can be shifted.
Really bad approximation outside the region of convergence; good inside.
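A shifted Chebyshev fit can be sketched directly: sample log at Chebyshev nodes mapped onto [1, 2], compute the series coefficients, and evaluate with the Clenshaw recurrence. This is a generic illustration of the technique, not the exact construction from the slides; function names are my own:

```python
import math

def cheb_coeffs(f, a, b, n):
    # Chebyshev series coefficients for f on [a, b], sampled at n Chebyshev nodes
    coeffs = []
    for k in range(n):
        s = 0.0
        for j in range(n):
            theta = math.pi * (j + 0.5) / n
            x = math.cos(theta)                    # node on [-1, 1]
            t = 0.5 * (b - a) * x + 0.5 * (b + a)  # node shifted to [a, b]
            s += f(t) * math.cos(k * theta)
        coeffs.append(2.0 * s / n)
    return coeffs

def cheb_eval(coeffs, a, b, t):
    # Evaluate the series at t in [a, b] via the Clenshaw recurrence
    x = (2.0 * t - a - b) / (b - a)
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return x * b1 - b2 + 0.5 * coeffs[0]

c = cheb_coeffs(math.log, 1.0, 2.0, 6)
for t in [1.0, 1.3, 1.7, 2.0]:
    print(t, cheb_eval(c, 1.0, 2.0, t), math.log(t))
```

Inside [1, 2] even 6 coefficients track log closely; evaluating outside that interval extrapolates the polynomial and the error blows up quickly, matching the slide's "really bad outside" observation.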
11. Conclusions
Infinite series are not well suited to the task: too much error over portions of the number line.
The differentiation approach is best: g(x) = 100·x^0.01 − 100.
12. Applications
Suppose two algorithms run in O(n log n) and O(n^1.5). Which is faster?
Since log n = o(n^0.01), we have n log n = o(n^1.01) ⊂ o(n^1.5), so the O(n log n) algorithm is faster.
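The conclusion can be checked numerically: the ratio (n log n) / n^1.5 = log(n) / sqrt(n) shrinks toward 0 as n grows, so the O(n log n) algorithm eventually dominates. A quick sketch:

```python
import math

# Ratio of the two running-time functions; it tends to 0, so n*log(n) grows slower
for n in [10, 10**3, 10**6, 10**9]:
    print(n, (n * math.log(n)) / n**1.5)
```

By n = 10^6 the ratio is already below 2%, making the asymptotic ordering visible at practical input sizes.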
13. What base is that?
The base in this presentation is always e.
Base conversion was an insignificant portion of the work; the change-of-base formula is always sufficient.
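The change-of-base formula mentioned above is log_b(x) = ln(x) / ln(b), which is why working in base e throughout loses no generality. A one-liner sketch (function name `log_base` is my own):

```python
import math

def log_base(x, b):
    # Change-of-base formula: log_b(x) = ln(x) / ln(b)
    return math.log(x) / math.log(b)

print(log_base(8, 2))     # log base 2 of 8, i.e. about 3
print(log_base(1000, 10)) # log base 10 of 1000, i.e. about 3
```

Since the conversion is a constant factor, it also vanishes inside Big-Oh: O(n log_2 n) and O(n ln n) are the same class.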
14. The End Slides will be posted on JoshWoody.com tonight Questions, Concerns, or Comments?