This document provides an overview of algorithm analysis and of determining the time complexity of algorithms. The time an algorithm takes to run can be estimated by counting its basic operations and expressing the runtime in asymptotic notation. Examples demonstrate how to analyze the runtime of simple algorithms with loops and nested loops. Key growth rates such as constant, linear, quadratic, and exponential are defined, and identifying the highest-order term gives an algorithm's overall time complexity in Big O notation.
CENG 213 Data Structures
2. Algorithm
• An algorithm is a set of instructions to be followed to
solve a problem.
– There can be more than one solution (more than one
algorithm) to solve a given problem.
– An algorithm can be implemented using different
programming languages on different platforms.
• An algorithm must be correct: it should solve the problem for
every valid input.
– e.g. a sorting algorithm must work correctly even if (1) the input is
already sorted, or (2) it contains repeated elements.
• Once we have a correct algorithm for a problem, we
have to determine the efficiency of that algorithm.
3. Algorithmic Performance
There are two aspects of algorithmic performance:
• Time
• Instructions take time.
• How fast does the algorithm perform?
• What affects its runtime?
• Space
• Data structures take space
• What kind of data structures can be used?
• How does choice of data structure affect the runtime?
We will focus on time:
– How to estimate the time required for an algorithm
– How to reduce the time required
4. Analysis of Algorithms
• Analysis of Algorithms is the area of computer science that
provides tools to analyze the efficiency of different methods of
solutions.
• How do we compare the time efficiency of two algorithms that
solve the same problem?
Naïve Approach: implement these algorithms in a programming
language (C++), and run them to compare their time
requirements. Comparing the programs (instead of algorithms)
has difficulties.
– How are the algorithms coded?
• Comparing running times means comparing the implementations.
• We should not compare implementations, because they are sensitive to programming
style that may cloud the issue of which algorithm is inherently more efficient.
– What computer should we use?
• We should compare the efficiency of the algorithms independently of a particular
computer.
– What data should the program use?
• Any analysis must be independent of specific data.
5. Analysis of Algorithms
• When we analyze algorithms, we should employ
mathematical techniques that analyze algorithms
independently of specific implementations,
computers, or data.
• To analyze algorithms:
– First, we start to count the number of significant
operations in a particular solution to assess its
efficiency.
– Then, we will express the efficiency of algorithms
using growth functions.
6. The Execution Time of Algorithms
• Each operation in an algorithm (or a program) has a cost.
Each operation takes a certain amount of time.
count = count + 1; takes a certain amount of time, but that time is constant
A sequence of operations:
count = count + 1; Cost: c1
sum = sum + count; Cost: c2
Total Cost = c1 + c2
7. The Execution Time of Algorithms (cont.)
Example: Simple If-Statement
Cost Times
if (n < 0) c1 1
absval = -n c2 1
else
absval = n; c3 1
Total Cost <= c1 + max(c2,c3)
8. The Execution Time of Algorithms (cont.)
Example: Simple Loop
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
The time required for this algorithm is proportional to n
9. The Execution Time of Algorithms (cont.)
Example: Nested Loop
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i +1; c8 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5+n*n*c6+n*n*c7+n*c8
The time required for this algorithm is proportional to n^2
10. General Rules for Estimation
• Loops: The running time of a loop is at most the running time
of the statements inside the loop times the number of
iterations.
• Nested Loops: The running time of a statement in the innermost
loop is its running time multiplied by the product of the sizes of
all the loops.
• Consecutive Statements: Just add the running times of the
consecutive statements.
• If/Else: Never more than the running time of the test plus the
larger of the running times of S1 and S2.
11. Algorithm Growth Rates
• We measure an algorithm’s time requirement as a function of the
problem size.
– Problem size depends on the application: e.g. the number of elements in a list for a
sorting algorithm, or the number of disks for the Towers of Hanoi.
• So, for instance, we say that (if the problem size is n)
– Algorithm A requires 5*n^2 time units to solve a problem of size n.
– Algorithm B requires 7*n time units to solve a problem of size n.
• The most important thing to learn is how quickly the algorithm’s
time requirement grows as a function of the problem size.
– Algorithm A requires time proportional to n^2.
– Algorithm B requires time proportional to n.
• An algorithm’s proportional time requirement is known as its
growth rate.
• We can compare the efficiency of two algorithms by comparing
their growth rates.
12. Algorithm Growth Rates (cont.)
Time requirements as a function
of the problem size n
13. Common Growth Rates
Function    Growth Rate Name
c           Constant
log N       Logarithmic
log^2 N     Log-squared
N           Linear
N log N
N^2         Quadratic
N^3         Cubic
2^N         Exponential
14. Figure 6.1: Running times for small inputs
15. Figure 6.2: Running times for moderate inputs
16. Order-of-Magnitude Analysis and Big O Notation
• If Algorithm A requires time proportional to f(n), Algorithm A is
said to be order f(n), and it is denoted as O(f(n)).
• The function f(n) is called the algorithm’s growth-rate function.
• Since the capital O is used in the notation, this notation is called
the Big O notation.
• If Algorithm A requires time proportional to n^2, it is O(n^2).
• If Algorithm A requires time proportional to n, it is O(n).
17. Definition of the Order of an Algorithm
Definition:
Algorithm A is order f(n) – denoted as O(f(n)) –
if constants k and n0 exist such that A requires
no more than k*f(n) time units to solve a problem
of size n ≥ n0.
• The requirement of n ≥ n0 in the definition of O(f(n)) formalizes
the notion of sufficiently large problems.
– In general, many values of k and n0 can satisfy this definition.
18. Order of an Algorithm
• Suppose an algorithm requires n^2 - 3*n + 10 seconds to solve a problem
of size n. If constants k and n0 exist such that
k*n^2 > n^2 - 3*n + 10 for all n ≥ n0,
the algorithm is order n^2. In fact, k = 3 and n0 = 2 work:
3*n^2 > n^2 - 3*n + 10 for all n ≥ 2.
Thus, the algorithm requires no more than k*n^2 time units for n ≥ n0,
so it is O(n^2).
19. Order of an Algorithm (cont.)
20. A Comparison of Growth-Rate Functions
21. A Comparison of Growth-Rate Functions (cont.)
22. Growth-Rate Functions
O(1) Time requirement is constant, and it is independent of the problem’s size.
O(log2n) Time requirement for a logarithmic algorithm increases slowly
as the problem size increases.
O(n) Time requirement for a linear algorithm increases directly with the size
of the problem.
O(n*log2n) Time requirement for an n*log2n algorithm increases more rapidly than
that of a linear algorithm.
O(n^2) Time requirement for a quadratic algorithm increases rapidly with the
size of the problem.
O(n^3) Time requirement for a cubic algorithm increases more rapidly with the
size of the problem than the time requirement for a quadratic algorithm.
O(2^n) As the size of the problem increases, the time requirement for an
exponential algorithm increases too rapidly to be practical.
23. Growth-Rate Functions
• If an algorithm takes 1 second to run with the problem size 8,
what is the time requirement (approximately) for that algorithm
with the problem size 16?
• If its order is:
O(1) T(n) = 1 second
O(log2n) T(n) = (1*log216) / log28 = 4/3 seconds
O(n) T(n) = (1*16) / 8 = 2 seconds
O(n*log2n) T(n) = (1*16*log216) / (8*log28) = 8/3 seconds
O(n^2) T(n) = (1*16^2) / 8^2 = 4 seconds
O(n^3) T(n) = (1*16^3) / 8^3 = 8 seconds
O(2^n) T(n) = (1*2^16) / 2^8 = 2^8 seconds = 256 seconds
24. Properties of Growth-Rate Functions
1. We can ignore low-order terms in an algorithm’s growth-rate
function.
– If an algorithm is O(n^3+4n^2+3n), it is also O(n^3).
– We use only the highest-order term as the algorithm’s growth-rate function.
2. We can ignore a multiplicative constant in the highest-order term
of an algorithm’s growth-rate function.
– If an algorithm is O(5n^3), it is also O(n^3).
3. O(f(n)) + O(g(n)) = O(f(n)+g(n))
– We can combine growth-rate functions.
– If an algorithm is O(n^3) + O(4n), it is also O(n^3+4n). So, it is O(n^3).
– Similar rules hold for multiplication.
25. Some Mathematical Facts
• Some useful mathematical equalities are:
Σ(i=1..n) i = 1 + 2 + ... + n = n*(n+1)/2 ≈ n^2/2
Σ(i=1..n) i^2 = 1 + 4 + ... + n^2 = n*(n+1)*(2n+1)/6 ≈ n^3/3
Σ(i=0..n-1) 2^i = 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1
26. Growth-Rate Functions – Example1
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}
T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
= (c3+c4+c5)*n + (c1+c2+c3)
= a*n + b
So, the growth-rate function for this algorithm is O(n)
27. Growth-Rate Functions – Example2
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i +1; c8 n
}
T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
= (c5+c6+c7)*n^2 + (c3+c4+c5+c8)*n + (c1+c2+c3)
= a*n^2 + b*n + c
So, the growth-rate function for this algorithm is O(n^2)
28. Growth-Rate Functions – Example3
Cost Times
for (i=1; i<=n; i++) c1 n+1
for (j=1; j<=i; j++) c2 Σ(j=1..n) (j+1)
for (k=1; k<=j; k++) c3 Σ(j=1..n) Σ(k=1..j) (k+1)
x=x+1; c4 Σ(j=1..n) Σ(k=1..j) k
T(n) = c1*(n+1) + c2*(Σ(j=1..n) (j+1)) + c3*(Σ(j=1..n) Σ(k=1..j) (k+1)) + c4*(Σ(j=1..n) Σ(k=1..j) k)
= a*n^3 + b*n^2 + c*n + d
So, the growth-rate function for this algorithm is O(n^3)
29. Growth-Rate Functions – Recursive Algorithms
Cost
void hanoi(int n, char source, char dest, char spare) {
if (n > 0) { c1
hanoi(n-1, source, spare, dest); c2
cout << "Move top disk from pole " << source c3
<< " to pole " << dest << endl;
hanoi(n-1, spare, dest, source); c4
}
}
• The time-complexity function T(n) of a recursive algorithm is
defined in terms of itself, and this is known as recurrence equation
for T(n).
• To find the growth-rate function for a recursive algorithm, we have
to solve its recurrence relation.
30. Growth-Rate Functions – Hanoi Towers
• What is the cost of hanoi(n,’A’,’B’,’C’)?
when n=0
T(0) = c1
when n>0
T(n) = c1 + c2 + T(n-1) + c3 + c4 + T(n-1)
= 2*T(n-1) + (c1+c2+c3+c4)
= 2*T(n-1) + c recurrence equation for the growth-rate
function of hanoi-towers algorithm
• Now, we have to solve this recurrence equation to find the growth-rate
function of hanoi-towers algorithm
31. Growth-Rate Functions – Hanoi Towers (cont.)
• There are many methods to solve recurrence equations, but we will use a simple
method known as repeated substitutions.
T(n) = 2*T(n-1) + c
= 2 * (2*T(n-2)+c) + c
= 2 * (2* (2*T(n-3)+c) + c) + c
= 2^3 * T(n-3) + (2^2+2^1+2^0)*c (assuming n > 2)
when the substitution is repeated i-1 times:
= 2^i * T(n-i) + (2^(i-1) + ... + 2^1 + 2^0)*c
when i = n:
= 2^n * T(0) + (2^(n-1) + ... + 2^1 + 2^0)*c
= 2^n * c1 + (Σ(i=0..n-1) 2^i)*c
= 2^n * c1 + (2^n - 1)*c = 2^n*(c1+c) - c
So, the growth-rate function is O(2^n)
32. What to Analyze
• An algorithm can require different times to solve different
problems of the same size.
– e.g. searching for an item in a list of n elements using sequential search. Cost:
1, 2, ..., n
• Worst-Case Analysis – The maximum amount of time that an
algorithm requires to solve a problem of size n.
– This gives an upper bound for the time complexity of an algorithm.
– Normally, we try to find the worst-case behavior of an algorithm.
• Best-Case Analysis – The minimum amount of time that an
algorithm requires to solve a problem of size n.
– The best-case behavior of an algorithm is NOT so useful.
• Average-Case Analysis – The average amount of time that an
algorithm requires to solve a problem of size n.
– Sometimes, it is difficult to find the average-case behavior of an algorithm.
– We have to look at all possible data organizations of a given size n and
the probability distribution of these organizations.
– Worst-case analysis is more common than average-case analysis.
33. What is Important?
• An array-based list retrieve operation is O(1); a linked-list-based
list retrieve operation is O(n).
• But insert and delete operations are much easier on a linked-list-based
list implementation.
When selecting the implementation of an Abstract Data Type
(ADT), we have to consider how frequently particular ADT
operations occur in a given application.
• If the problem size is always small, we can probably ignore the
algorithm’s efficiency.
– In this case, we should choose the simplest algorithm.
34. What is Important? (cont.)
• We have to weigh the trade-offs between an algorithm’s time
requirement and its memory requirements.
• We have to compare algorithms for both style and efficiency.
– The analysis should focus on gross differences in efficiency and not reward coding
tricks that save small amounts of time.
– That is, there is no need for coding tricks if the gain is not significant.
– An easily understandable program is also important.
• Order-of-magnitude analysis focuses on large problems.
35. Sequential Search
int sequentialSearch(const int a[], int item, int n){
int i; // declared outside the loop so it is still in scope afterwards
for (i = 0; i < n && a[i] != item; i++)
;
if (i == n)
return -1;
return i;
}
Unsuccessful Search: O(n)
Successful Search:
Best-Case: item is in the first location of the array O(1)
Worst-Case: item is in the last location of the array O(n)
Average-Case: the number of key comparisons is 1, 2, ..., n, so the
average is (Σ(i=1..n) i) / n = (n^2+n)/(2n) = (n+1)/2 → O(n)
36. Binary Search
int binarySearch(int a[], int size, int x) {
int low = 0;
int high = size - 1;
int mid; // mid will be the index of
// target when it's found.
while (low <= high) {
mid = (low + high)/2;
if (a[mid] < x)
low = mid + 1;
else if (a[mid] > x)
high = mid - 1;
else
return mid;
}
return -1;
}
37. Binary Search – Analysis
• For an unsuccessful search:
– The number of iterations in the loop is ⌊log2n⌋ + 1 → O(log2n)
• For a successful search:
– Best-Case: The number of iterations is 1. O(1)
– Worst-Case: The number of iterations is ⌊log2n⌋ + 1. O(log2n)
– Average-Case: The average number of iterations is < log2n. O(log2n)
An array with size 8:
index: 0 1 2 3 4 5 6 7
# of iterations: 3 2 3 1 3 2 3 4
The average # of iterations = 21/8 < log28
38. How much better is O(log2n)?
n O(log2n)
16 4
64 6
256 8
1024 (1KB) 10
16,384 14
131,072 17
262,144 18
524,288 19
1,048,576 (1MB) 20
1,073,741,824 (1GB) 30