Divide and Conquer
Divide and Conquer

• divide the problem into a number of
  subproblems
• conquer the subproblems (solve them)
• combine the subproblem solutions to get the
  solution to the original problem

• Note: often the “conquer” step is done
  recursively
Divide-and-Conquer

A general methodology for using
 recursion to design efficient algorithms
It solves a problem by:
  – Dividing the data into parts
  – Finding sub solutions for each of the parts
  – Constructing the final answer from the sub
    solutions
Divide and Conquer

• Based on dividing problem into
  subproblems
• Approach
   1. Divide problem into smaller subproblems
        Subproblems must be of same type
        Subproblems do not need to overlap
   2. Solve each subproblem recursively
   3. Combine solutions to solve original problem
• Usually contains two or more recursive
  calls
Divide-and-conquer technique

(Diagram: a problem of size n is divided into subproblem 1 and
subproblem 2, each of size n/2; a solution to each subproblem is found,
and the two sub-solutions are combined into a solution to the original
problem.)
Divide and Conquer Algorithms

• Based on dividing problem into subproblems
  – Divide problem into sub-problems
      Subproblems must be of same type
      Subproblems do not need to overlap
  – Conquer by solving sub-problems recursively. If
    the sub-problems are small enough, solve them
    in brute force fashion
  – Combine the solutions of sub-problems into a
    solution of the original problem (tricky part)
D-A-C

• For Divide-and-Conquer algorithms the
  running time is mainly affected by 3
  criteria:
• The number of sub-instances into which
  a problem is split.
• The ratio of initial problem size to sub-
  problem size.
• The number of steps required to divide
  the initial instance and to combine sub-
  solutions.
Algorithm for General Divide and Conquer Sorting

• Begin Algorithm
  Sort(L)
    If L has length greater than 1 then
    Begin
        Partition the list into two lists, high and low
        Sort(high)
        Sort(low)
        Combine high and low
    End
• End Algorithm
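A minimal Python sketch of this outline (not from the slides): dc_sort, split,
and combine are hypothetical names, and cutting the list in half with a merging
combine step is just one possible choice of partition/combine rules — with that
choice the skeleton becomes merge sort.

def dc_sort(lst):
    # Divide-and-conquer sort skeleton: divide, sort the parts, combine.
    if len(lst) <= 1:
        return lst                                   # base case: already sorted
    low, high = split(lst)                           # divide
    return combine(dc_sort(low), dc_sort(high))      # conquer, then combine

def split(lst):
    # One possible partitioning rule (an assumption): cut the list in half.
    mid = len(lst) // 2
    return lst[:mid], lst[mid:]

def combine(low, high):
    # One possible combine step (an assumption): merge two sorted lists.
    out, i, j = [], 0, 0
    while i < len(low) and j < len(high):
        if low[i] <= high[j]:
            out.append(low[i]); i += 1
        else:
            out.append(high[j]); j += 1
    return out + low[i:] + high[j:]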
Analyzing Divide-and-Conquer
             Algorithms

• When an algorithm contains a recursive call to itself, its running
  time can often be described by a recurrence equation, which describes
  the overall running time on a problem of size n in terms of the
  running time on smaller inputs.
• For divide-and-conquer algorithms, we get recurrences that look like:

        T(n) = Θ(1)                        if n < c
        T(n) = aT(n/b) + D(n) + C(n)       otherwise
Analyzing Divide-and-Conquer Algorithms
                 (cont.)
• where
• a = the number of subproblems we break
  the problem into
• n/b = the size of the subproblems (in
  terms of n)
• D(n) is the time to divide the problem of
  size n into the subproblems
• C(n) is the time to combine the
  subproblem solutions to get the answer for
  the problem of size n
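As a worked example (using the merge sort costs that appear later in these
slides): merge sort creates a = 2 subproblems of size n/b = n/2, dividing
costs D(n) = Θ(1), and combining (merging) costs C(n) = Θ(n), so the
recurrence becomes T(n) = 2T(n/2) + Θ(n), which the master method solves
as T(n) = Θ(n lg n).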
The algorithm

• Let's assume the following array:
        2  6  7  3  5  6  9  2  4  1

• We divide the values into pairs:
        (2 6)  (7 3)  (5 6)  (9 2)  (4 1)

• We sort each pair:
        (2 6)  (3 7)  (5 6)  (2 9)  (1 4)

• Get the first pair (both lowest values!)
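A minimal Python sketch of this pair-sorting pass (sort_pairs is a hypothetical
helper, not from the slides) — it reproduces the third array above:

def sort_pairs(a):
    # Sort each adjacent pair in place: the first pass of a bottom-up merge sort.
    for i in range(0, len(a) - 1, 2):
        if a[i] > a[i + 1]:
            a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(sort_pairs([2, 6, 7, 3, 5, 6, 9, 2, 4, 1]))
# [2, 6, 3, 7, 5, 6, 2, 9, 1, 4]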
The algorithm (2)

• We compare these values (2 and 6) with the
  values of the next pair (3 and 7)
             2   6   3   7   5   6   2   9   1   4



  – Lowest 2,3
• The next one (5 and 6)
  – Lowest 2,3
• The next one (2 and 9)
  – Lowest 2,2
• The next one (1 and 4)
  – Lowest 1,2
Example: Divide and Conquer
•   Binary Search
•   Heap Construction
•   Tower of Hanoi
•   Exponentiation
    – Fibonacci Sequence
•   Quick Sort
•   Merge Sort
•   Multiplying large Integers
•   Matrix Multiplications
•   Closest Pairs
Quicksort
Design
 Follows the divide-and-conquer paradigm.
 Divide: Partition (separate) the array A[p..r] into two
  (possibly nonempty) subarrays A[p..q–1] and A[q+1..r].
    Each element in A[p..q–1] ≤ A[q].
    A[q] ≤ each element in A[q+1..r].
    Index q is computed as part of the partitioning
     procedure.
 Conquer: Sort the two subarrays A[p..q–1] &
  A[q+1..r] by recursive calls to quicksort.
 Combine: Since the subarrays are sorted in place –
  no work is needed to combine them.
 How do the divide and combine steps of quicksort
  compare with those of merge sort?
Pseudocode

Quicksort(A, p, r)
    if p < r then
        q := Partition(A, p, r);
        Quicksort(A, p, q – 1);
        Quicksort(A, q + 1, r)
    fi

Partition(A, p, r)
    x := A[r];
    i := p – 1;
    for j := p to r – 1 do
        if A[j] ≤ x then
            i := i + 1;
            A[i] ↔ A[j]
        fi
    od;
    A[i + 1] ↔ A[r];
    return i + 1

(Diagram: Partition splits A[p..r] around the pivot value 5 into
A[p..q – 1], whose elements are ≤ 5, and A[q+1..r], whose elements
are ≥ 5.)
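A runnable Python transcription of the pseudocode above (a sketch with
0-based indices; not part of the original slides):

def quicksort(a, p, r):
    # Sort a[p..r] (inclusive) in place.
    if p < r:
        q = partition(a, p, r)
        quicksort(a, p, q - 1)
        quicksort(a, q + 1, r)

def partition(a, p, r):
    # Lomuto partition: the pivot is a[r]; returns the pivot's final index q.
    x = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

data = [2, 5, 8, 3, 9, 4, 1, 7, 10, 6]   # the array used in the next example
quicksort(data, 0, len(data) - 1)
print(data)   # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]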
Example

initially:          2 5 8 3 9 4 1 7 10 6     note: pivot (x) = 6

next iteration:     2 5 8 3 9 4 1 7 10 6
next iteration:     2 5 8 3 9 4 1 7 10 6
next iteration:     2 5 8 3 9 4 1 7 10 6
next iteration:     2 5 3 8 9 4 1 7 10 6

(The original slide also marks the positions of i and j, which advance
under the array as each iteration of Partition's for loop runs over
A[p..r – 1].)
Example (Continued)

next iteration:     2 5 3 8 9 4 1 7 10 6
next iteration:     2 5 3 8 9 4 1 7 10 6
next iteration:     2 5 3 4 9 8 1 7 10 6
next iteration:     2 5 3 4 1 8 9 7 10 6
next iteration:     2 5 3 4 1 8 9 7 10 6
next iteration:     2 5 3 4 1 8 9 7 10 6

after final swap:   2 5 3 4 1 6 9 7 10 8
Partitioning
    Select the last element A[r] in the subarray A[p..r] as
     the pivot – the element around which to partition.
    As the procedure executes, the array is partitioned
     into four (possibly empty) regions.
    1.   A[p..i] — All entries in this region are ≤ pivot.
    2.   A[i+1..j – 1] — All entries in this region are > pivot.
    3.   A[r] = pivot.
    4.   A[j..r – 1] — Not known how they compare to pivot.
    The above hold before each iteration of the for loop,
     and constitute a loop invariant. (4 is not part of the LI.)
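One way to make the invariant concrete is to instrument the partition
procedure with assertions that check regions 1–3 before every iteration
(a sketch, not from the slides; it assumes the same Lomuto-style partition
shown earlier, with 0-based indices):

def partition_checked(a, p, r):
    # Lomuto partition of a[p..r] with assertions checking the loop invariant.
    x = a[r]                  # pivot
    i = p - 1
    for j in range(p, r):
        # Region 1: a[p..i] <= pivot; Region 2: a[i+1..j-1] > pivot; Region 3: a[r] = pivot.
        assert all(v <= x for v in a[p:i + 1])
        assert all(v > x for v in a[i + 1:j])
        assert a[r] == x
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1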
Correctness of Partition

Use loop invariant.
Initialization:
  – Before the first iteration:
     • A[p..i] and A[i+1..j – 1] are empty – Conds. 1 and 2 are
       satisfied (trivially).
     • r is the index of the pivot – Cond. 3 is satisfied.
Maintenance:
  – Case 1: A[j] > x
     • Increment j only.
     • LI is maintained.
Correctness of Partition
Case 1:

(Diagram: before the iteration, A[p..i] ≤ x, A[i+1..j – 1] > x, A[j] > x,
and A[r] = x; incrementing j simply extends the > x region by one element,
so the invariant still holds.)
Correctness of Partition
• Case 2: A[j] ≤ x
   – Increment i
   – Swap A[i] and A[j]
      • Condition 1 is maintained.
   – Increment j
      • Condition 2 is maintained.
   – A[r] is unaltered.
      • Condition 3 is maintained.

(Diagram: before the iteration, A[p..i] ≤ x, A[i+1..j – 1] > x, A[j] ≤ x,
and A[r] = x; after the swap and the increments of i and j, the ≤ x region
has grown by one element, the > x region has shifted one position to the
right, and the invariant still holds.)
Correctness of Partition
 Termination:
  – When the loop terminates, j = r, so all elements in A are
    partitioned into one of the three cases:
     • A[p..i] ≤ pivot
     • A[i+1..j – 1] > pivot
     • A[r] = pivot
 The last two lines swap A[i+1] and A[r].
  – Pivot moves from the end of the array to between the
    two subarrays.
  – Thus, procedure partition correctly performs the divide
    step.
Complexity of Partition

• PartitionTime(n) is given by the number of
  iterations in the for loop.
• Θ(n), where n = r – p + 1.
Algorithm Performance
•     Running time of quicksort depends on whether the
    partitioning is balanced or not.

• Worst-Case Partitioning (Unbalanced Partitions):
    – Occurs when every call to partition results in the most
      unbalanced partition.
    – Partition is most unbalanced when
       • Subproblem 1 is of size n – 1, and subproblem 2 is of size 0
         or vice versa.
       • pivot ≥ every element in A[p..r – 1] or pivot < every element in
         A[p..r – 1].
    – Every call to partition is most unbalanced when
       • Array A[1..n] is sorted or reverse sorted!
Worst-case Partition Analysis

(Recursion tree for the worst-case partition: the subproblem sizes along
the single non-trivial branch are n, n – 1, n – 2, n – 3, …, 2, 1, so the
tree has depth n.)

• Running time for worst-case partitions at each recursive level:
• T(n) = T(n – 1) + T(0) + PartitionTime(n)
•      = T(n – 1) + Θ(n)
•      = ∑k=1 to n Θ(k)
•      = Θ(∑k=1 to n k)
•      = Θ(n²)
Best-case Partitioning

• Size of each subproblem ≤ n/2.
  – One of the subproblems is of size n/2
  – The other is of size n/2 −1.
• Recurrence for running time
  – T(n) ≤ 2T(n/2) + PartitionTime(n)
          = 2T(n/2) + Θ(n)
• T(n) = Θ(n lg n)
Recursion Tree for Best-case Partition

(Recursion tree: the root costs cn, the next level has two nodes costing
cn/2 each, then four costing cn/4, and so on down to leaves costing c;
each of the lg n levels sums to cn.)

                                        Total: O(n lg n)
Conclusion

• Divide and conquer is just one of several powerful techniques for
  algorithm design.
• Divide-and-conquer algorithms can be analyzed using recurrences and
  the master method (so practice this math).
• It can lead to more efficient algorithms.
Divide and Conquer (Merge Sort)
Divide and Conquer

• Recursive in structure
  – Divide the problem into sub-problems
    that are similar to the original but smaller
    in size
  – Conquer the sub-problems by solving
    them recursively. If they are small
    enough, just solve them in a
    straightforward manner.
  – Combine the solutions to create a
    solution to the original problem
An Example: Merge Sort

• Sorting Problem: Sort a sequence of n
  elements into non-decreasing order.

• Divide: Divide the n-element sequence to
  be sorted into two subsequences of n/2
  elements each
• Conquer: Sort the two subsequences
  recursively using merge sort.
• Combine: Merge the two sorted
  subsequences to produce the sorted answer.
Merge Sort – Example

(Diagram: the original sequence 18 26 32 6 43 15 9 1 is repeatedly split
in half down to single elements, and the halves are then merged back
together in sorted order, producing the sorted sequence
1 6 9 15 18 26 32 43.)
Merge-Sort (A, p, r)

• INPUT: a sequence of n numbers stored in array A
• OUTPUT: an ordered sequence of n numbers

MergeSort (A, p, r) // sort A[p..r] by divide & conquer
1  if p < r
2    then q ← ⌊(p+r)/2⌋
3         MergeSort (A, p, q)
4         MergeSort (A, q+1, r)
5         Merge (A, p, q, r) // merges A[p..q] with A[q+1..r]

Initial Call: MergeSort(A, 1, n)
Procedure Merge

Input: Array A containing sorted subarrays A[p..q] and A[q+1..r].
Output: Merged sorted subarray in A[p..r].

Merge(A, p, q, r)
 1   n1 ← q – p + 1
 2   n2 ← r – q
 3   for i ← 1 to n1
 4     do L[i] ← A[p + i – 1]
 5   for j ← 1 to n2
 6     do R[j] ← A[q + j]
 7   L[n1+1] ← ∞
 8   R[n2+1] ← ∞
 9   i ← 1
10   j ← 1
11   for k ← p to r
12     do if L[i] ≤ R[j]
13          then A[k] ← L[i]
14               i ← i + 1
15          else A[k] ← R[j]
16               j ← j + 1

The ∞ values are sentinels, to avoid having to check at each step
whether either subarray has been fully copied.
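A runnable Python transcription of MergeSort and Merge (a sketch, not from
the slides; it uses 0-based indices and math.inf for the ∞ sentinels):

import math

def merge_sort(a, p, r):
    # Sort a[p..r] (inclusive) by divide and conquer.
    if p < r:
        q = (p + r) // 2
        merge_sort(a, p, q)
        merge_sort(a, q + 1, r)
        merge(a, p, q, r)

def merge(a, p, q, r):
    # Merge the sorted subarrays a[p..q] and a[q+1..r] using infinity sentinels.
    left = a[p:q + 1] + [math.inf]
    right = a[q + 1:r + 1] + [math.inf]
    i = j = 0
    for k in range(p, r + 1):
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1

data = [18, 26, 32, 6, 43, 15, 9, 1]   # the sequence from the example slide
merge_sort(data, 0, len(data) - 1)
print(data)   # [1, 6, 9, 15, 18, 26, 32, 43]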
Merge – Example

(Diagram: merging L = [6, 8, 26, 32, ∞] and R = [1, 9, 42, 43, ∞] back
into A[p..r]; at each step k the smaller of L[i] and R[j] is copied into
A[k], producing 1 6 8 9 26 32 42 43.)
Correctness of Merge

Loop Invariant for the for loop:
At the start of each iteration of the for loop:
• Subarray A[p..k – 1] contains the k – p smallest elements of L and R,
  in sorted order.
• L[i] and R[j] are the smallest elements of L and R that have not been
  copied back into A.

Initialization:
Before the first iteration:
• A[p..k – 1] is empty.
• i = j = 1.
• L[1] and R[1] are the smallest elements of L and R not copied to A.
Correctness of Merge

Maintenance:
Case 1: L[i] ≤ R[j]
• By the LI, A[p..k – 1] contains the k – p smallest elements of L and R
  in sorted order.
• By the LI, L[i] and R[j] are the smallest elements of L and R not yet
  copied into A.
• Line 13 results in A containing the k – p + 1 smallest elements (again
  in sorted order).
• Incrementing i and k reestablishes the LI for the next iteration.
• Similarly for L[i] > R[j].

Termination:
• On termination, k = r + 1.
• By the LI, A[p..r] contains the r – p + 1 smallest elements of L and R,
  in sorted order.
• L and R together contain r – p + 3 elements. All but the two sentinels
  have been copied back into A.
Analysis of Merge Sort

•   Running time T(n) of Merge Sort:
•   Divide: computing the middle takes Θ(1)
•   Conquer: solving 2 subproblems takes 2T(n/2)
•   Combine: merging n elements takes Θ(n)
•   Total:
        T(n) = Θ(1)                   if n = 1
        T(n) = 2T(n/2) + Θ(n)         if n > 1
    ⇒ T(n) = Θ(n lg n) (CLRS, Chapter 4)
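One way to see why: in the recursion tree for T(n) = 2T(n/2) + Θ(n), every
level contributes Θ(n) total merging work and there are about lg n levels
(the same picture as the best-case quicksort tree above), so the work sums
to Θ(n lg n).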
