CS 440 Theory of Algorithms /
CS 468 Algorithms in Bioinformatics
Divide-and-Conquer
Copyright © 2007 Pearson Addison-Wesley. All rights reserved.
Divide and Conquer
The best-known algorithm design strategy:
1. Divide an instance of the problem into two or more smaller instances
2. Solve the smaller instances recursively
3. Obtain a solution to the original (larger) instance by combining these solutions
Copyright © 2007 Pearson Addison-Wesley. All rights reserved Design and Analysis of Algorithms - Chapter 4 4-1
Divide-and-conquer technique example
(Figure: a problem of size n is split into subproblem 1 of size n/2 and subproblem 2 of size n/2; a solution to subproblem 1 and a solution to subproblem 2 are combined into a solution to the original problem.)
Divide and Conquer Examples
• Sorting: mergesort and quicksort
• Tree traversals
• Binary search
• Multiplication of large integers
• Matrix multiplication: Strassen’s algorithm
• Closest-pair and convex-hull algorithms
General Divide-and-Conquer recurrence:
T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^d)

Master Theorem
• a < b^d : T(n) ∈ Θ(n^d)
• a = b^d : T(n) ∈ Θ(n^d lg n)
• a > b^d : T(n) ∈ Θ(n^(log_b a))

Note: the same results hold with O instead of Θ.

Examples: T(n) = 4T(n/3) + n   ⇒ T(n) ∈ ?
          T(n) = 2T(n/2) + n²  ⇒ T(n) ∈ ?
          T(n) = 8T(n/2) + n³  ⇒ T(n) ∈ ?
          T(n) = 8T(n/2) + n   ⇒ T(n) ∈ ?
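As a quick check on the three cases, here is a small Python helper (not from the slides; the function name and output format are mine) that classifies a recurrence T(n) = aT(n/b) + Θ(n^d) by comparing a with b^d. The example calls correspond to the four recurrences above.

```python
import math

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the Master Theorem.

    Returns a string describing the Theta class of T(n).
    """
    if a < b ** d:
        return f"Theta(n^{d})"
    if a == b ** d:
        return f"Theta(n^{d} log n)"
    return f"Theta(n^{math.log(a, b):.3f})"  # exponent is log_b(a)

# The four example recurrences from the slide:
print(master_theorem(4, 3, 1))   # T(n) = 4T(n/3) + n   -> a > b^d case
print(master_theorem(2, 2, 2))   # T(n) = 2T(n/2) + n^2 -> a < b^d case
print(master_theorem(8, 2, 3))   # T(n) = 8T(n/2) + n^3 -> a = b^d case
print(master_theorem(8, 2, 1))   # T(n) = 8T(n/2) + n   -> a > b^d case
```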
Mergesort
• Split array A[0..n-1] into two about equal halves and make copies of each half in arrays B and C
• Sort arrays B and C recursively
• Merge sorted arrays B and C into array A as follows:
  • Repeat the following until no elements remain in one of the arrays:
    – compare the first elements in the remaining unprocessed portions of the arrays
    – copy the smaller of the two into A, while incrementing the index indicating the unprocessed portion of that array
  • Once all elements in one of the arrays are processed, copy the remaining unprocessed elements from the other array into A.
Pseudocode of Mergesort
(pseudocode figure not reproduced)

Pseudocode of Merge
(pseudocode figure not reproduced)
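The mergesort and merge pseudocode on these slides did not survive extraction; the following Python sketch implements the steps described above (the array names B and C and the merge loop follow the slide's description).

```python
def merge(b, c, a):
    """Merge sorted lists b and c into a, overwriting a in place."""
    i = j = k = 0
    while i < len(b) and j < len(c):
        # Compare the first elements of the unprocessed portions,
        # copy the smaller into a, and advance that list's index.
        if b[i] <= c[j]:
            a[k] = b[i]; i += 1
        else:
            a[k] = c[j]; j += 1
        k += 1
    # One list is exhausted: copy the remaining elements of the other.
    a[k:] = b[i:] if i < len(b) else c[j:]

def mergesort(a):
    """Sort list a in place by divide-and-conquer."""
    if len(a) > 1:
        mid = len(a) // 2
        b, c = a[:mid], a[mid:]   # copies of the two halves
        mergesort(b)
        mergesort(c)
        merge(b, c, a)

nums = [8, 3, 2, 9, 7, 1, 5, 4]
mergesort(nums)
print(nums)  # [1, 2, 3, 4, 5, 7, 8, 9]
```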
Mergesort Example
(example figure not reproduced)
Analysis of Mergesort
• All cases have the same efficiency: Θ(n log n)
  • according to the Master Theorem (why?)
• Number of comparisons in the worst case is close to the theoretical minimum for comparison-based sorting:
  C_worst(n) = 2 C_worst(n/2) + n − 1, C_worst(1) = 0
  ⇒ C_worst(n) = n log2 n − n + 1
  Theoretical lower bound: ⌈log2 n!⌉ ≈ ⌈n log2 n − 1.44n⌉
• Space requirement: Θ(n) (not in-place)
• Can be implemented without recursion (bottom-up)
Quicksort
• Select a pivot (partitioning element) – here, the first element
• Rearrange the list so that all the elements in the first s positions are smaller than or equal to the pivot and all the elements in the remaining n−s positions are larger than or equal to the pivot (see next slide for an algorithm)

  p | A[i] ≤ p | A[i] ≥ p

• Exchange the pivot with the last element in the first (i.e., ≤) subarray; the pivot is now in its final position
• Sort the two subarrays recursively
Partitioning Algorithm
(pseudocode figure not reproduced)
Note: the index i can go out of the subarray bound, and this needs to be taken care of.
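The partitioning pseudocode itself is an image that did not survive; below is a Python sketch of a Hoare-style partition with the first element as pivot, including the bounds check on the index i that the note warns about. The function names are mine.

```python
def hoare_partition(a, lo, hi):
    """Partition a[lo..hi] around the pivot a[lo] (Hoare's scheme).

    Returns the final index of the pivot. The left-to-right index i is
    bounds-checked, since it can otherwise run off the subarray.
    """
    p = a[lo]
    i, j = lo, hi + 1
    while True:
        i += 1
        while i <= hi and a[i] < p:   # bounds check keeps i inside
            i += 1
        j -= 1
        while a[j] > p:               # a[lo] == p stops this scan safely
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[lo], a[j] = a[j], a[lo]         # put the pivot in its final place
    return j

def quicksort(a, lo=0, hi=None):
    """Quicksort a[lo..hi] in place."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        s = hoare_partition(a, lo, hi)
        quicksort(a, lo, s - 1)       # pivot at s is already final
        quicksort(a, s + 1, hi)

nums = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(nums)
print(nums)  # [1, 2, 3, 4, 5, 7, 8, 9]
```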
Quicksort Example
5 3 1 9 8 2 4 7
Analysis of Quicksort
• Best case (split in the middle): Θ(n log n)
• Worst case (sorted array!): Θ(n²)
• Average case (random arrays): Θ(n log n)
• Improvements:
  • better pivot selection: median-of-three partitioning
  • switch to insertion sort on small subfiles
  • elimination of recursion
  These combine to a 20–25% improvement
• Considered the method of choice for internal sorting of large files (n ≥ 10000)
Binary Search
Very efficient algorithm for searching in a sorted array:

  K  vs  A[0] . . . A[m] . . . A[n-1]

If K = A[m], stop (successful search); otherwise, continue searching by the same method in A[0..m-1] if K < A[m] and in A[m+1..n-1] if K > A[m]

l ← 0; r ← n-1
while l ≤ r do
    m ← ⌊(l+r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m-1
    else l ← m+1
return -1
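The pseudocode above translates directly to Python (integer floor division gives ⌊(l+r)/2⌋):

```python
def binary_search(a, key):
    """Iterative binary search in sorted list a.

    Returns an index of key if present, otherwise -1.
    """
    l, r = 0, len(a) - 1
    while l <= r:
        m = (l + r) // 2          # floor of the midpoint
        if key == a[m]:
            return m
        elif key < a[m]:
            r = m - 1             # continue in a[l..m-1]
        else:
            l = m + 1             # continue in a[m+1..r]
    return -1

a = [3, 14, 27, 31, 39, 42, 55, 70, 74, 81, 85, 93, 98]
print(binary_search(a, 70))   # 7
print(binary_search(a, 10))   # -1
```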
Analysis of Binary Search
• Time efficiency
  • worst-case recurrence: C_w(n) = 1 + C_w(⌊n/2⌋), C_w(1) = 1
    solution: C_w(n) = ⌈log2(n+1)⌉
    This is VERY fast: e.g., C_w(10⁶) = 20
• Optimal for searching a sorted array
• Limitations: must be a sorted array (not a linked list)
• Degenerate example of divide-and-conquer
Binary Tree Algorithms
A binary tree is a divide-and-conquer-ready structure!

Ex. 1: Classic traversals (preorder, inorder, postorder)

Algorithm Inorder(T)
    if T ≠ ∅
        Inorder(T_left)
        print(root of T)
        Inorder(T_right)

(Figure: an example tree with root a, children b and c, where b has children d and e.)

Efficiency: Θ(n)
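The Inorder pseudocode can be sketched in Python as below. The example tree (root a with children b and c, where b has children d and e) is my reconstruction of the garbled figure on the slide.

```python
class Node:
    """A binary tree node; an empty tree is represented by None."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(t, visit):
    """Inorder traversal: left subtree, then root, then right subtree."""
    if t is not None:
        inorder(t.left, visit)
        visit(t.value)
        inorder(t.right, visit)

tree = Node('a', Node('b', Node('d'), Node('e')), Node('c'))
out = []
inorder(tree, out.append)
print(out)  # ['d', 'b', 'e', 'a', 'c']
```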
Binary Tree Algorithms (cont.)
Ex. 2: Computing the height of a binary tree with subtrees T_L and T_R:
h(T) = max{h(T_L), h(T_R)} + 1 if T ≠ ∅, and h(∅) = −1
Efficiency: Θ(n)
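The height formula maps one-to-one onto a recursive function. For a self-contained sketch, a tree here is None (empty) or a tuple (value, left_subtree, right_subtree); this representation is my choice, not the slide's.

```python
def height(t):
    """h(T) = max{h(T_L), h(T_R)} + 1 for nonempty T; h(empty) = -1."""
    if t is None:
        return -1
    _, left, right = t
    return max(height(left), height(right)) + 1

tree = ('a', ('b', ('d', None, None), ('e', None, None)), ('c', None, None))
print(height(tree))   # 2
print(height(None))   # -1
```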
Multiplication of Large Integers
Consider the problem of multiplying two (large) n-digit integers represented by arrays of their digits, such as:
A = 12345678901357986429   B = 87654321284820912836
The grade-school algorithm:
          a1 a2 … an
        × b1 b2 … bn
  (d10)   d11 d12 … d1n
  (d20)   d21 d22 … d2n
   …       …   …  …  …
  (dn0)   dn1 dn2 … dnn
Efficiency: n² one-digit multiplications
First Divide-and-Conquer Algorithm
A small example: A·B where A = 2135 and B = 4014
A = (21·10² + 35), B = (40·10² + 14)
So, A·B = (21·10² + 35)·(40·10² + 14)
        = 21·40·10⁴ + (21·14 + 35·40)·10² + 35·14
        = 8569890
In general, if A = A1A2 and B = B1B2 (where A and B are n-digit numbers and A1, A2, B1, B2 are n/2-digit numbers),
A·B = A1·B1·10ⁿ + (A1·B2 + A2·B1)·10^(n/2) + A2·B2
Recurrence for the number of one-digit multiplications M(n):
M(n) = 4M(n/2), M(1) = 1
Solution: M(n) = n²
Second Divide-and-Conquer Algorithm
A·B = A1·B1·10ⁿ + (A1·B2 + A2·B1)·10^(n/2) + A2·B2
The idea is to decrease the number of multiplications from 4 to 3:
(A1 + A2)·(B1 + B2) = A1·B1 + (A1·B2 + A2·B1) + A2·B2,
i.e., (A1·B2 + A2·B1) = (A1 + A2)·(B1 + B2) − A1·B1 − A2·B2,
which requires only 3 multiplications at the expense of (4 − 1) extra additions/subtractions.
Recurrence for the number of multiplications M(n):
M(n) = 3M(n/2), M(1) = 1
Solution: M(n) = 3^(log2 n) = n^(log2 3) ≈ n^1.585
Example of Large-Integer Multiplication
2135 × 4014
(worked figure not reproduced)
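The worked figure for 2135 × 4014 did not survive extraction; the three-multiplication scheme derived on the previous slide can be sketched in Python as follows (the function name karatsuba and the base-10 splitting via divmod are my choices).

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using 3 recursive multiplications.

    Splits each number as a1*10^half + a2 and applies
    a1*b2 + a2*b1 = (a1+a2)*(b1+b2) - a1*b1 - a2*b2.
    """
    if x < 10 or y < 10:
        return x * y              # base case: one-digit multiplication
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    p = 10 ** half
    a1, a2 = divmod(x, p)         # x = a1*10^half + a2
    b1, b2 = divmod(y, p)         # y = b1*10^half + b2
    high = karatsuba(a1, b1)
    low = karatsuba(a2, b2)
    mid = karatsuba(a1 + a2, b1 + b2) - high - low   # a1*b2 + a2*b1
    return high * 10 ** (2 * half) + mid * p + low

print(karatsuba(2135, 4014))  # 8569890, the slide's example
```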
Strassen’s matrix multiplication
• Strassen observed [1969] that the product of two matrices can be computed as follows:

  | C00 C01 |   | A00 A01 |   | B00 B01 |
  |         | = |         | * |         |
  | C10 C11 |   | A10 A11 |   | B10 B11 |

                | M1 + M4 − M5 + M7    M3 + M5           |
              = |                                        |
                | M2 + M4              M1 + M3 − M2 + M6 |
Submatrices:
• M1 = (A00 + A11) * (B00 + B11)
• M2 = (A10 + A11) * B00
• M3 = A00 * (B01 − B11)
• M4 = A11 * (B10 − B00)
• M5 = (A00 + A01) * B11
• M6 = (A10 − A00) * (B00 + B01)
• M7 = (A01 − A11) * (B10 + B11)
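One level of the recursion can be checked numerically. In the real algorithm the entries A00, …, B11 are (n/2)×(n/2) submatrices and the seven products are computed recursively; using scalars keeps this sketch short.

```python
def strassen_2x2(A, B):
    """One level of Strassen's scheme applied to 2x2 numeric matrices."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    # The seven products from the slide:
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    # Combine them into the four entries of C:
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 + m3 - m2 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]], matching the ordinary product
```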
Efficiency of Strassen’s algorithm
• If n is not a power of 2, matrices can be padded with zeros
• Number of multiplications:
  M(n) = 7M(n/2) for n > 1, M(1) = 1
  ⇒ M(n) = ?
• Number of additions:
  A(n) = 7A(n/2) + 18(n/2)² for n > 1, A(1) = 0
  ⇒ A(n) = ?
• Algorithms with better asymptotic efficiency are known, but they are even more complex.
Closest-Pair Problem by Divide-and-Conquer
Step 1 Divide the points given into two subsets S1 and S2 by a vertical line x = c so that half the points lie to the left of or on the line and half the points lie to the right of or on the line.
Closest Pair by Divide-and-Conquer (cont.)
Step 2 Find recursively the closest pairs for the left and right subsets.
Step 3 Set d = min{d1, d2}.
We can limit our attention to the points in the symmetric vertical strip of width 2d as a possible closest pair. Let C1 and C2 be the subsets of points in the left subset S1 and of the right subset S2, respectively, that lie in this vertical strip. The points in C1 and C2 are stored in increasing order of their y coordinates, which is maintained by merging during the execution of the next step.
Step 4 For every point P(x, y) in C1, we inspect points in C2 that may be closer to P than d. There can be no more than 6 such points (because d ≤ d2)!
Closest Pair by Divide-and-Conquer: Worst Case
The worst-case scenario is depicted in the slide’s figure (not reproduced).
Efficiency of the Closest-Pair Algorithm
The running time of the algorithm is described by
T(n) = 2T(n/2) + M(n), where M(n) ∈ O(n)
By the Master Theorem (with a = 2, b = 2, d = 1),
T(n) ∈ O(n log n)
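Steps 1–4 can be sketched in Python as below. The helper names are mine; the sketch returns only the minimum distance, and it uses the standard constant-size window over the y-sorted strip in place of the 6-point argument.

```python
from math import hypot, inf

def closest_pair(points):
    """Distance of the closest pair of (x, y) tuples, divide-and-conquer."""
    def merge_by_y(a, b):
        # Merge two y-sorted lists (Step 3's merging, mergesort-style).
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i][1] <= b[j][1]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        return out + a[i:] + b[j:]

    def solve(px):
        # px is sorted by x; returns (min distance, points sorted by y).
        n = len(px)
        if n <= 3:                          # base case: brute force
            d = min((hypot(p[0] - q[0], p[1] - q[1])
                     for i, p in enumerate(px) for q in px[i + 1:]),
                    default=inf)
            return d, sorted(px, key=lambda p: p[1])
        mid = n // 2
        c = px[mid][0]                      # Step 1: dividing line x = c
        d1, ly = solve(px[:mid])            # Step 2: recurse on both sides
        d2, ry = solve(px[mid:])
        d = min(d1, d2)                     # Step 3: d = min{d1, d2}
        py = merge_by_y(ly, ry)
        # Step 4: examine the strip of width 2d around x = c.
        strip = [p for p in py if abs(p[0] - c) < d]
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 8]:    # only a constant number of checks
                if q[1] - p[1] >= d:
                    break
                d = min(d, hypot(p[0] - q[0], p[1] - q[1]))
        return d, py

    return solve(sorted(points))[0]

pts = [(2, 3), (12, 30), (40, 50), (5, 1), (12, 10), (3, 4)]
print(closest_pair(pts))  # sqrt(2), from the pair (2, 3) and (3, 4)
```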
QuickHull Algorithm
Convex-hull computation inspired by Quicksort:
• Assume the points are sorted by x-coordinate values
• Identify the extreme points P1 and P2 (part of the hull)
• Compute the upper hull:
  • find the point Pmax that is farthest away from line P1P2
  • compute the hull of the points to the left of line P1Pmax
  • compute the hull of the points to the left of line PmaxP2
• Compute the lower hull in a similar manner
(Figure: P1, Pmax, and P2 on the upper hull, not reproduced.)
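The steps above can be sketched in Python using a cross product as the "left of the line" and farthest-point test (a positive cross(a, b, c) means c lies to the left of the directed line a→b, and its magnitude is proportional to c's distance from the line). The function names are mine.

```python
def quickhull(points):
    """Convex hull of 2D points, returned counterclockwise from the
    leftmost point, following the QuickHull steps above."""
    def cross(a, b, c):
        # Twice the signed area of triangle abc; > 0 iff c is left of a->b.
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def hull_side(p1, p2, pts):
        # Hull points strictly to the left of line p1->p2, in hull order.
        if not pts:
            return []
        pmax = max(pts, key=lambda p: cross(p1, p2, p))   # farthest point
        left_of_1 = [p for p in pts if cross(p1, pmax, p) > 0]
        left_of_2 = [p for p in pts if cross(pmax, p2, p) > 0]
        return (hull_side(p1, pmax, left_of_1) + [pmax]
                + hull_side(pmax, p2, left_of_2))

    pts = sorted(points)                  # sort by x (then y)
    p1, p2 = pts[0], pts[-1]              # extreme points: part of the hull
    upper = [p for p in pts if cross(p1, p2, p) > 0]
    lower = [p for p in pts if cross(p2, p1, p) > 0]
    return [p1] + hull_side(p1, p2, upper) + [p2] + hull_side(p2, p1, lower)

pts = [(0, 0), (4, 4), (8, 1), (2, 5), (5, 2), (3, 1), (7, 6)]
print(quickhull(pts))  # [(0, 0), (2, 5), (7, 6), (8, 1)]
```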
Efficiency of QuickHull algorithm
• Finding the point farthest away from line P1P2 can be done in linear time
• Time efficiency:
  • worst case: Θ(n²) (as quicksort)
  • average case: Θ(n log n) (under reasonable assumptions about the distribution of the points given)
• If the points are not initially sorted by x-coordinate value, this can be accomplished in Θ(n log n), with no increase in the asymptotic efficiency class
• Several O(n log n) algorithms for convex hull are known:
  • Graham’s scan
  • DCHull