Unit III-0
UNIT – III
DYNAMIC PROGRAMMING
Unit III-1
Introduction
• Dynamic programming is a technique for solving problems with
overlapping sub-problems.
• Typically, these sub-problems arise from a recurrence relating a
solution to a given problem to solutions of its smaller sub-problems
of the same type.
Unit III-2
Introduction
• Rather than solving overlapping sub-problems again and again,
• dynamic programming suggests solving each of the smaller
sub-problems only once
• and recording the results in a table from which we can then obtain
a solution to the original problem.
Unit III-3
Dynamic Programming
Dynamic Programming is a general algorithm design technique
for solving problems defined by or formulated as recurrences
with overlapping subinstances
• Invented by American mathematician Richard Bellman in the
1950s to solve optimization problems
• “Programming” here means “planning”
• Main idea:
- set up a recurrence relating a solution to a larger instance
to solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract solution to the initial instance from that table
Unit III-4
Example: Fibonacci numbers
• Recall definition of Fibonacci numbers:
F(n) = F(n-1) + F(n-2)
F(0) = 0
F(1) = 1
• Computing the nth Fibonacci number recursively (top-down):
F(n)
= F(n-1) + F(n-2)
= (F(n-2) + F(n-3)) + (F(n-3) + F(n-4))
...
Unit III-5
Example: Fibonacci numbers (cont.)
Computing the nth Fibonacci number using bottom-up iteration and
recording results:
F(0) = 0
F(1) = 1
F(2) = 1+0 = 1
…
F(n-2) = F(n-3) + F(n-4)
F(n-1) = F(n-2) + F(n-3)
F(n) = F(n-1) + F(n-2)

The recorded results form the table: 0  1  1  . . .  F(n-2)  F(n-1)  F(n)

Efficiency:
- time: Θ(n)
- space: Θ(n)

What if we solve it recursively? (The running time becomes
exponential, as the following slides show.)
Unit III-6
Introduction
• The Fibonacci numbers are the elements of the sequence
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, . . .

Algorithm fib(n)
    if n = 0 or n = 1 return n
    return fib(n − 1) + fib(n − 2)

• The original problem F(n) is defined by F(n-1) and F(n-2).
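A direct Python transcription of this recurrence (a minimal sketch; it is runnable but takes exponential time, as the call tree on the next slide shows):

def fib(n):
    # Base cases: F(0) = 0, F(1) = 1
    if n == 0 or n == 1:
        return n
    # Two recursive calls per invocation lead to exponential running time
    return fib(n - 1) + fib(n - 2)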
Unit III-7
Introduction
• Notice that if we call, say, fib(5), we produce a call tree that
calls the function on the same value many different times:

fib(5)
= fib(4) + fib(3)
= (fib(3) + fib(2)) + (fib(2) + fib(1))
= ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
= (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))

• If we try to use the recurrence directly to compute the nth
Fibonacci number F(n), we would have to recompute the same values of
this function many times.
Unit III-8
Introduction
• Certain algorithms compute the nth Fibonacci number without
computing all the preceding elements of this sequence.
• Computing all of them and recording the results, however, is
typical of an algorithm based on the classic bottom-up dynamic
programming approach.
• A top-down variation of it exploits so-called memory functions.
• The crucial step in designing such an algorithm remains the same:
deriving a recurrence relating a solution to the problem’s instance
to solutions of its smaller (and overlapping) subinstances.
Unit III-9
Introduction
• Dynamic programming usually takes one of two approaches:
• Bottom-up approach: all subproblems that might be needed are solved
in advance and then used to build up solutions to larger problems.
This approach is slightly better in stack space and number of
function calls, but it is sometimes not intuitive to figure out all
the subproblems needed for solving the given problem.
• Top-down approach: the problem is broken into subproblems, and
these subproblems are solved and the solutions remembered, in case
they need to be solved again. This combines recursion with the
memory-function technique.
Unit III-10
Bottom Up
• In the bottom-up approach we calculate the smaller values of Fibo
first, then build larger values from them. This method also uses
linear (O(n)) time, since it contains a loop that repeats n − 1 times.
• In both these examples, we only calculate fib(2) one time, and then
use it to calculate both fib(4) and fib(3), instead of computing it
every time either of them is evaluated.

Algorithm Fibo(n)
    if n = 0 return 0
    a := 0; b := 1
    repeat n − 1 times
        c := a + b
        a := b
        b := c
    return b
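A runnable Python version of this bottom-up sketch (the explicit n = 0 guard mirrors the base case F(0) = 0):

def fibo(n):
    # Bottom-up: build F(2), …, F(n) from F(0) = 0 and F(1) = 1
    if n == 0:
        return 0
    a, b = 0, 1              # a = F(i-1), b = F(i)
    for _ in range(n - 1):   # repeats n − 1 times, as in the pseudocode
        a, b = b, a + b
    return b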
Unit III-11
Top-Down
• Suppose we have a simple map object, m, which maps each value of
Fibo that has already been calculated to its result, and we modify
our function to use and update it. The resulting function requires
only O(n) time instead of exponential time:
• This technique of saving values that have already been calculated
is called a memory function; this is the top-down approach, since we
first break the problem into subproblems and then calculate and
store values.

m[0] := 0
m[1] := 1
Algorithm Fibo(n)
    if map m does not contain key n
        m[n] := Fibo(n − 1) + Fibo(n − 2)
    return m[n]
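The same idea in Python, with a dict playing the role of the map m (a minimal sketch; the name fibo_memo is mine):

m = {0: 0, 1: 1}  # memo table: Fibonacci values computed so far

def fibo_memo(n):
    # Each value is computed at most once; later requests hit the table
    if n not in m:
        m[n] = fibo_memo(n - 1) + fibo_memo(n - 2)
    return m[n]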
Unit III-12
Examples of DP algorithms
• Computing a binomial coefficient
• Longest common subsequence
• Warshall’s algorithm for transitive closure
• Floyd’s algorithm for all-pairs shortest paths
• Constructing an optimal binary search tree
• Some instances of difficult discrete optimization problems:
- traveling salesman
- knapsack
Unit III-13
Computing a binomial coefficient by DP
Binomial coefficients are coefficients of the binomial formula:
(a + b)^n = C(n,0) a^n b^0 + . . . + C(n,k) a^(n-k) b^k + . . . + C(n,n) a^0 b^n

Recurrence: C(n,k) = C(n-1,k) + C(n-1,k-1) for n > k > 0
            C(n,0) = 1, C(n,n) = 1 for n ≥ 0

Value of C(n,k) can be computed by filling a table row by row:

        0   1   2   . . .   k-1          k
  0     1
  1     1   1
  .
  .
  .
 n-1                        C(n-1,k-1)   C(n-1,k)
  n                                      C(n,k)
Unit III-14
Computing C(n,k): pseudocode and analysis
Time efficiency: Θ(nk)
Space efficiency: Θ(nk)
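The pseudocode was an image in the original deck; a Python sketch of the table fill implied by the recurrence (the name binomial is mine):

def binomial(n, k):
    # C[i][j] holds C(i, j); filled row by row using
    # C(i, j) = C(i-1, j-1) + C(i-1, j), with C(i, 0) = C(i, i) = 1
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]   # Θ(nk) time and space, matching the analysis above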
Unit III-15
Warshall’s Algorithm: Transitive Closure
• Computes the transitive closure of a relation
• Alternatively: existence of all nontrivial paths in a digraph
• Example of transitive closure:

(Digraph on vertices 1, 2, 3, 4 with edges 1→3, 2→1, 2→4, 4→2.)

Adjacency matrix:    Transitive closure:
0 0 1 0              0 0 1 0
1 0 0 1              1 1 1 1
0 0 0 0              0 0 0 0
0 1 0 0              1 1 1 1
Unit III-16
Warshall’s Algorithm
Constructs transitive closure T as the last matrix in the sequence
of n-by-n matrices R(0), … , R(k), … , R(n) where
R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the
first k vertices allowed as intermediate
Note that R(0) = A (adjacency matrix), R(n) = T (transitive closure)
(Same digraph as above: vertices 1, 2, 3, 4 with edges 1→3, 2→1, 2→4, 4→2.)

R(0) =     R(1) =     R(2) =     R(3) =     R(4) =
0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0
1 0 0 1    1 0 1 1    1 0 1 1    1 0 1 1    1 1 1 1
0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0
0 1 0 0    0 1 0 0    1 1 1 1    1 1 1 1    1 1 1 1
Unit III-17
Warshall’s Algorithm (recurrence)
On the k-th iteration, the algorithm determines for every pair of
vertices i, j whether a path exists from i to j with just vertices
1,…,k allowed as intermediate:

R(k)[i,j] = R(k-1)[i,j]                      (path using just 1,…,k-1)
            or
            R(k-1)[i,k] and R(k-1)[k,j]      (path from i to k and from k
                                              to j, each using just 1,…,k-1)

Initial condition: R(0) = A, the adjacency matrix.
Unit III-18
Warshall’s Algorithm (matrix generation)
Recurrence relating elements R(k) to elements of R(k-1) is:
R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
It implies the following rules for generating R(k) from R(k-1):
Rule 1 If an element in row i and column j is 1 in R(k-1),
it remains 1 in R(k)
Rule 2 If an element in row i and column j is 0 in R(k-1),
it has to be changed to 1 in R(k) if and only if
the element in its row i and column k and the element
in its column j and row k are both 1’s in R(k-1)
Unit III-19
Warshall’s Algorithm (example)
(Same digraph: vertices 1, 2, 3, 4 with edges 1→3, 2→1, 2→4, 4→2.)

R(0) =
0 0 1 0
1 0 0 1
0 0 0 0
0 1 0 0

R(1) =  (vertex 1 as intermediate adds the path 2→1→3)
0 0 1 0
1 0 1 1
0 0 0 0
0 1 0 0

R(2) =  (vertex 2 as intermediate adds the paths from 4 through 2)
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1

R(3) =  (vertex 3 has no outgoing edges, so nothing changes)
0 0 1 0
1 0 1 1
0 0 0 0
1 1 1 1

R(4) =  (vertex 4 as intermediate adds the paths from 2 through 4)
0 0 1 0
1 1 1 1
0 0 0 0
1 1 1 1
Unit III-20
Warshall’s Algorithm (pseudocode and analysis)
Time efficiency: Θ(n^3)
Space efficiency: matrices can be written over their predecessors
(with some care), so it’s Θ(n^2).
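The pseudocode was likewise an image; a Python sketch of the in-place, Θ(n^2)-space version described above (the name warshall is mine):

def warshall(A):
    # A: n-by-n 0/1 adjacency matrix; returns the transitive closure T
    n = len(A)
    R = [row[:] for row in A]      # R starts as R(0) = A
    for k in range(n):             # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                # R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

Overwriting R in place is safe here: on iteration k, row k and column k are not changed by the update, so the entries the formula reads are the same whether they carry superscript k or k−1.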
Unit III-21
Floyd’s Algorithm: All pairs shortest paths
Problem: In a weighted (di)graph, find shortest paths between
every pair of vertices
Same idea: construct the solution through a series of matrices D(0), …,
D(n) using increasing subsets of the vertices allowed as intermediate
Example: (weighted digraph on vertices 1, 2, 3, 4; figure omitted)

D(0) = the matrix of edge weights:
0 ∞ 4 ∞
1 0 4 3
∞ ∞ 0 ∞
6 5 1 0
Unit III-22
Floyd’s Algorithm (matrix generation)
On the k-th iteration, the algorithm determines shortest paths
between every pair of vertices i, j that use only vertices among
1,…,k as intermediate
D(k)[i,j] = min {D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j]}
(Figure: the shortest path from i to j either avoids vertex k, giving
D(k-1)[i,j], or passes through k, giving D(k-1)[i,k] + D(k-1)[k,j].)

Initial condition: D(0) is the matrix of edge weights, with ∞ where
there is no edge.
Unit III-23
Floyd’s Algorithm (example)
(Digraph on vertices 1, 2, 3, 4 with edges 1→3 (weight 3), 2→1 (2),
3→2 (7), 3→4 (1), 4→1 (6), as read off D(0).)

D(0) =        D(1) =        D(2) =
0 ∞ 3 ∞       0 ∞ 3 ∞       0 ∞ 3 ∞
2 0 ∞ ∞       2 0 5 ∞       2 0 5 ∞
∞ 7 0 1       ∞ 7 0 1       9 7 0 1
6 ∞ ∞ 0       6 ∞ 9 0       6 ∞ 9 0

D(3) =        D(4) =
0 10 3 4      0 10 3 4
2 0 5 6       2 0 5 6
9 7 0 1       7 7 0 1
6 16 9 0      6 16 9 0
Unit III-24
Floyd’s Algorithm (pseudocode and analysis)
Time efficiency: Θ(n^3)
Space efficiency: matrices can be written over their predecessors,
since the superscripts k and k−1 make no difference to D[i,k] and
D[k,j]; so Θ(n^2).
Note: works on graphs with negative edge weights but without negative cycles.
Shortest paths themselves can be found, too. How?
If D[i,k] + D[k,j] < D[i,j] then P[i,j] ← k
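A Python sketch matching this description, overwriting D in place and recording intermediate vertices in P for path reconstruction (the names floyd and P are mine):

def floyd(W):
    # W: n-by-n weight matrix, with float('inf') where there is no edge
    n = len(W)
    D = [row[:] for row in W]             # D starts as D(0) = W
    P = [[None] * n for _ in range(n)]    # P[i][j]: an intermediate vertex
    for k in range(n):                    # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                # D(k)[i,j] = min(D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j])
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k           # P[i,j] ← k, as above
    return D, P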
Unit III-25
Optimal Binary Search Trees
Problem: Given n keys a1 < … < an and probabilities p1, …, pn of
searching for them, find a BST with a minimum average number of
comparisons in a successful search.

Since the total number of BSTs with n nodes is given by C(2n,n)/(n+1),
which grows exponentially, brute force is hopeless.
Example: What is an optimal BST for keys A, B, C, and D with
search probabilities 0.1, 0.2, 0.4, and 0.3, respectively?
(Figure: BST with C at the root, B and D as its children, and A as
B’s left child.)

Average # of comparisons
= 1·0.4 + 2·(0.2 + 0.3) + 3·0.1
= 1.7
Unit III-26
DP for Optimal BST Problem
Let C[i,j] be minimum average number of comparisons made in
T[i,j], optimal BST for keys ai < …< aj , where 1 ≤ i ≤ j ≤ n.
Consider optimal BST among all BSTs with some ak (i ≤ k ≤ j )
as their root; T[i,j] is the best among them.
(Figure: ak at the root, with the optimal BST for ai, …, ak-1 as its
left subtree and the optimal BST for ak+1, …, aj as its right subtree.)

C[i,j] = min over i ≤ k ≤ j of { pk · 1
         + Σ s=i..k-1 ps · (level of as in T[i,k-1] + 1)
         + Σ s=k+1..j ps · (level of as in T[k+1,j] + 1) }
Unit III-27
DP for Optimal BST Problem (cont.)

(Figure: the table C[i,j] is filled diagonal by diagonal, starting from
C[i,i] = pi on the main diagonal and working up to the goal entry C[1,n].)

After simplifications, we obtain the recurrence for C[i,j]:

C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + Σ s=i..j ps   for 1 ≤ i ≤ j ≤ n
C[i,i] = pi   for 1 ≤ i ≤ n
Example:  key          A    B    C    D
          probability  0.1  0.2  0.4  0.3

The tables below are filled diagonal by diagonal: the left one is
filled using the recurrence
C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + Σ s=i..j ps ,  C[i,i] = pi ;
the right one, for the trees’ roots, records the values of k giving the minima.

Main table C[i,j]:              Root table R[i,j]:
     j: 0   1    2    3    4         j: 1  2  3  4
i=1     0  .1   .4  1.1  1.7    i=1     1  2  3  3
i=2         0   .2   .8  1.4    i=2        2  3  3
i=3              0   .4  1.0    i=3           3  3
i=4                   0   .3    i=4              4
i=5                        0

Optimal BST (root R[1,4] = 3, i.e. key C):
      C
     / \
    B   D
   /
  A
Unit III-29
Optimal Binary Search Trees
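The algorithm on this slide was an image in the original deck; a Python sketch that fills the C and R tables diagonal by diagonal per the recurrence above (the names optimal_bst, C, R are mine):

def optimal_bst(p):
    # p[1..n]: search probabilities for keys a1 < … < an (p[0] is unused)
    n = len(p) - 1
    # C[i][j]: min average comparisons for keys ai..aj; R[i][j]: its root
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    R = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i]
        R[i][i] = i
    for d in range(1, n):                  # diagonal: j - i = d
        for i in range(1, n - d + 1):
            j = i + d
            best, best_k = float('inf'), i
            for k in range(i, j + 1):      # try each ak as the root
                q = C[i][k - 1] + C[k + 1][j]
                if q < best:
                    best, best_k = q, k
            C[i][j] = best + sum(p[i:j + 1])
            R[i][j] = best_k
    return C[1][n], R

For the A–D example, optimal_bst([0, 0.1, 0.2, 0.4, 0.3]) yields 1.7 with R[1][4] = 3 (key C as the root), matching the tables above.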
Unit III-30
Analysis DP for Optimal BST Problem
Time efficiency: Θ(n^3), but can be reduced to Θ(n^2) by taking
advantage of monotonicity of entries in the root table, i.e., R[i,j]
is always in the range between R[i,j-1] and R[i+1,j]
Space efficiency: Θ(n^2)
Method can be expanded to include unsuccessful searches
Unit III-31
Knapsack Problem by DP
Given n items with
  integer weights: w1, w2, …, wn
  values:          v1, v2, …, vn
and a knapsack of integer capacity W, find the most valuable subset of
the items that fits into the knapsack.

Consider the instance defined by the first i items and capacity j (j ≤ W).
Let V[i,j] be the optimal value of such an instance. Then

V[i,j] = max {V[i-1,j], vi + V[i-1,j-wi]}   if j − wi ≥ 0
V[i,j] = V[i-1,j]                           if j − wi < 0

Initial conditions: V[0,j] = 0 and V[i,0] = 0
Unit III-32
Knapsack Problem by DP (example)
Example: knapsack of capacity W = 5

item  weight  value
  1     2     $12
  2     1     $10
  3     3     $20
  4     2     $15

                           capacity j
                         0   1   2   3   4   5
i = 0                    0   0   0   0   0   0
i = 1 (w1 = 2, v1 = 12)  0   0  12  12  12  12
i = 2 (w2 = 1, v2 = 10)  0  10  12  22  22  22
i = 3 (w3 = 3, v3 = 20)  0  10  12  22  30  32
i = 4 (w4 = 2, v4 = 15)  0  10  15  25  30  37

Backtracing finds the actual optimal subset, i.e. the solution.
Unit III-33
Example – Dynamic Programming Table
capacity W = 5
Unit III-34
Example
• Thus, the maximal value is V[4,5] = $37. We can find the
composition of an optimal subset by tracing back the computations of
this entry in the table.
• Since V[4,5] is not equal to V[3,5], item 4 was included in an
optimal solution, along with an optimal subset for filling the
5 − 2 = 3 remaining units of the knapsack capacity.
capacity W = 5
Unit III-35
Example
• The remaining subproblem is V[3,3].
• Here V[3,3] = V[2,3], so item 3 is not included.
• V[2,3] ≠ V[1,3], so item 2 is included.
capacity W = 5
Unit III-36
Example
• The remaining subproblem is V[1,2].
• V[1,2] ≠ V[0,2], so item 1 is included.
• The solution is {item 1, item 2, item 4}.
• Total weight is 5; total value is $37.
capacity W = 5
Unit III-37
The Knapsack Problem
• The time efficiency and space efficiency of this algorithm are both
in Θ(nW).
• The time needed to find the composition of an optimal solution is
in O(n + W).
Unit III-38
Knapsack Problem by DP (pseudocode)
Algorithm DPKnapsack(w[1..n], v[1..n], W)
    var V[0..n, 0..W], P[1..n, 1..W]: int
    for j := 0 to W do
        V[0,j] := 0
    for i := 0 to n do
        V[i,0] := 0
    for i := 1 to n do
        for j := 1 to W do
            if w[i] ≤ j and v[i] + V[i-1, j-w[i]] > V[i-1,j] then
                V[i,j] := v[i] + V[i-1, j-w[i]]; P[i,j] := j - w[i]
            else
                V[i,j] := V[i-1,j]; P[i,j] := j
    return V[n,W] and the optimal subset by backtracing

Running time and space: O(nW).
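A runnable Python sketch of the same algorithm, with the backtrace carried out the way the earlier example describes, by comparing V[i][j] with V[i-1][j] (the name dp_knapsack is mine):

def dp_knapsack(w, v, W):
    # w, v: weights and values of items 1..n (0-indexed lists); W: capacity
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            V[i][j] = V[i - 1][j]              # without item i
            if w[i - 1] <= j:                  # with item i, if it fits
                V[i][j] = max(V[i][j], v[i - 1] + V[i - 1][j - w[i - 1]])
    # Backtrace: item i was taken iff it changed the optimal value
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            items.append(i)                    # 1-based item number
            j -= w[i - 1]
    return V[n][W], sorted(items)

For the example above, dp_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5) returns (37, [1, 2, 4]).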
Unit III-39
Memory Function
• The classic dynamic programming approach fills a table with
solutions to all smaller subproblems, but each of them is solved
only once.
• An unsatisfying aspect of this approach is that solutions to some
of these smaller subproblems are often not necessary for getting a
solution to the given problem.
Unit III-40
Memory Function
• Since this drawback is not present in the top-down approach, it is
natural to try to combine the strengths of the top-down and
bottom-up approaches.
• The goal is to get a method that solves only the subproblems that
are necessary, and solves each of them only once. Such a method
exists; it is based on using memory functions.
Unit III-41
Memory Function
• Initially, all the table’s entries are initialized with a special
“null” symbol to indicate that they have not yet been calculated.
• Thereafter, whenever a new value needs to be calculated, the method
checks the corresponding entry in the table first: if this entry is
not “null,” it is simply retrieved from the table;
• otherwise, it is computed by the recursive call, whose result is
then recorded in the table.
Unit III-42
Memory Function for solving Knapsack Problem
Unit III-43
Memory Function for solving Knapsack Problem
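The algorithm and its trace appeared as figures in the original slides; a Python sketch of a memory-function knapsack consistent with the description above (names are mine; −1 plays the role of the “null” symbol):

def mf_knapsack(i, j, w, v, F):
    # F[i][j] = -1 means "not yet calculated"; row 0 and column 0 hold 0s
    if F[i][j] < 0:
        if j < w[i - 1]:                       # item i does not fit
            value = mf_knapsack(i - 1, j, w, v, F)
        else:                                  # best of skipping or taking it
            value = max(mf_knapsack(i - 1, j, w, v, F),
                        v[i - 1] + mf_knapsack(i - 1, j - w[i - 1], w, v, F))
        F[i][j] = value
    return F[i][j]

w, v, W = [2, 1, 3, 2], [12, 10, 20, 15], 5
F = [[0] * (W + 1)] + [[0] + [-1] * W for _ in range(len(w))]
print(mf_knapsack(len(w), W, w, v, F))         # prints 37, as in the example

Only the entries actually needed are ever filled in, which is the point of the method.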
Unit III-44
Memory Function
• In general, we cannot expect more than a constant-factor gain from
using the memory function method for the knapsack problem, because
its time efficiency class is the same as that of the bottom-up
algorithm.
• A memory function method may be less space-efficient than a
space-efficient version of a bottom-up algorithm.
Unit III-45
Conclusion
• Dynamic programming is a useful technique for solving certain kinds
of problems.
• When the solution can be recursively described in terms of partial
solutions, we can store these partial solutions and re-use them as
necessary.