Jan 28, 2013
 

When I taught Computer Science at university, dynamic programming was a topic that I thought was particularly important to cover in the Algorithms courses. The idea was to instill in the students the understanding that memory (space) can be exchanged for speed (at very favorable exchange rates, in some computational situations). When you have a problem that it can be applied to, dynamic programming is an amazingly powerful technique, capable of reducing the runtime needed to find a solution from absurd exponential runtime (O(2^n)) down to polynomial runtime (O(n^2)).

Let that example runtime comparison sink in for a moment. If you need help understanding how huge this is, compare it for the case where n = 1000. In this case, our absurd exponential runtime from a non-dynamic programming algorithm would be (on the order of) 2^1000 ≈ 1.0715 × 10^301, while our improved polynomial runtime from a dynamic programming solution would be (on the order of) 1000^2 = 10^6. The dynamic programming algorithm would therefore take phenomenally less time to generate a solution. And the improvement only gets better as the problem you are addressing gets bigger. Where it works, dynamic programming pays off in a big way.
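If you want to see those magnitudes for yourself, a couple of lines of Python (used here purely as a calculator, since its integers have arbitrary precision) will print them out:

# Back-of-the-envelope comparison of the two growth rates at n = 1000.
n = 1000
exponential = 2 ** n    # steps for the exponential-time algorithm
polynomial = n ** 2     # steps for the dynamic programming algorithm

print(len(str(exponential)))   # 302 digits, so on the order of 10^301
print(polynomial)              # 1000000, or 10^6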

So, how does dynamic programming work? At its most fundamental, it focuses on problems which can be defined recursively, but which, at different points in program execution, calculate the same piece of information again and again and again... This happens because, for one reason or another, the algorithm must call a function multiple times with the same parameters. In a dynamic programming solution, however, these repeated calculations are performed only once, and the result is stored in an array or hash table. Then, whenever the solution to one of those calculations is required, it is merely looked up. Rather than recalculating a repeated value at potentially great cost, a cheap lookup operation is performed instead.
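As a rough sketch of that lookup-before-compute pattern, here it is in Python, with a dictionary standing in for the hash table; expensive_calculation is just a placeholder for whatever work the algorithm would otherwise repeat:

import time

cache = {}

def expensive_calculation(x):
    # Stand-in for a costly computation we would rather not repeat.
    time.sleep(0.5)
    return x * x

def solve(x):
    if x not in cache:                    # do the real work only the first time
        cache[x] = expensive_calculation(x)
    return cache[x]                       # every later request is a cheap lookup

solve(7)   # slow: performs the calculation
solve(7)   # fast: the answer comes straight out of the cache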

Saving solutions while the program is executing, and keeping them around until they are needed later, takes memory space. This is where the tradeoff between space and time comes from. By sacrificing space to store answers you will need in the future, you avoid spending time recalculating them every time they come up.

The Fibonacci sequence is a classic example that demonstrates the power of dynamic programming. For this discussion, we will ignore the fact that a closed-form solution exists, and instead focus on its recursive definition. The nth Fibonacci number is defined as F(n) = F(n − 1) + F(n − 2) for all n > 1, with F(0) = F(1) = 1. So now, we write a simple program that encapsulates this formula:

F(n)
    if (n == 0 || n == 1)
        return 1
    endif
    return F(n - 1) + F(n - 2)
end

We can see that if a simple program is written recursively following that definition, many calculations will be repeated. For instance, if we wanted to calculate F(1000), then F(999) is calculated once, F(998) twice, F(997) three times, and so forth following the Fibonacci sequence, until F(2) is calculated roughly 2.7 × 10^208 times. This is a LOT of repeated calculations! We actually need to go through that little program on the order of 10^209 times in total to calculate F(1000).
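If you would like to watch the repetition happen, here is one way to translate that little program into Python, with a call counter bolted on (shown for n = 20, since the naive version would never finish for n = 1000):

from collections import Counter

calls = Counter()

def F(n):
    # Naive recursive Fibonacci, straight from the definition above.
    calls[n] += 1
    if n == 0 or n == 1:
        return 1
    return F(n - 1) + F(n - 2)

F(20)
print(calls[2])              # 4181 -- F(2) is recomputed 4181 times just for F(20)
print(sum(calls.values()))   # 21891 calls in total, roughly twice F(20) itself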

However, we can take that little program's repetitive nature into account and create a dynamic programming algorithm to take its place. For this, we create an array and store Fibonacci numbers we have already calculated before moving on. The dynamic programming algorithm is then:

Fdp(n)
    fib = array(n + 1)
    fib(0) = 1
    fib(1) = 1

    for ii = 2 to n
        fib(ii) = fib(ii - 1) + fib(ii - 2)
    end

    return fib(n)
end
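For anyone who wants to run it, here is one straightforward Python rendering of Fdp, with a list of n + 1 entries playing the role of the array:

def Fdp(n):
    # Bottom-up dynamic programming Fibonacci: each value is computed exactly once.
    fib = [1] * (n + 1)        # fib[0] and fib[1] start out as 1
    for ii in range(2, n + 1):
        fib[ii] = fib[ii - 1] + fib[ii - 2]
    return fib[n]

print(Fdp(20))     # 10946, matching the naive recursive version
print(Fdp(1000))   # returns essentially instantly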

If we were to walk through the dynamic programming algorithm to calculate Fdp(1000), we can see that we calculate Fdp(0) once, Fdp(1) once, Fdp(2) once, and so forth until Fdp(1000) is calculated...just once. (If you don't believe me, code it up and give it a spin.) Doing a little computational complexity analysis, we can see that our naive, non-dynamic programming algorithm has a ludicrous runtime of O(2^n), while our dynamic programming algorithm has a runtime of just O(n). The time savings are immense!

But it is not all shiny. To get these time savings, we are sacrificing memory space. For the example of calculating Fdp(1000), an array holding 1001 entries (indices 0 through 1000) must be created. Thus, there is a space complexity that must be accounted for: Fdp(n) uses O(n) space, which really is not that great a price to pay for the speedup it makes possible.

And that is the power of dynamic programming: the ability to sacrifice memory space to significantly improve runtime. It is my opinion that recognizing when this is the case, and being able to act on it, is one mark of an excellent programmer. So start looking around; problems where dynamic programming can be applied are fairly common (especially when comparing sets and sequences).