Mar 04 2013
 

Sorting.  That perennial problem that forms the backbone of every introductory Algorithms class.  Now, hardly anyone ever has to code up their own sorting algorithm, as great implementations are included in the standard libraries for every language worth naming.  But, ask anyone who passed an algorithms class in college which particular sorting algorithm to use on a large unsorted array of real numbers, and they would probably answer Quicksort or some other O(n \log n) algorithm.  Give this hypothetical computer scientist a half hour and they would most likely be able to produce a reasonable facsimile of Quicksort on the whiteboard, even if they had not thought about it since their college days.

But O( n \log n) sorting algorithms are not the end of sorting.  There is a certain class of sorting problems that can run faster.  Much faster.  In certain circumstances, you can sort an array in O( n).  That's linear time!  The speedup over naively using Quicksort on these problems is immense.

So how does linear sort work?  The central concept goes back to one of my favorite topics in algorithms--techniques that trade memory space for speed.  The restriction for linear sort is that it can only be done when the numbers being sorted are both bounded and countable.  An example of such a set of numbers is integers within some defined range.  Instead of shuffling items around an array as Quicksort does, the linear sorting algorithm creates a new array to count how many times each number in the source range is encountered, and then uses these counts to create the final sorted array.

Let us now add some specificity to this discussion.  We will start with an unsorted array \mathbf{X}= \langle x_1, x_2, ..., x_n \rangle in which every item x_i is an integer in the range x_{min} \leq x_i \leq x_{max}.  We define our new array of counts \mathbf{C} over the range [ x_{min}, x_{max}]. Finally, we return the sorted array \mathbf{Y}.

Here is the pseudocode for this algorithm:

Linear-Sort ( \mathbf{X})
    x_{min}= \min( \mathbf{X})
    x_{max}= \max( \mathbf{X})
    n= \text{size}( \mathbf{X})

    \mathbf{C}= new array( x_{min}, x_{max}), with every count initialized to 0

    for each x_i \in \mathbf{X}
        \mathbf{C}_{x_i}= \mathbf{C}_{x_i}+ 1
    end

    \mathbf{Y}= new array of size n
    i= x_{min}
    j= 1
    while j \leq n
        if \mathbf{C}_{i}> 0
            \mathbf{Y}_{j}= i
            \mathbf{C}_{i}= \mathbf{C}_{i}- 1
            j= j+ 1
        else
            i= i+ 1
        end
    end

    return \mathbf{Y}
end

The code above starts by finding some information about the unsorted array \mathbf{X}, and then creates the array of counts \mathbf{C}.  After that, we have a loop which goes through all n elements of \mathbf{X} and increments each count in \mathbf{C} as needed.  Finally, we have the last loop, which walks across all r= x_{max}- x_{min}+ 1 entries of \mathbf{C} while filling in the n entries of the output array \mathbf{Y}.  Overall, the runtime of the entire algorithm is then O( \max( n, r)).

Of course, we are sacrificing space to get this linear runtime.  We can see that we need an r-length array \mathbf{C} to hold the counts, on top of the n-length output array \mathbf{Y}.  Therefore, the memory space required is O( \max( n, r)) as well.
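For concreteness, here is one way the algorithm above might look in Python.  This is just a sketch using my own function and variable names, not code from the pseudocode itself.

def linear_sort(xs):
    # Counting-based linear sort, translated from the pseudocode above.
    x_min = min(xs)
    x_max = max(xs)
    r = x_max - x_min + 1          # number of possible values in the range

    # One count per possible value, all starting at zero.
    counts = [0] * r
    for x in xs:
        counts[x - x_min] += 1     # shift by x_min so indices start at 0

    # Walk the counts in order, emitting each value as many times as it
    # was seen.
    ys = []
    for i, count in enumerate(counts):
        ys.extend([x_min + i] * count)
    return ys

print(linear_sort([5, 3, 9, 3, 7, 5, 5]))   # [3, 3, 5, 5, 5, 7, 9]

The counts[x - x_min] offset is exactly the index mapping mentioned in the first note below.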

A few final thoughts:

  • I glossed over the array indexing for \mathbf{C} to make the algorithm description readable.  It's unlikely that a programming language would support exactly what I have written above--you would instead have to create a mapping for the index from [ x_{min}, x_{max}] to [ 1, r] and use that for indexing the \mathbf{C} array.
  • If the range r is large compared to the number of items n in the array, it probably makes sense to use a hash table instead of an array to keep track of the counts (\mathbf{C} above); see the sketch after this list.  That would save memory space, since \mathbf{C} would be sparse.  It would also save time by shortening the r-length final loop that produces the output array \mathbf{Y}.  Note that the exact space and time needed in this case ultimately depend on how the hash table being used is implemented, which is why I have not provided a final Big-O.
  • For another example of trading memory space for speed, check out my post on dynamic programming.
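Here is a rough sketch of that hash-table variant in Python, again with my own names; Python's built-in dict plays the role of the hash table.

from collections import defaultdict

def sparse_linear_sort(xs):
    # Keep counts only for the values that actually occur, instead of
    # allocating an r-length array.
    counts = defaultdict(int)
    for x in xs:
        counts[x] += 1

    # Emit the distinct values in order, each repeated by its count.
    ys = []
    for x in sorted(counts):
        ys.extend([x] * counts[x])
    return ys

print(sparse_linear_sort([1000000, 3, 3, -7]))   # [-7, 3, 3, 1000000]

One honest caveat: the sorted() call over the distinct values reintroduces a comparison sort over however many distinct values there are, so this variant only pays off when that number is small compared to r--which is exactly the sparse situation described in the note above.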
Jan 28 2013
 

When I taught Computer Science at university, dynamic programming was a topic that I thought was particularly important to cover in the Algorithms courses. The idea was to instill in the students the understanding that memory (space) can be exchanged for speed (at very favorable exchange rates, in some computational situations). When you have a problem it can be applied to, dynamic programming is an amazingly powerful technique, capable of reducing the runtime needed to find a solution from an absurd exponential runtime (O( 2^n)) down to a polynomial runtime (O( n^2)).

Let that example runtime comparison sink in for a moment. If you need help understanding how huge this is, compare it for the case where n= 1000. In this case, our absurd exponential runtime from a non-dynamic programming algorithm would be (on the order of) 2^{1000}= 1.0715 \times 10^{301}, while our improved polynomial runtime from a dynamic programming solution would be (on the order of) 1000^2= 10^6. The dynamic programming algorithm would therefore take phenomenally less time to generate a solution. And the improvement only gets better as the problem you are addressing gets bigger. Where it works, dynamic programming pays off in a big way.
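If you want to see that gap for yourself, a couple of lines of Python make it concrete (this is my own quick check, not part of the original comparison):

n = 1000
print(n ** 2)             # 1000000 -- the polynomial cost
print(len(str(2 ** n)))   # 302 -- the exponential cost is a 302-digit number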

So, how does dynamic programming work? At its most fundamental, it focuses on problems which can be defined recursively, but which, at different points in program execution, calculate the same piece of information again and again and again... This happens because, for one reason or another, the algorithm must call a function multiple times with the same parameters. In a dynamic programming solution, however, these repeated calculations are performed only once, and their results are stored in an array or hash table. Then, when the solution to one of those calculations is required, it is merely looked up. Instead of recalculating a repeated value at potentially great cost, a cheap lookup operation is performed instead.

Saving solutions while the program is executing and keeping them until they are needed later takes memory space. And this is where the tradeoff between space and time comes from. By sacrificing space to store answers you will need in the future, you avoid spending time recalculating them every time they are needed.

The Fibonacci sequence is a classic example that demonstrates the power of dynamic programming. For this discussion, we will ignore the fact that a closed-form solution exists, and instead focus on its recursive definition. The nth Fibonacci number is defined as F(n) = F(n−1)+ F(n−2) for all n > 1, and F(0)=F(1)=1. So now, we write a simple program that encapsulates this formula:

F( n)
    if( n== 0|| n== 1)
        return 1
    endif
    return F( n- 1)+ F( n- 2)
end

We can see that if a simple program is written recursively following that definition, many calculations will be repeated. For instance, if we wanted to calculate F( 1000), then F( 999) is calculated once, F( 998) twice, F( 997) three times, and so forth following the Fibonacci sequence, until F( 2) is calculated roughly 2.7 \times 10^{208} times. This is a LOT of repeated calculations! In total, we end up going through that little program roughly 2 F( 1000) \approx 1.4 \times 10^{209} times just to calculate F( 1000).
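Those counts are easy to check for a smaller n. Here is a quick Python sketch (the instrumentation and names are mine) that tallies how many times the naive recursion is entered with each argument:

from collections import Counter

calls = Counter()

def F(n):
    # Naive recursive Fibonacci from the pseudocode above, instrumented
    # to record how often each argument is evaluated.
    calls[n] += 1
    if n == 0 or n == 1:
        return 1
    return F(n - 1) + F(n - 2)

F(25)                                   # small enough to finish instantly
print(calls[24], calls[23], calls[22])  # 1 2 3
print(calls[2])                         # F(2) is evaluated 46368 times
print(sum(calls.values()))              # 242785 calls in total, 2 * F(25) - 1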

However, we can take that little program's repetitive nature into account and create a dynamic programming algorithm to take its place. For this, we create an array and store Fibonacci numbers we have already calculated before moving on. The dynamic programming algorithm is then:

Fdp( n)
    fib= new array( 0, n);
    fib( 0)= 1;
    fib( 1)= 1;

    for ii= 2 to n
        fib( ii)= fib( ii- 1)+ fib( ii- 2);
    end

    return fib( n);
end

If we were to walk through the dynamic programming algorithm to calculate Fdp( 1000), we can see that we calculate Fdp( 0) once, Fdp( 1) once, Fdp( 2) once, and so forth, until Fdp( 1000) is calculated...just once. (If you don't believe me, code it up and give it a spin.) Doing a little computational complexity analysis, we can see that our naive, non-dynamic programming algorithm has a ludicrous O( 2^n) runtime, while our dynamic programming algorithm runs in O( n). The time savings are immense!
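For reference, here is roughly how the two versions might look in Python, plus a top-down variant that uses the hash-table caching described earlier. All of the names are mine; this is a sketch, not code from the original post.

def fib_naive(n):
    # Direct translation of F(n) above: an exponential number of calls.
    if n == 0 or n == 1:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n):
    # Direct translation of Fdp(n) above: fill a table from the bottom up.
    fib = [1] * (n + 1)
    for i in range(2, n + 1):
        fib[i] = fib[i - 1] + fib[i - 2]
    return fib[n]

_fib_cache = {0: 1, 1: 1}

def fib_memo(n):
    # Top-down flavour of the same trade-off: keep the recursive shape of
    # F(n), but store each result in a hash table (a dict) the first time
    # it is computed and merely look it up after that.
    if n not in _fib_cache:
        _fib_cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return _fib_cache[n]

print(fib_dp(500) == fib_memo(500))   # True, and both return instantly

Either cached version makes the same trade the post describes: roughly n cheap steps and O( n) extra memory instead of an exponential pile of repeated calls. (The recursive fib_memo will run into Python's default recursion limit for very large n, which is one practical reason to prefer the bottom-up loop.)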

But it is not all shiny. To get these time savings, we are sacrificing memory space. For the example of calculating Fdp( 1000), an array holding every value from fib( 0) up to fib( 1000) must be created. Thus, there is a space complexity that must be accounted for: Fdp( n) requires O( n) space, which really is not that great a price to pay for the speedup it makes possible.

And that is the power of dynamic programming: the ability to sacrifice memory space to significantly improve runtime. It is my opinion that recognizing when this is possible, and being able to act on it, is one mark of an excellent programmer. So start looking around; problems where dynamic programming can be applied are fairly common (especially when comparing sets and sequences).