Mar 04 2013

Sorting.  That perennial problem that forms the backbone of every introductory Algorithms class.  Now, hardly anyone ever has to code up their own sorting algorithm, as great implementations are included in the standard libraries for every language worth naming.  But, ask anyone who passed an algorithms class in college which particular sorting algorithm to use on a large unsorted array of real numbers, and they would probably answer Quicksort or some other O(n \log n) algorithm.  Give this hypothetical computer scientist a half hour and they would most likely be able to produce a reasonable facsimile of Quicksort on the whiteboard, even if they had not thought about it since their college days.

But O( n \log n) sorting algorithms are not the end of sorting.  There is a certain class of sorting problems that can be solved faster.  Much faster.  In certain circumstances, you can sort an array in O( n).  That's linear time!  The speedup over naively using Quicksort on these problems is immense.

So how does linear sort work?  The central concept goes back to one of my favorite topics in algorithms--techniques that trade memory space for speed.  The restriction for linear sort is that it can only be done when the numbers being sorted are both bounded and countable.  An example of such a set of numbers is integers within some defined range.  Instead of shuffling items around an array as is done in Quicksort, the linear sorting algorithm creates a new array to keep a count of how many times each number in the source range is encountered, and then uses these counts to create the final sorted array.

Let us now add some specificity to this discussion.  We will start with an unsorted array \mathbf{X}= \langle x_1, x_2, ..., x_n \rangle in which every item x_i is an integer in the range x_{min} \leq x_i \leq x_{max}.  We define our new array of counts \mathbf{C} over the range [ x_{min}, x_{max}].  Finally, we return the sorted array \mathbf{Y}.  For example, given \mathbf{X}= \langle 4, 2, 4, 1 \rangle, we have x_{min}= 1 and x_{max}= 4, the counts are \mathbf{C}= \langle 1, 1, 0, 2 \rangle, and the output is \mathbf{Y}= \langle 1, 2, 4, 4 \rangle.

Here is the pseudocode for this algorithm:

Linear-Sort ( \mathbf{X})
    x_{min}= \min( \mathbf{X})
    x_{max}= \max( \mathbf{X})
    n= \text{size}( \mathbf{X})

    \mathbf{C}= new array( x_{min}, x_{max}), initialized to all zeros

    for each x_i \in \mathbf{X}
        \mathbf{C}_{x_i}= \mathbf{C}_{x_i}+ 1
    end

    \mathbf{Y}= new array of size n
    i= x_{min}
    j= 1
    while j \leq n
        if \mathbf{C}_{i}> 0
            \mathbf{Y}_{j}= i
            \mathbf{C}_{i}= \mathbf{C}_{i}- 1
            j= j+ 1
        else
            i= i+ 1
        end
    end

    return \mathbf{Y}
end

The code above starts by finding some information about the unsorted array \mathbf{X}, and then creates the array of counts \mathbf{C}.  After that, we have a loop which goes through all n elements of \mathbf{X} and increments each count in \mathbf{C} as needed.  Finally, we have the last loop, which walks through all r= x_{max}- x_{min}+ 1 entries of \mathbf{C} while writing the n elements of the output array \mathbf{Y}.  Overall, the runtime of the entire algorithm is then O( \max( n, r)).

Of course, we are sacrificing space to get this linear runtime.  We can see that we need an r-length array \mathbf{C} to hold the counts.  Therefore, the memory space required is O( \max( n, r)) as well.
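
To make the pseudocode concrete, here is a minimal Python sketch of the same algorithm (the language choice and the name linear_sort are mine; the post itself only gives pseudocode).  It handles the index mapping by offsetting each value by x_{min}:

    def linear_sort(X):
        # Sort a list of bounded integers in O(max(n, r)) time, where
        # r = x_max - x_min + 1 is the size of the value range.
        # Assumes X is non-empty.
        x_min = min(X)
        x_max = max(X)
        r = x_max - x_min + 1

        # Count occurrences; index i of counts corresponds to the
        # value i + x_min.
        counts = [0] * r
        for x in X:
            counts[x - x_min] += 1

        # Emit each value as many times as it was counted.
        Y = []
        for i, c in enumerate(counts):
            Y.extend([i + x_min] * c)
        return Y

Using the example from earlier:

    >>> linear_sort([4, 2, 4, 1])
    [1, 2, 4, 4]

The output loop here is a single pass over the counts rather than the interleaved while loop above, but it does the same O( \max( n, r)) work.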

A few final thoughts:

  • I glossed over the array indexing for \mathbf{C} to make the algorithm description readable.  It's unlikely that a programming language would support exactly what I have written above--you would instead have to create a mapping for the index from [ x_{min}, x_{max}] to [ 1, r] and use that for indexing the \mathbf{C} array (the Python sketch above does exactly this by offsetting each value by x_{min}).
  • If the range r is large compared to the number of items in the array n, it probably makes sense to use a hash table instead of an array to keep track of the counts (\mathbf{C} above); see the sketch after this list.  That would save memory space since \mathbf{C} would be sparse.  It would also save time by shortening the r-length final loop that produces the output array \mathbf{Y}.  Note that the exact space and time needed in this case ultimately rely on how the hash table being used is implemented, which is why I have not provided a final Big-O.
  • For another example of trading memory space for speed, check out my post on dynamic programming.
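
For the hash table idea in the second bullet, here is one possible sketch, again in Python, with collections.Counter standing in for the hash table (this code is mine, not from the post).  One wrinkle the bullet's caveat hints at: to produce sorted output you still have to order the distinct keys somehow, which here costs an extra O( k \log k) for k distinct values:

    from collections import Counter

    def linear_sort_sparse(X):
        # Counting sort backed by a hash table, for when the value
        # range r is much larger than the number of items n.
        counts = Counter(X)  # hash table: value -> occurrence count

        # sorted() visits only the k distinct values, so this costs
        # O(k log k) plus O(n) to emit the output--cheap when k << r.
        Y = []
        for value in sorted(counts):
            Y.extend([value] * counts[value])
        return Y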
