
When working on "The fastest sort for BrainF***", I discovered this algorithm, which is O(N*k), where k is the maximum value in the input. It requires O(N) extra storage.

The physical analogy is that you have N stacks of tokens. The height of a stack represents the value to be sorted. (Each token represents a bit.) Set aside space for another N stacks. Take one token off the top of each stack that still has tokens, then add one token to each stack in the new set, from right to left, until your hand is empty. Repeat until all the original stacks are empty. The new set is now sorted ascending, left to right. (The code below adds tokens left to right instead, so its result comes out descending.)

In C:

 #include <stdlib.h>   /* calloc, free */
 #include <string.h>   /* memcpy */

 void sort(int A[], int N)
 {
    int *R = calloc(N, sizeof(int));   /* the N new stacks, initially empty */
    int count;
    do {
      int i;
      count = 0;
      /* take one token off the top of every stack that still has tokens */
      for (i = 0; i < N; i++) if (A[i]) { count++; A[i]--; }
      /* drop one token onto each of the first 'count' new stacks */
      for (i = 0; i < count; i++) R[i]++;
    } while (count);
    memcpy(A, R, N * sizeof(int));  /* A is now sorted descending */
    free(R);
 }
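
In case anyone wants to run it, here's a minimal driver (the sample values are arbitrary; compile it together with the sort above):

 #include <stdio.h>

 int main(void)
 {
    int A[] = {3, 2, 4, 2};
    int N = sizeof(A) / sizeof(A[0]);

    sort(A, N);

    for (int i = 0; i < N; i++)
        printf("%d ", A[i]);   /* prints: 4 3 2 2 */
    printf("\n");
    return 0;
 }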

Does this algorithm have a name? It seems similar to a Bead Sort, but I don't think it's quite the same.

4 Comments
  • Is it bad that I only understood the analogy after understanding the code? ;-) I was mainly confused by the fact that it isn't the coins you are sorting but the numbers represented by the count of the coins. So it doesn't matter which stack a coin ends up in, as long as there are the same number of stacks of the same size. I have no idea what that kind of sort is called, but it definitely sounds like it should have a name. ;-) Commented Jan 26, 2012 at 15:35
  • Tried to update the explanation. (When I started with the code on the other site, it got confused for a counting sort - I can't win :) Commented Jan 26, 2012 at 15:55
  • It's a very interesting question though. I'd say it was a bead sort, except the Wikipedia page for that says it has O(n^2) space complexity. I can't find a simple implementation of bead sort to properly compare it with, though. :) Commented Jan 26, 2012 at 16:10
  • Wikipedia's statements about algorithms and data structures are best taken with a grain of salt. I suspect, but am too lazy to prove, that one is simply a more space-efficient transformation of the other. Commented Jan 26, 2012 at 16:54

2 Answers


Turns out I wasn't too lazy after all. It is Bead Sort. Here's the definition from the original paper (PDF link):

Consider a set A of n positive integers. . . For all a in A drop a beads (one bead per rod) along the rods, starting from the 1st rod to the a'th rod. Finally, the beads, seen level by level, from the nth level to the first level, represent A in ascending order.
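
For comparison, here is a rough sketch of the algorithm as the paper describes it, with the frame kept as an explicit 2-dimensional grid of booleans (the function and variable names are mine, not the paper's):

 #include <stdlib.h>

 /* Rough sketch of the paper's version: an explicit 2-D frame of "beads".
    'k' is the largest value in A, so the frame needs O(N*k) space.        */
 void bead_sort_grid(int A[], int N)
 {
    int k = 0;
    for (int i = 0; i < N; i++) if (A[i] > k) k = A[i];

    /* frame[level][rod]: one level (row) per input number */
    unsigned char *frame = calloc((size_t)N * k, sizeof(unsigned char));

    /* drop A[i] beads along level i, starting from the 1st rod */
    for (int i = 0; i < N; i++)
        for (int r = 0; r < A[i]; r++)
            frame[i * k + r] = 1;

    /* let the beads fall: each rod's beads settle onto the bottom levels */
    for (int r = 0; r < k; r++) {
        int beads = 0;
        for (int i = 0; i < N; i++) beads += frame[i * k + r];
        for (int i = 0; i < N; i++)
            frame[i * k + r] = (i >= N - beads);   /* bottom 'beads' levels */
    }

    /* read the levels top to bottom: the row sums are now ascending */
    for (int i = 0; i < N; i++) {
        int count = 0;
        for (int r = 0; r < k; r++) count += frame[i * k + r];
        A[i] = count;
    }

    free(frame);
 }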

This implementation transforms that algorithm in two ways:

  1. Reflect the 'frame' in which it's working across the line y=x. This changes the result such that the number of 'beads' in each column represents the output sorted in descending order. In the original algorithm, the number of 'beads' in each row represents the output sorted in ascending order.
  2. Rather than representing the 'frame' as a 2-dimensional array of boolean values, represent it as a 1-dimensional array of integers. Each slot in the array corresponds to a 'rod', and its value represents the number of beads on that rod. This second bit is a natural transformation - it simply acknowledges that, since a 'bead' cannot float in mid-air, recording just the number of beads on a rod tells us all there is to know about how they are arranged on it (a tiny sketch of this follows below). You place a bead on a rod by incrementing the corresponding number.
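
To make that second point concrete, here is a tiny sketch showing that the full 2-D frame can always be reconstructed from the 1-D rod counts, so nothing is lost by storing only the counts (the names here are mine):

 #include <stdbool.h>

 /* 1-D representation: rods[r] = number of beads currently on rod r.
    "Placing a bead on rod r" is simply rods[r]++.                     */

 /* A bead can't float in mid-air, so the whole 2-D frame is implied by
    the counts: a bead occupies (level, rod) exactly when that rod is
    at least that tall (level 0 = bottom of the frame).                */
 static bool bead_at(const int rods[], int level, int rod)
 {
    return level < rods[rod];
 }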

Here's some clarification on that first point, taken straight from the diagram on the paper's second page: As the algorithm is originally implemented, the array [3, 2, 4, 2] would be represented by a grid that looks like:

* * *
* *
* * * *
* *

And letting the beads fall produces:

* *
* *
* * *
* * * *

You then have to read the rows, from top to bottom, to get the output: [2, 2, 3, 4]. Whereas in the version that gives results in descending order, you are effectively doing this instead:

  *          *
  *   *      * *
* * * *  ->  * * * *
* * * *      * * * *

4 Comments

I seem to be good at re-inventing other people's algorithms... Thanks for the analysis.
The transformation from 2-D to 1-D reduces the space requirements from N*k to N, which seems like a pretty good win.
Yes. I'm impressed that the paper goes to the trouble of detailing analog and digital hardware implementations, and a version using cellular automata (I guess this was 2002), but doesn't bother with a more straightforward programmatic implementation.
Ah, I think I understand now why Wikipedia said it was n^2 space. I'm glad somebody was able to confirm it, though. ;-)

I know Radix Sort as one representative of O(n*K) complexity sorts.

http://en.wikipedia.org/wiki/Radix_sort
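
Roughly, an LSD radix sort looks like this (a bare-bones sketch for non-negative ints, base 10; the names are mine), which works quite differently from the bead-counting approach above:

 #include <stdlib.h>
 #include <string.h>

 /* Bare-bones LSD radix sort for non-negative ints, base 10.
    O(N * d), where d is the number of digits in the largest value. */
 void radix_sort(int A[], int N)
 {
    int max = 0;
    for (int i = 0; i < N; i++) if (A[i] > max) max = A[i];

    int *tmp = malloc(N * sizeof(int));

    for (int exp = 1; max / exp > 0; exp *= 10) {
        int count[10] = {0};

        /* stable counting sort on the current digit */
        for (int i = 0; i < N; i++) count[(A[i] / exp) % 10]++;
        for (int d = 1; d < 10; d++) count[d] += count[d - 1];
        for (int i = N - 1; i >= 0; i--)
            tmp[--count[(A[i] / exp) % 10]] = A[i];

        memcpy(A, tmp, N * sizeof(int));
    }
    free(tmp);
 }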

1 Comment

I don't think these two have much in common.
