I have to sort a large array of doubles of size 100000.
The point is that I do not want to sort the whole array but only find the largest 20000 elements in descending order.
Currently I am using selection sort. Any way to improve the performance?
100,000 is not a very large array on most modern devices. Are you sure you can't just sort all of them using a standard library sorting function?
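For reference, a full descending sort is a one-liner with the standard library. A minimal sketch, assuming C++ since the question only says "array of doubles":

    #include <algorithm>
    #include <functional>
    #include <vector>

    void sort_descending(std::vector<double>& a) {
        // O(n log n) introsort; n = 100,000 typically takes a few milliseconds.
        std::sort(a.begin(), a.end(), std::greater<double>());
    }

And std::partial_sort answers the question even more directly: std::partial_sort(a.begin(), a.begin() + 20000, a.end(), std::greater<double>()) puts only the largest 20,000 elements, in descending order, at the front of the array.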
You can avoid a full sort by using a variation of heapsort. Normally in a heapsort you build a heap of the entire data set (100,000 elements in your case). Instead, only allow the heap to grow to 20,000 elements, and keep the smallest of those elements at the top (a min-heap). Once the heap is full (20,000 elements), compare each subsequent element of the data set to the top of the heap. If the next element is smaller than the top of the heap, just skip it. If it's larger than the top of the heap, pop the top of the heap and insert the element from the data set.
Once you've gone through the entire data set, you have a heap of the 20,000 largest elements. Popping them one by one yields ascending order, so fill your output array from the back to get the descending order you want.
This algorithm runs in O(N log K) time, where N is the size of the data set (100,000 in your example) and K is the number of elements you want to keep (20,000 in your example).
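A minimal sketch of this, assuming C++ (the question doesn't name a language) and using std::priority_queue as the bounded min-heap:

    #include <cstddef>
    #include <functional>
    #include <queue>
    #include <vector>

    // Returns the k largest elements of `data` in descending order.
    std::vector<double> top_k_descending(const std::vector<double>& data, std::size_t k) {
        // Min-heap: the smallest of the kept elements sits on top.
        std::priority_queue<double, std::vector<double>, std::greater<double>> heap;
        for (double x : data) {
            if (heap.size() < k) {
                heap.push(x);
            } else if (x > heap.top()) {
                heap.pop();      // evict the smallest kept element
                heap.push(x);
            }
        }
        std::vector<double> result(heap.size());
        for (std::size_t i = result.size(); i-- > 0; ) {
            result[i] = heap.top();  // popping yields ascending order,
            heap.pop();              // so fill the array from the back
        }
        return result;
    }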
I'd suggest starting with bucket sort and then using one of the simpler algorithms to sort each bucket. If any bucket is still too big, you can either apply bucket sort again or another O(n log n) method (such as mergesort or quicksort). Otherwise, selection sort (or better, insertion sort) will do just fine.
Just for comparison: selection sort, insertion sort, and quicksort are O(n²); mergesort is O(n log n); bucket sort is O(n·k), where k is the number of buckets. Choose k < log(n) and you'll get better performance than the alternatives.
Note: quicksort's worst case is O(n²), but in practice it is much faster.
Update: O(n·k) is the average performance for bucket sort, not the worst case, so the same note applies to it as well.
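A rough sketch of the bucket pass for doubles, assuming C++ (the bucket count parameter and the min/max scan used to map values onto buckets are my choices; the answer above doesn't pin them down):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Distributes values into k buckets by linear interpolation over
    // [min, max], sorts each bucket, and concatenates the results.
    void bucket_sort(std::vector<double>& a, std::size_t k) {
        if (a.size() < 2 || k == 0) return;
        auto [lo, hi] = std::minmax_element(a.begin(), a.end());
        double min = *lo, max = *hi;
        if (min == max) return;  // all values equal, nothing to do
        std::vector<std::vector<double>> buckets(k);
        for (double x : a) {
            auto idx = static_cast<std::size_t>((x - min) / (max - min) * (k - 1));
            buckets[idx].push_back(x);
        }
        std::size_t out = 0;
        for (auto& b : buckets) {
            std::sort(b.begin(), b.end());  // stand-in for the per-bucket sort
            for (double x : b) a[out++] = x;
        }
    }

Note this sorts the whole array ascending; for the question you'd still read off the last 20,000 elements in reverse.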
If you use a bubble sort variant that moves the smaller number of each adjacent pair toward the end of the array, then after the 20,000th pass the 20,000 smallest numbers will be at the end of the array in descending order. (Mirror the comparison to gather the 20,000 largest at the front instead, which is what you actually want; see the sketch below.)
For example, with the array 3 7 2 5 1 4 8:
Iteration 1: 7 3 5 2 4 8 1
Iteration 2: 7 5 3 4 8 2 1
Iteration 3: 7 5 4 8 3 2 1
After the 3rd iteration, the 3 smallest elements are at the end in descending order.
I recommend this because the complexity depends on the number of elements you want: if you only need a small number of them, the program finishes quickly. The complexity is O(k·n), where k is the number of elements you want.
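A sketch of the mirrored version, assuming C++; after k passes it leaves the k largest values at the front in descending order, matching the question:

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Each pass bubbles the next-largest remaining value to the front.
    // O(k * n) comparisons overall; a[0..k-1] ends up sorted descending.
    void bubble_top_k(std::vector<double>& a, std::size_t k) {
        for (std::size_t pass = 0; pass < k && pass < a.size(); ++pass) {
            for (std::size_t i = a.size() - 1; i > pass; --i) {
                if (a[i] > a[i - 1]) {
                    std::swap(a[i], a[i - 1]);  // move the larger value left
                }
            }
        }
    }

With n = 100,000 and k = 20,000 that is still about 2·10⁹ comparisons, so the heap and partial-quicksort approaches above should beat it comfortably.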
You can get the first K sorted elements with a modified quicksort. The key is to realise that, once you've reordered your list around the pivot, you can forget about sorting the right-hand side if the pivot's final position is ≥ K.
In short, just replace the "right-hand" recursive call to quicksort() with
if (pivot < k) quicksort(...)
Alternatively, you could follow the standard heapsort algorithm, but stop after pulling K elements from the heap.
Both of these approaches take O(N + K log N) time and can be done in place, with no more than logarithmic extra space for the recursion or heap bookkeeping.
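A sketch of the modified quicksort, assuming C++ and a Lomuto partition; I've flipped the comparison so "first K" means the K largest, as the question asks:

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Partition for descending order: values larger than the pivot move left.
    static std::size_t partition(std::vector<double>& a, std::size_t lo, std::size_t hi) {
        double pivot = a[hi];
        std::size_t p = lo;
        for (std::size_t i = lo; i < hi; ++i) {
            if (a[i] > pivot) std::swap(a[i], a[p++]);
        }
        std::swap(a[p], a[hi]);
        return p;
    }

    // Sorts a[lo..hi] descending, but recurses right only while that side
    // can still overlap the first k positions.
    void quicksort_top_k(std::vector<double>& a, std::size_t lo, std::size_t hi, std::size_t k) {
        if (lo >= hi) return;
        std::size_t p = partition(a, lo, hi);
        if (p > lo) quicksort_top_k(a, lo, p - 1, k);
        if (p + 1 < k && p < hi) quicksort_top_k(a, p + 1, hi, k);  // skip once pivot >= k
    }

Called as quicksort_top_k(a, 0, a.size() - 1, 20000), it leaves a[0..19999] holding the largest 20,000 values in descending order.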
You can improve efficiency by switching to quicksort, or use mergesort, which will do this in O(n log n) time. Measure both running times and see which is suitable for your scenario.
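If you do want to measure them, a minimal timing harness might look like this, assuming C++ (the random input and the two candidates compared are my choices for illustration):

    #include <algorithm>
    #include <chrono>
    #include <functional>
    #include <iostream>
    #include <random>
    #include <vector>

    // Runs a sorting routine on a fresh copy of the input, reports milliseconds.
    template <typename Sorter>
    double time_ms(std::vector<double> data, Sorter sorter) {
        auto start = std::chrono::steady_clock::now();
        sorter(data);
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(stop - start).count();
    }

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        std::vector<double> a(100000);
        for (double& x : a) x = dist(rng);

        std::cout << "full std::sort:    "
                  << time_ms(a, [](std::vector<double>& v) {
                         std::sort(v.begin(), v.end(), std::greater<double>());
                     }) << " ms\n";
        std::cout << "std::partial_sort: "
                  << time_ms(a, [](std::vector<double>& v) {
                         std::partial_sort(v.begin(), v.begin() + 20000, v.end(),
                                           std::greater<double>());
                     }) << " ms\n";
    }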