Radix sort is better for large inputs, especially when you know the range of the numbers in advance.
Calculation fix:
Radix sort is O(k*N), where k is the number of digits in the largest number. (More precisely it is about d*k*N, where d is the digit base, i.e. the number of buckets used: alphabet = 26, decimal = 10, binary = 2.)
Maxint = 4,294,967,295 (2^32 - 1)
32 bits: k = 32 / log2(d)
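As a quick sanity check on that formula, the digit count k for a b-bit value in base d can be computed like this (a small helper of my own, not part of the original calculation):

```python
import math

def digit_count(bits, d):
    """Number of base-d digits needed to represent a `bits`-bit value: ceil(bits / log2(d))."""
    return math.ceil(bits / math.log2(d))

print(digit_count(32, 10))  # 10 decimal digits for a 32-bit number
print(digit_count(32, 2))   # 32 binary digits
```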
Base 10 Radix:
d*k*n = 10*10*n < n*log(n) → 100 < log(n) → n > 2^100
Base 2 Radix:
d*k*n = 2*32*n < n*log(n) → 64 < log(n) → n > 2^64
So for 32-bit numbers, base-2 radix (d*k*n) only beats n*log(n) once you have more than 2^64 numbers.
But if you know the range will be up to 1024, for example, and not Maxint:
MaxNumber = 1024
Base 10 Radix:
d*k*n = 10*4*n < n*log(n) → 40 < log(n) → n > 2^40
Base 2 Radix:
d*k*n = 2*10*n < n*log(n) → 20 < log(n) → n > 2^20
So for numbers up to 1024, d*k*n is better than n*log(n) once you have more than 2^20 numbers.
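To make the d*k*n cost concrete, here is a minimal LSD radix sort sketch with a configurable base (the function and its name are my own illustration, not from the calculation above): it makes k passes, and each pass distributes all n elements across d buckets.

```python
def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers, parameterized by base (d)."""
    if not nums:
        return []
    nums = list(nums)
    max_val = max(nums)
    exp = 1
    # One stable distribution pass per digit: k passes total.
    while max_val // exp > 0:
        buckets = [[] for _ in range(base)]       # d buckets
        for x in nums:
            buckets[(x // exp) % base].append(x)  # O(n) work per pass
        nums = [x for bucket in buckets for x in bucket]
        exp *= base
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```

Raising the base d shrinks the number of passes k but makes each pass touch more buckets, which is exactly the d-vs-k trade-off in the arithmetic above.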
Because big-O notation discards multiplicative constants on the running time, and ignores efficiency for low input sizes, it does not always reveal the fastest algorithm in practice or for practically-sized data sets. But the approach is still very effective for comparing the scalability of various algorithms as input sizes become large.