Interesting question. In general, feature representation and preprocessing is a topic that isn't really covered in the literature. Most ML papers/books start their analysis after the representation of the problem has been established, and I haven't really seen cases where they revisit or question that very first step. Usually, after some initial studies, a consensus forms on what works best.
In your example you are basically doing a quantisation of a continuous variable. I don't think that having so many levels will make any difference, but having something like 1-20, 21-40, ..., 81-100 (5 levels) may help. Also, in similar techniques you typically set all features to 0 except the one that corresponds to your value (one-hot encoding). The reason is that you want your features to be as uncorrelated as possible. This kind of preprocessing may add value because it can help your classifier generalise. It's like saying, "hey, it doesn't really matter where exactly the value falls between 0 and 20, treat this whole range as one". On the other hand, depending on the problem, many levels (up to fully continuous) may actually give better performance.
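To make that concrete, here is a minimal sketch of the binning + one-hot idea, assuming scikit-learn is available; the feature values are made up purely for illustration:

```python
# Equal-width binning of a continuous feature into 5 levels,
# followed by one-hot encoding so the levels stay uncorrelated.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

# Hypothetical continuous feature in the range 1-100, one column
x = np.array([[3], [17], [42], [58], [77], [94]])

# 5 equal-width bins (roughly 1-20, 21-40, ..., 81-100), one-hot encoded
binner = KBinsDiscretizer(n_bins=5, encode="onehot-dense", strategy="uniform")
x_binned = binner.fit_transform(x)

print(x_binned)
# Each row has a single 1 in the column of its bin, e.g. 42 -> [0, 0, 1, 0, 0]
```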
My overall advice is to try both approaches, compare, and keep whichever works best for your problem. You could even make a plot of performance vs. number of quantisation levels and see if there is a trend.
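Something along these lines, as a rough sketch: the data here is synthetic and the logistic regression classifier is just a placeholder for whatever model you actually use.

```python
# Cross-validated accuracy as a function of the number of quantisation levels.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

# Synthetic placeholder data; substitute your own X, y
X, y = make_classification(n_samples=300, n_features=4, random_state=0)

def score_for_levels(X, y, n_bins):
    """Mean 5-fold CV accuracy when features are quantised into n_bins one-hot levels."""
    model = make_pipeline(
        KBinsDiscretizer(n_bins=n_bins, encode="onehot-dense", strategy="uniform"),
        LogisticRegression(max_iter=1000),
    )
    return cross_val_score(model, X, y, cv=5).mean()

levels = [2, 5, 10, 20, 50]
scores = [score_for_levels(X, y, n) for n in levels]

plt.plot(levels, scores, marker="o")
plt.xlabel("number of quantisation levels")
plt.ylabel("cross-validated accuracy")
plt.show()
```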