Note that random data is not a common case for sorting algorithms. It'd be interesting to see how the numbers change on partially-, mostly-, and fully-sorted data.
> What can we conclude from this discovery? Before we conclude anything, we should remind ourselves of its limitations. The tests run were on completely random data. Truly random data seldom occurs in real life.
it's true that it's not a common case, but it's probably the simplest objectively justifiable one; privileging any of the other cases requires a more elaborate justification. why 90% sorted rather than 80%, say? or why give 20% weight to the random-data case and 80% to the 90%-sorted case, rather than some other weighting?
and it does at least guarantee exploration of a good part of the algorithm's behavior space. i mean, bogosort is optimal on fully-sorted data, right? as long as you do the check for sortedness before the random shuffle instead of after it
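to make the "check before shuffle" point concrete, here's a minimal python sketch (function names are mine, not from any particular implementation) — sortedness is tested first, so already-sorted input returns after one O(n) pass with zero shuffles:

```python
import random

def is_sorted(a):
    """True iff a is in non-decreasing order."""
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def bogosort(a):
    # checking sortedness *before* shuffling means fully-sorted
    # input falls through immediately -- "optimal" on that case
    while not is_sorted(a):
        random.shuffle(a)
    return a
```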
it's important that your sort not explode on mostly-sorted or mostly-reverse-sorted data (myers's 'bog-standard quicksort' uses the last element as the pivot and consequently goes quadratic on sorted input), but beyond that, choosing the relative weights to assign to random input vs. 99% sorted vs. 90% sorted vs. 90% reverse sorted requires some application-specific justification
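a quick way to see that blowup is to count comparisons for a last-element-pivot quicksort on sorted vs. random input. this is a sketch using Lomuto partitioning (not myers's actual code): sorted input always picks the maximum as pivot, so every partition is maximally lopsided and the total hits n(n-1)/2:

```python
import random

def quicksort_comparisons(a):
    """Quicksort with the *last* element as pivot (Lomuto partition).
    Sorts a in place and returns the number of key comparisons."""
    comparisons = 0

    def sort(lo, hi):
        nonlocal comparisons
        if lo >= hi:
            return
        pivot = a[hi]          # last element as pivot
        i = lo
        for j in range(lo, hi):
            comparisons += 1
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        sort(lo, i - 1)
        sort(i + 1, hi)

    sort(0, len(a) - 1)
    return comparisons

n = 300
sorted_comps = quicksort_comparisons(list(range(n)))              # n(n-1)/2 = 44850
random_comps = quicksort_comparisons(random.sample(range(n), n))  # ~ 2n ln n, far fewer
```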
aside from the cases you mentioned, another interesting one is a large array of records that all have equal keys. median-of-three quicksort or random-pivot quicksort does fine on mostly- or fully-sorted data, but it still goes quadratic when all the keys are equal!
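to illustrate: a comparison count for a median-of-three quicksort (again a sketch with Lomuto partitioning, not any particular library's implementation) stays near n log n on sorted input but degrades to n(n-1)/2 on all-equal input, because equal keys all land on one side of the partition:

```python
def quicksort_mo3_comparisons(a):
    """Median-of-three quicksort (Lomuto partition); sorts a in place
    and returns the number of partitioning comparisons."""
    comparisons = 0

    def sort(lo, hi):
        nonlocal comparisons
        if lo >= hi:
            return
        # pick the median of a[lo], a[mid], a[hi] and move it to the end
        mid = (lo + hi) // 2
        m = sorted([lo, mid, hi], key=lambda k: a[k])[1]
        a[m], a[hi] = a[hi], a[m]
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):
            comparisons += 1
            if a[j] <= pivot:    # equal keys all satisfy this...
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]  # ...so all-equal input splits (n-1, 0)
        sort(lo, i - 1)
        sort(i + 1, hi)

    sort(0, len(a) - 1)
    return comparisons

n = 300
sorted_comps = quicksort_mo3_comparisons(list(range(n)))  # fine: ~ n log n
equal_comps = quicksort_mo3_comparisons([7] * n)          # quadratic: n(n-1)/2 = 44850
```

(the standard fix, for what it's worth, is three-way/"fat pivot" partitioning, which handles the all-equal case in linear time.)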