If you’re forced to use comparison sorting, you’re writing it by hand, and the data is near-random, then Quicksort isn’t that bad, but even then there are better options.
And it is hard to beat, which is why the standard libraries of languages like C++, Rust, Go, C#, Java (etc) use some quicksort variant for their unstable sorts (and some mergesort variant when they need stability).
All of the above aren’t forced to use comparison sorts; they just do. No other caveats required.
You mentioned it was a comparison sort, but not that you were only comparing it with other comparison sorts.
There’s also a huge caveat: libraries know less about your data than you do. Thus different default choices become optimal, but any decent library will give you many options, for very good reasons.
Quicksort is relatively terrible for partially ordered lists, like appended timestamps from multiple processes. It’s only OK at a fairly narrow range of problems, and the important bit isn’t the implementation but where those borders are.
> Quicksort is relatively terrible for partially ordered lists, like appended timestamps from multiple processes.
It's not, not really: pdqsort handles those organically.
I'm sorry if "best sort", "so much faster" and "very hard to beat" felt like bait. But "huge caveat", "relatively terrible" and being "forced to use comparative sorting" are not fair descriptions of quicksort or of why it's used and chosen by standard libraries.
Regardless, my point was that there's a lot to learn from quicksort. It's not “a shame” that it must be taught, and mergesort is not “best.”
Mergesort is best because it is the most elegant and beautiful sorting algorithm. Just merge lists so that they stay sorted.
Quicksort has all sorts of nonsense bouncing around and picking pivots. Eww. Terrible.
I will admit that I could have been clearer that I was evaluating algorithms based on beauty, but given that info I’m sure the conclusion is obvious.
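The whole idea really does fit in a few lines. A minimal sketch in C++ (`merge_sort` is my own name, and this copies freely rather than merging in place):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Split in half, sort each half, merge the sorted halves. That's it.
std::vector<int> merge_sort(std::vector<int> v) {
    if (v.size() <= 1) return v;
    auto mid = v.begin() + v.size() / 2;
    std::vector<int> left  = merge_sort({v.begin(), mid});
    std::vector<int> right = merge_sort({mid, v.end()});
    std::vector<int> out;
    out.reserve(v.size());
    std::merge(left.begin(), left.end(), right.begin(), right.end(),
               std::back_inserter(out));
    return out;
}
```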
Now build me a selection algorithm out of mergesort.
Just the other day I sped up a process by an order of magnitude because someone was sorting an entire list just to do weighted median selection, which is easy to build on top of std::nth_element. Top-k is another common need.
Quicksort is still worth knowing today partly because of this.
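For illustration, top-k on top of std::nth_element looks roughly like this (`smallest_k` is my own name, not a standard function):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Put the k smallest elements at the front in O(n) average time,
// then sort only those k survivors if their order matters.
std::vector<int> smallest_k(std::vector<int> v, std::size_t k) {
    std::nth_element(v.begin(), v.begin() + k, v.end());
    v.resize(k);
    std::sort(v.begin(), v.end());
    return v;
}
```

The point is that the full sort never happens: the n-k losers are partitioned away, never ordered.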
Or you could just have people study QuickSelect and gain the same benefit without spending time on a slower sorting method once cache effects and the like are taken into account.
pdqsort fails to detect many partially ordered lists.
So you might argue about the scale of “huge caveat”, but trying to discover information about a list takes computing cycles. Even the split between pdqsort and pdqsort_branchless exists due to that exact caveat; it’s fundamental to the nature of the problem. There are infinitely many ways data could have an inherent pattern that would speed up sorting it, and no way to algorithmically notice all such patterns in a way that universally speeds up sorting.
As to what there is to learn from Quicksort: I think it’s a poor introduction to algorithms, not because of its nature as an algorithm, but because early on its many pitfalls draw attention away from more useful topics. Later on, it’s simply not complex enough to be worth much attention. So sure, in an academic context it looks really appealing, yet when you dig into what the point of teaching algorithms is, it’s much harder to justify. It’s covered so frequently you’re not even going to see it in interviews.
radix sort
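For anyone who hasn't seen it: a minimal LSD radix sort on 32-bit unsigned keys, one stable counting pass per byte, no comparisons anywhere (`radix_sort` is my own name):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Four stable counting-sort passes, least significant byte first.
// Each pass is O(n); total work is O(n) with a fixed key width.
std::vector<std::uint32_t> radix_sort(std::vector<std::uint32_t> v) {
    std::vector<std::uint32_t> buf(v.size());
    for (int shift = 0; shift < 32; shift += 8) {
        std::array<std::size_t, 257> count{};          // count[b+1] = occurrences of byte b
        for (auto x : v) ++count[((x >> shift) & 0xFF) + 1];
        for (int i = 0; i < 256; ++i) count[i + 1] += count[i];  // prefix sums -> offsets
        for (auto x : v) buf[count[(x >> shift) & 0xFF]++] = x;  // stable scatter
        v.swap(buf);
    }
    return v;
}
```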