Most of the paper is a good joke, but Slowsort takes it a step further. It is just so inefficient I can’t help but laugh out loud each time I revisit the paper and re-understand the algorithm.
It ticks all the nice boxes: it's easy to prove correct, easy to implement, easy to parallelize, stable, wastes no time on unnecessary steps, and can produce output while it sorts the rest.
At the same time, it’s Mergesort with a ridiculous merging step. But that means one can simply add a parameter to tune between Mergesort and Slowsort – if(rand(1,10) > 8) then slow else merge – so the code becomes tunable in terms of speed. Someone needs faster software? OK, I’ve got just the parameter for you :-D
If you haven’t implemented it yourself, give it a shot. It’s a transformative experience to see your computer struggle sorting an already sorted 100-entry array ;-)
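In case anyone wants to try, here's a minimal Python sketch of the tunable version. The function name and the probability parameter `p` are my own invention, not from the paper — `p=0.0` is plain Mergesort, `p=1.0` is the full Slowsort experience:

```python
import random

def merge(left, right):
    """Standard mergesort merge of two sorted lists."""
    out, li, ri = [], 0, 0
    while li < len(left) and ri < len(right):
        if left[li] <= right[ri]:
            out.append(left[li]); li += 1
        else:
            out.append(right[ri]); ri += 1
    return out + left[li:] + right[ri:]

def tunable_sort(a, i, j, p=0.0, rng=random):
    """Sort a[i..j] in place. With probability p the 'merge' step is
    Slowsort's pessimal one; otherwise it's an ordinary mergesort merge."""
    if i >= j:
        return
    m = (i + j) // 2
    tunable_sort(a, i, m, p, rng)
    tunable_sort(a, m + 1, j, p, rng)
    if rng.random() < p:
        # Slowsort's "merge": both halves are now sorted, so a[m] and
        # a[j] are their maxima; swap the larger one to the very end...
        if a[m] > a[j]:
            a[m], a[j] = a[j], a[m]
        # ...then recursively sort everything before it, from scratch.
        tunable_sort(a, i, j - 1, p, rng)
    else:
        # Ordinary mergesort merge of the two sorted halves.
        a[i:j + 1] = merge(a[i:m + 1], a[m + 1:j + 1])
```

Even a small `p` like 0.2 buys you noticeable slowdowns, since every pessimal step re-sorts its whole prefix from scratch.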
I'd like to contribute an algorithm to this field of study. I call it, "Spend several months researching a novel algorithm when brute force would have been effective for all real-world problem sizes."
Didn't realize this was a joke. I was scratching my head, "There are engineering problems that intentionally aim to slow performance? Because if so all they have to do is just hire me..."
Some hash functions do this, e.g. bcrypt and scrypt. Theoretically, their output is no more secure than that of faster algorithms like SHA-256, but the deliberate slowness makes it less practical to brute-force lots of guesses.
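bcrypt and scrypt aren't in the Python standard library, but PBKDF2 (which is) illustrates the same key-stretching idea — iterate a fast hash many times so each password guess costs that many hashes:

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# One SHA-256 pass: cheap, so an attacker can test guesses very quickly.
fast = hashlib.sha256(salt + password).digest()

# PBKDF2 runs HMAC-SHA-256 100,000 times per guess; same 32-byte output,
# but brute-forcing now costs ~100,000x more work per candidate password.
slow = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
```

Same principle as the article's "pessimal" framing, except here the inefficiency is the entire point of the design.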
Hashcash (as found in Bitcoin) is another example, albeit more egregious.
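A toy sketch of the Hashcash idea (my own simplified version, not Bitcoin's actual block format): burn CPU until you find a nonce whose hash falls below a difficulty target, which is trivial to verify but expensive to produce:

```python
import hashlib
from itertools import count

def mine(data: bytes, difficulty_bits: int = 12) -> int:
    """Find a nonce such that sha256(data + nonce) has at least
    `difficulty_bits` leading zero bits. Expected ~2**difficulty_bits tries."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(data + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
```

Each extra difficulty bit doubles the expected work for the prover while verification stays one hash — wasted effort as the security mechanism itself.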
"QWERTY was designed to slow typists down" appears to be a somewhat disputed claim, at least going by Google results. There are a few articles [0, 1] based on a paper [2] saying that the design was more due to feedback from telegraph operators than anything to do with typewriter mechanics.
On the other hand, if you subscribe to the "reduced jams" view, then that arguably sped typing up overall, since typists had to stop and clear jams less frequently.