> Yeah, an incidentally-optimal-with-a-certain-architecture algorithm is great as long as you can count on the architecture remaining constant.
Choosing an algorithm based on the architecture it will run on and the data sets it will operate on is the right thing to do, period. That's engineering.
If you have a different architecture, you choose a different algorithm. You don't go looking for some mutant yeti super-algorithm that somehow runs well on all of them. This is just a symptom of computer science's obsession with generalization, which is rather counter-productive.
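To make that concrete, here's a minimal, hypothetical sketch (the cutoff of 64 is an assumption you'd have to measure on your own machine): on many current CPUs a plain linear scan beats binary search for small sorted arrays because it's branch-predictor- and cache-friendly, so the "right" algorithm depends on both the data and the hardware.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical sketch: pick the search algorithm based on the data set and
// the machine it runs on. The crossover point (64 here) is an assumption;
// on real hardware you would benchmark it, and it differs per CPU.
bool contains(const std::vector<int>& sorted, int key) {
    constexpr std::size_t kLinearCutoff = 64;  // measured per architecture
    if (sorted.size() <= kLinearCutoff) {
        // Small data: a predictable, cache-friendly linear scan often wins
        // over a branchy binary search.
        for (int v : sorted) {
            if (v == key) return true;
            if (v > key)  return false;
        }
        return false;
    }
    // Large data: O(log n) comparisons win despite the branch misses.
    return std::binary_search(sorted.begin(), sorted.end(), key);
}
```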
> Of course, a lot of algorithms are going to GPUs and how these structures relate to GPU processing is another thing I'd be curious about.
GPUs are a good example of how you need to craft an algorithm to fit the architecture, not the other way around. The way you can program GPUs today is different from what it was five or ten years ago.
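As a hedged illustration of that point, here's a sketch of a sum reduction the way you'd write it on current hardware, using warp shuffle intrinsics (`__shfl_down_sync`, available since CUDA 9); the older idiom had to stage every step through shared memory. Block size and launch parameters are assumptions, not recommendations.

```cuda
// Hedged sketch: block-level sum reduction built on warp shuffles.
// On older GPUs/toolkits the same problem called for a pure shared-memory
// tree reduction -- same problem, different algorithm, because the
// architecture and its programming model changed.
__inline__ __device__ float warpReduceSum(float val) {
    for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;
}

__global__ void reduceSum(const float* in, float* out, int n) {
    float sum = 0.0f;
    // Grid-stride loop: each thread accumulates a partial sum first.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x)
        sum += in[i];

    sum = warpReduceSum(sum);        // reduce within each warp
    __shared__ float warpSums[32];   // one slot per warp in the block
    int lane = threadIdx.x % 32;
    int warp = threadIdx.x / 32;
    if (lane == 0) warpSums[warp] = sum;
    __syncthreads();

    // The first warp reduces the per-warp partial sums.
    if (warp == 0) {
        sum = (lane < (blockDim.x + 31) / 32) ? warpSums[lane] : 0.0f;
        sum = warpReduceSum(sum);
        if (lane == 0) atomicAdd(out, sum);  // accumulate across blocks
    }
}
```

Calling it would look something like `reduceSum<<<blocks, 256>>>(d_in, d_out, n)` after zeroing `d_out`. The point isn't this particular kernel; it's that the shape of the algorithm follows the hardware, and it looked different before shuffle intrinsics existed.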