Hacker News

“The whole wolpertinger thing” is a metaphor and a literary device. This isn’t a technical manual.

I don’t know why you found this hard to read. The writing is clear and understandable. The dismissiveness of your comment, and the fact that its out-of-hand rejection is based on nothing objective, suggests that you’re not the target audience — especially since, in contrast to your comment, the thoughts in this article are researched, decently sophisticated, and presented in a discursive manner.



I found the article to be meandering, unclear, messy, and not based on an active appraisal of the way progress in AI work already is judged. I don’t know why you claim my comment has “dismissiveness” — it does not. Pointing out that the failures of the writing or arguments make it hard to read, and that it lacks any useful conclusion or call to action, is not dismissive at all. On the contrary, I gave up a lot of time to interact with the essay by reading it and reflecting on it. It’s just not a good essay.


not based on an active appraisal of the way progress in AI work already is judged.

I don't think that's accurate at all. From the article:

"Adjacent to engineering is the development of new technical methods. This is what most AI people most enjoy. It’s particularly satisfying when you can show that your new system architecture does Z% better than the competition."

The ImageNet competition results from 2012 were the major turning point that exploded AI research, specifically because computer vision became able to beat human-level classification. The same happened earlier with Chess and more recently with Go.

Goodfellow's work with GANs and Pearl's work on Bayesian causality are the only major exceptions I see right now that are not based on competitive improvement over a baseline. No other major scientific field approaches it this way.


I disagree very strongly. Many fields over long periods of the history of science have oriented themselves around benchmark problems.

Some things which come to mind are:

- C. elegans for connectomics

- Drosophila experiments for a wide range of biology benchmarks

- even previously in computer vision there was the so-called "chair challenge" [0], and dozens and dozens of canonical face detection, object detection, and segmentation data sets used frequently as benchmarks across many papers

- in Bayesian statistics there are various canonical data sets for evaluating theoretical improvements in hierarchical models and general regression

- in finance there is CRSP and the Kenneth French Data Library

It's very common across many fields to orient around benchmark problems and data sets, and it has been for a really long time. This is not at all new with ImageNet, not even just in the tiny world of computer vision.

[0]: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.226...



