
"Every time I fire a linguist, the performance of the speech recognizer goes up."

> It takes domain experts to judge whether or not a model is correct, to identify the known and unknown unknowns and limitations of these models.

Arguably true, but I still claim a domain expert's test performance is below a modeling expert's. No knowledge/preconceptions: try it all, let evaluation decide. Expert domain knowledge/preconceptions: "This can't possibly work!"

Domain experts should focus on decision science (what policies to build on top of model output). Data scientists should focus on providing model output that enables the most accurate/informed decisions downstream.



I'll be blunt: every time I saw people try to model something they didn't understand, it boiled down to throwing stuff at the wall and seeing what sticks. In the very best case, whatever stuck solved one special case without anyone realizing it was a special case.

In the worst case, the stuff that stuck was sheer luck, could have been (and quite often was) identified beforehand by domain experts, no lessons were drawn from the exercise, and the resulting models were ignored by everyone except the modellers.



