
There are even strong inapproximability results for some problems: set cover, for example, can't be approximated to better than a ln n factor in polynomial time unless P = NP.
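
For the flip side, here's a minimal sketch (the toy instance is my own made-up example): the classic greedy heuristic repeatedly picks whichever set covers the most uncovered elements, and it achieves roughly a ln n approximation factor, which matches those hardness results up to lower-order terms.

    def greedy_set_cover(universe, subsets):
        """Repeatedly pick the subset covering the most uncovered elements."""
        uncovered = set(universe)
        cover = []
        while uncovered:
            # Greedy choice: maximize newly covered elements.
            best = max(subsets, key=lambda s: len(s & uncovered))
            if not (best & uncovered):
                raise ValueError("subsets do not cover the universe")
            cover.append(best)
            uncovered -= best
        return cover

    universe = range(1, 11)
    subsets = [{1, 2, 3, 8, 9, 10}, {1, 2, 3, 4, 5}, {4, 5, 7},
               {5, 6, 7}, {6, 7, 8, 9, 10}]
    print(greedy_set_cover(universe, subsets))

So the algorithmic story is tight: greedy gets ln n, and doing meaningfully better is NP-hard.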

"Neural networks are universal approximators" is a fairly meaningless sound bite. It just means that given enough parameters and/or the right activation function, a neural network, which is itself a function, can approximate other functions. But "enough" and "right" are doing a lot of work here, and pragmatically the answer to "how approximate?" can be "not very".


