Hacker News

I suppose the idea in NLP has been for first RNNs and LSTMs, and now Transformers and friends, to act as a kind of memory. Mostly those memories have been pretty black-box, however, and it would be interesting if ideas from data structures and the like could be useful.
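For concreteness, the "memory" in a plain RNN is just a fixed-size hidden vector that gets overwritten at every step; nothing about its contents is interpretable the way a data structure is. A toy numpy sketch (the weights, sizes, and inputs here are arbitrary illustrations, not a trained model):

```python
import numpy as np

def rnn_step(x, h, W_xh, W_hh):
    # Elman-style update: the new hidden state mixes the current input
    # with the previous state. h is the network's only memory of the past.
    return np.tanh(x @ W_xh + h @ W_hh)

rng = np.random.default_rng(0)
W_xh = 0.5 * rng.normal(size=(3, 4))   # input -> hidden
W_hh = 0.5 * rng.normal(size=(4, 4))   # hidden -> hidden (the recurrence)

h = np.zeros(4)                        # memory starts empty
for x in rng.normal(size=(5, 3)):      # feed a 5-step sequence
    h = rnn_step(x, h, W_xh, W_hh)

# After the loop, h is an opaque 4-dimensional summary of the whole
# sequence -- the "black box" memory being discussed above.
print(h.shape)
```

LSTMs add gating to this update, and Transformers replace the recurrent state with attention over past tokens, but in all three cases the memory is a learned dense vector rather than an inspectable structure.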


Ideas from "data structures" will keep you stuck in the problem the GP is talking about. The problem being lamented isn't that we lack clever tricks for representing the training a machine has undergone (or various tricks for giving it 'memory'); it's that for certain domains to produce meaningful output, there needs to be a thoughtfully implemented model that codes what it is to be (for example) a DM. Such a model is, of course, ridiculously complex, so we won't see anyone coding it by hand anytime soon. We'd sooner have ways to copy the relevant bits of a human brain.



