
"Stochastic parrots" is a disparaging term coined by SJW propaganda. As if the brain is not stochastic, or we don't parrot from cultural sources. Language models have been accused of bias and lack of explainability, but humans are biased too and can't really explain how we take decisions.

Overall, the term implies "limited to the intelligence of a parrot," which is false: models can solve math and coding problems, generate passable art, translate and converse in hundreds of languages, and beat us at board and card games. When was a parrot able to do that?



The math the models are doing is closer to rote rule chaining than to calculation. The errors they make look like kludged-together lookups. I wonder if you could sequence the training of a model to reinforce calculation over lookups, encouraging the development of an accurate and advanced mathematics module.
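To make that concrete, here is a rough sketch of what such training data could look like; it is purely hypothetical (names like addition_trace and make_example are made up, not any real library's API). The idea is that each example spells out the calculation step by step, so producing the target requires following the procedure rather than recalling a memorized answer.

    import random

    def addition_trace(a: int, b: int) -> str:
        # Spell out column-by-column addition so producing the target means
        # following the procedure, not recalling a memorized (a, b) -> sum pair.
        steps, carry = [], 0
        while a or b or carry:
            da, db = a % 10, b % 10
            s = da + db + carry
            steps.append(f"{da} + {db} + carry {carry} = {s}, write {s % 10}")
            carry, a, b = s // 10, a // 10, b // 10
        return "\n".join(steps)

    def make_example() -> dict:
        a, b = random.randint(1, 10**6), random.randint(1, 10**6)
        return {
            "prompt": f"Compute {a} + {b} step by step.",
            "target": addition_trace(a, b) + f"\nAnswer: {a + b}",
        }

    print(make_example()["target"])

Whether weighting a curriculum toward traces like these would actually suppress lookup behavior is an open question; this only illustrates the kind of data you might sequence in.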

Neural networks can do math, but a lookup-and-memorized-value model is structurally very different from a calculator model; the difference between them is a matter of weights for any given architecture. Tokenizing properly for math would help, but bit-level tokenizing would be best, because it would let multimodal domains integrate more readily (i.e., audio/video/text models could share learned features more easily than with parsed or domain-specific tokens). It's a great time to be alive.
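For illustration, a toy comparison of the tokenization point (merged_tokens here is a crude stand-in for a BPE-style vocabulary, not any real tokenizer): merged tokens turn whole numbers into opaque symbols, while byte-level tokens expose the digit structure and apply equally to any modality serialized as bytes.

    import re

    def merged_tokens(text: str) -> list:
        # Crude stand-in for a BPE-style vocabulary that merges frequent chunks:
        # whole numbers collapse into single opaque tokens.
        return re.findall(r"\d+|\S", text)

    def byte_tokens(text: str) -> list:
        # Byte-level tokenization: every number decomposes into its digits, and
        # the same scheme applies to anything serialized as bytes.
        return [bytes([b]) for b in text.encode("utf-8")]

    print(merged_tokens("123+456"))  # ['123', '+', '456']
    print(byte_tokens("123+456"))    # [b'1', b'2', b'3', b'+', b'4', b'5', b'6']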



