Hacker News

"Stochastic parrots" -- have you seen, e.g., the examples in the PaLM paper of how it does on "chained inference" tasks? I don't see how you can classify that as mere parroting.


"Stochastic parrots" is a disparaging term coined as SJW propaganda. As if the brain is not stochastic, or we don't parrot from cultural sources. Language models have been accused of bias and lack of explainability, but humans are biased too and can't really explain how we make decisions.

Overall, this term says "limited to the intelligence of a parrot," which is false: models can solve math and coding problems, generate passable art, translate and converse in hundreds of languages, and beat us at most board and card games. When was a parrot able to do that?


The math the models are doing is closer to rote rule chaining than to calculation. The errors they make look like kludged-together lookups. I wonder if you could sequence the training of a model so that you could reinforce calculation over lookup, to encourage the development of an accurate and advanced mathematics module.

Neural networks can do math, but a lookup-and-memorized-values model is structurally very different from a calculator model; for any given architecture, the difference between them is just a matter of weights. Tokenizing properly for math would help, but byte- or bit-level tokenizing would be best, because it would allow multimodal domains to integrate more readily (i.e., audio/video/text models could share learned features more easily than if you were using parsed or domain-specific tokens). It's a great time to be alive.
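A toy sketch of that byte-level idea (my illustration, not something from the thread): serializing any modality to raw bytes gives every domain the same 256-symbol vocabulary, so embeddings learned over those symbols can in principle be shared across text, audio, and video streams.

```python
# Toy illustration: byte-level "tokens" for two different modalities.
# Any data serialized to bytes maps into one shared 256-symbol vocabulary,
# so no domain-specific tokenizer (BPE merges, phonemes, patches) is needed.

def byte_tokens(data: bytes) -> list[int]:
    # Each byte is one token id in the range 0..255.
    return list(data)

text_ids = byte_tokens("2 + 2 = 4".encode("utf-8"))
audio_ids = byte_tokens(bytes([0x52, 0x49, 0x46, 0x46]))  # "RIFF": a WAV file's magic bytes

# Both streams draw on the identical vocabulary of size 256.
shared_vocab_size = 256
```

The cost, of course, is much longer sequences than with learned subword tokens, which is part of why byte-level models trade compute for this kind of generality.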


> it does on "chained inference" tasks

To me, it is more proof of "stochastic parrot" behavior: the model has seen most of the math information available on the internet, and even with significant computational power it can solve only 58% of elementary-school-level questions, probably those with clear analogues in the training data, and it can't generalize beyond them.


In limited, often zero- or one-shot probing of the model, yes. Do multiple generations and recursive passes over the output, having the model select and iterate on a target, and the utility goes way up. You can coax great output from small models, even the 125M-parameter GPT-Neo.

The process kinda goes like this:

Think of ten answers to this question: blah blah blah

From these ten answers, which are the best 3?

Of the three answers, which is the best?

Revise and edit the best answer to be simpler or more understandable.
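The steps above can be sketched as a generate/select/refine loop. This is a minimal sketch of the control flow only: `generate` is a stand-in for whatever completion call you use (e.g. a local GPT-Neo pipeline) and is stubbed here so the skeleton runs on its own.

```python
# Sketch of the generate -> shortlist -> pick -> revise loop described above.

def generate(prompt: str) -> str:
    # Stand-in for a real model call, e.g.:
    #   transformers.pipeline("text-generation", model="EleutherAI/gpt-neo-125M")(prompt)
    # Stubbed so the control flow is runnable without a model.
    return f"answer to: {prompt[:40]}"

def best_of_n(question: str, n: int = 10, shortlist: int = 3) -> str:
    # Step 1: think of n answers to the question.
    candidates = [generate(f"Q: {question}\nA:") for _ in range(n)]
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    # Step 2: have the model pick the best `shortlist` answers.
    top = generate(f"From these {n} answers, which are the best {shortlist}?\n{numbered}")
    # Step 3: have the model pick the single best of those.
    best = generate(f"Of these {shortlist} answers, which is the best?\n{top}")
    # Step 4: revise and edit the winner for simplicity.
    return generate(f"Revise and edit this answer to be simpler:\n{best}")
```

With a real model behind `generate`, steps 2 and 3 would also need light parsing of the selection output; that plumbing is omitted here.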

Prompt engineering is a nascent field, and we haven't seen nuanced or sophisticated use of the tool yet. Most of the metrics reported in papers are barely better than a naive Turing test. It doesn't take much introspection to see that even humans endlessly iterate and revise their output, and that the best extemporaneous speech doesn't match well-curated and edited material. It shouldn't surprise us that similar editing and revision processes benefit transformer output.



