Hacker News

> AI solving international math olympiad problems is not intelligence

But couldn't it be overfitting? LLMs are very good at deriving patterns, many of which humans simply can't tell apart from noise. With a few billion parameters and whatever black magic goes on inside chain-of-thought, it's not unreasonable to think that even a small amount of fine-tuning, combined with many epochs of training, would be enough to conjure a compressed representation of that problem type.

Without an extensive audit, I'd be skeptical of OpenAI's claims, especially given how o1 is often wrong on much more trivial compositional questions.

What defines intelligence is generalization: the ability to learn new tasks from few examples. While LLMs have made significant progress here, they are still orders of magnitude below a child, and arguably below many animals.



I suspect that's actually what's going on: LLMs find patterns that apply to the question at hand and figure out how to combine them correctly. However, I'd also say this is how the vast majority of humans solve math problems. What I've seen from o1/R1 is that they are more capable at this process than the vast majority of humans.

We can say that they're not "intelligent" because they can't solve problems they can't map to anything in their training data at all, but that would also put 99.9% of humanity in the unintelligent bucket.


A trained LLM can learn from a few examples.

A human takes 14+ years to become intelligent, and also requires extensive training.


Six-year-olds can fairly reliably count the number of occurrences of a letter in a word, at least in the school system I attended. LLMs will never be able to do this reliably due to their inherent limitations (being statistical next-token predictors that operate on subword tokens, not characters).
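The tokenization point can be made concrete with a short Python sketch. The character count itself is trivial; the subword split shown is hypothetical and only for illustration, not any real model's actual tokenization:

```python
def count_letter(word: str, letter: str) -> int:
    """Exact character-level count, the way a six-year-old would do it."""
    return word.count(letter)

print(count_letter("strawberry", "r"))  # 3

# A BPE-style tokenizer chunks the input into subword tokens before the
# model ever sees it, so the model receives something closer to:
toy_tokens = ["straw", "berry"]  # hypothetical subword split
# The letter "r" is buried inside opaque token IDs; the model must
# memorize per-token letter statistics rather than count characters.
```

This is why a task that is a one-liner in code can still trip up a next-token predictor: the characters it is asked to count are not directly visible in its input representation.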


That is calculation, not learning.


The human, or the LLM?



