I'm continually surprised that some people get good results out of GPT models. They consistently fall short on my personal benchmarks.

Maybe GPT needs a different approach to prompting? (as compared to, e.g., Claude, Gemini, or Kimi)

They are all GPTs, as in generative pre-trained transformers.

That may or may not be true, but in the context of this thread, I'm referring to OpenAI's GPT brand of models.
