
I find more hallucination when the model reflects the question back at the start of its answer, the way children are taught to do.

If I am not careful and phrase the question in a way that assumes X, the LLM often takes X to be true. ChatGPT has gotten better at correcting this with its web searches.

I get better results with Claude when I ask for answers that include links to the relevant authoritative sources, but it sometimes still makes up things that are not in the source material.
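The two tactics above (not baking an assumption into the question, and asking for links to sources) can be sketched as a prompt-wrapping helper. This is a hypothetical illustration of the habit, not any real API; the function name and prompt wording are made up:

```python
def neutral_sourced_prompt(question: str) -> str:
    """Wrap a question so the model is asked to check the premise first
    and to back each claim with a link, per the tactics described above.
    The exact wording here is illustrative, not a tested recipe."""
    return (
        "First check whether the premise of this question is actually true. "
        "Then answer it, including links to authoritative sources for each claim.\n\n"
        f"Question: {question}"
    )

# A leading question that presupposes its own answer ("useless"):
leading = "Why does Python's GIL make threads useless?"
print(neutral_sourced_prompt(leading))
```

The point is only that the assumption gets surfaced for the model to verify, rather than silently accepted.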




