You may notice that we try to suggest follow-up questions or question improvements where we think better context in will lead to a better result out.

I'm curious what happens if you modify the question to be more explicit.

I've seen that PMs and data-trained folks tend to be very articulate in asking for exactly what they want, and that tends to lead to significantly better LLM responses.
