Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

LLMs can't distinguish between context and prompt. There will always be prompt injections hiding, lurking somewhere.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: