Hacker News

I don’t buy that LLMs won’t make off-by-one or memory safety errors, or that they won’t introduce undefined behavior. Not only can they not reason about such issues, but imagine how much buggy code they’re trained on!



