
We don't generate chains of tokens with a constant error rate, so errors don't pile up. Don't ask me what we do instead, for I have no clue — but whatever it is, it works better than next-token prediction.
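The compounding-error argument being pushed back on here can be made concrete. Assuming a constant, independent per-token error rate eps (an idealized assumption, not something the comment specifies), the probability that a chain of n tokens contains no error decays exponentially as (1 − eps)^n:

```python
def p_chain_correct(eps: float, n: int) -> float:
    """Probability an n-token chain is entirely error-free, under the
    idealized assumption of a constant, independent per-token error
    rate eps."""
    return (1 - eps) ** n

# Even a 1% per-token error rate makes long error-free chains unlikely.
for n in (10, 100, 1000):
    print(n, p_chain_correct(0.01, n))
```

At eps = 0.01, a 100-token chain is error-free only about 37% of the time, and a 1000-token chain almost never — which is the decay the commenter argues human cognition somehow avoids.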

Hey, maybe humans aren't just like LLMs after all.


