Hacker News

Actually, it's kind of the same. LLMs don't have a system for forming new memories. They're like the guy from Memento: working memory comes from the context, long-term memory comes from the training data, but nothing in the context ever becomes a new long-term memory.

(Not addressed to parent comment, but the inevitable others: Yes, this is an analogy, I don't need to hear another halfwit lecture on how LLMs don't really think or have memories. Thank you.)

Context memory arguably is new memory. But because we abused the metaphor of "learning" for trained model weights, rather than something more like the shaping of inborn instinct, we have no fitting metaphor for what happens during the "lifetime" of an interaction with a model via its context window: the formation of skills and memories.
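A toy sketch of the split being described (purely illustrative, not any real LLM API; the `ToyLLM` class and its behavior are invented for the analogy): the weights are fixed after training, and the per-session context list is the only mutable "memory" the model ever gets.

```python
# Toy illustration: frozen weights vs. ephemeral context "memory".
# Nothing here is a real LLM API; it only mirrors the analogy above.

class ToyLLM:
    def __init__(self, weights):
        self.weights = weights  # fixed at training time; never updated here

    def reply(self, context):
        # The response depends on the frozen weights plus whatever
        # currently sits in the context window, and nothing else.
        remembered = [m for m in context if m.startswith("user:")]
        return f"seen {len(remembered)} user messages this session"

model = ToyLLM(weights="frozen-at-training-time")

session = []                        # the context window: the only new "memory"
session.append("user: my name is Ada")
print(model.reply(session))         # the fact is usable within this session

fresh_session = []                  # a new session starts with an empty context
print(model.reply(fresh_session))   # the earlier fact is gone, Memento-style
```

Appending to `session` changes the model's behavior for the rest of that interaction, which is why context feels like memory; but since the weights never change, starting `fresh_session` wipes everything, which is why it also doesn't.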
