
The bot always looks it up, in a way.




I mean, so do I, if you think about it like that. I just have a much lower chance of successfully retrieving the correct information.

The comparison I was making with respect to accuracy was that the bot is much more likely to accurately answer fact-based queries, and much less likely to succeed at tasks that require actual 'thinking', especially when the task is not particularly common in the training set, such as writing a memory allocator. I can write and debug a simple allocator in half an hour, no worries. I'd be surprised if any of the current LLMs could.
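
For concreteness, here's roughly the scale of task I have in mind: a fixed-size bump allocator. This is just a minimal sketch of my own (the arena size and function names are made up for illustration), not from any particular codebase:

    /* Minimal bump allocator: hands out chunks from a fixed arena.
       No per-allocation free; the whole arena is reset at once. */
    #include <stddef.h>
    #include <stdint.h>

    #define ARENA_SIZE (1 << 20)   /* 1 MiB backing buffer */

    static uint8_t arena[ARENA_SIZE];
    static size_t  offset = 0;

    /* Return `size` bytes aligned to `align` (a power of two),
       or NULL if the arena is exhausted. */
    void *bump_alloc(size_t size, size_t align) {
        size_t aligned = (offset + align - 1) & ~(align - 1);
        if (aligned + size > ARENA_SIZE)
            return NULL;
        offset = aligned + size;
        return &arena[aligned];
    }

    void bump_reset(void) { offset = 0; }

A proper allocator with a free list and coalescing is more work than this, but it's the same ballpark of effort, and it's exactly the kind of invariant-tracking where I'd expect an LLM to slip.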


I agree. I was just making a tangential point with a bit of exaggeration; sorry if it seemed to distract from your main point.

If you look up the answer to a factual question in a quality source, you'll be more accurate than the bot, which looked at many sources. That's all I meant.



