I mean, so do I, if you think about it like that. I just have a much lower chance of successfully retrieving the correct information.
The comparison I was making with respect to accuracy was that the bot is much more likely to accurately answer fact-based queries, and much less likely to succeed at any task that requires actual 'thinking', especially when that task is not well represented in the training set, such as writing a memory allocator. I can write and debug a simple allocator in half an hour, no worries. I'd be surprised if any of the current LLMs could.
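For a sense of scale, a 'simple allocator' here can be as small as a fixed-buffer bump allocator: allocation just advances an offset into a preallocated arena, and there is no per-allocation free, only a full reset. A minimal sketch (buffer size and names are illustrative, not from any particular codebase):

```c
#include <stddef.h>

/* Fixed arena carved up by a bump allocator. */
static unsigned char arena[1024];
static size_t arena_used = 0;

/* Round n up to the platform's strictest alignment so every
 * returned pointer is safe to use for any object type. */
static size_t align_up(size_t n) {
    const size_t a = _Alignof(max_align_t);
    return (n + a - 1) & ~(a - 1);
}

/* Returns a pointer to n usable bytes, or NULL if the arena is full. */
void *bump_alloc(size_t n) {
    size_t need = align_up(n);
    if (need > sizeof(arena) - arena_used)
        return NULL;
    void *p = arena + arena_used;
    arena_used += need;
    return p;
}

/* Frees everything at once by rewinding the offset. */
void bump_reset(void) { arena_used = 0; }
```

That's the half-hour version; a general-purpose allocator with free lists, splitting, and coalescing is where the actual 'thinking' (and debugging) starts.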