> more accurate in many cases

It's laughable that LLMs can be considered more accurate than human operators at the macro level. Sure, if I ask a search bot the date Notre Dame was built, it'll get it right more often than me, but if I ask it to write even a simple heap memory allocator, it's going to vomit all over itself.
> Nobody [...] will ever care if the software was written by people or a bot, as long as it works
Yeah... wake me up when LLMs can produce even nominally complex pieces of software that are on par with human quality. For anything outside of basic web apps, we're a long way off.
I mean, so do I, if you think about it like that. I just have a much lower chance of successfully retrieving the correct information.
The comparison I was making with respect to accuracy was that the bot is much more likely to accurately answer fact-based queries, and much less likely to succeed at any task that requires actual 'thinking'. Especially when that task is not particularly common in the training set, such as writing a memory allocator. I can write and debug a simple allocator in half an hour, no worries. I'd be surprised if any of the current LLMs could.
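For concreteness, here's a minimal sketch of the kind of allocator I have in mind: a first-fit free list over a fixed arena, with block splitting and coalescing on free. Everything in it — the arena size, the `my_malloc`/`my_free` names, the first-fit policy — is an arbitrary illustrative choice, not any particular production design.

```c
/* Minimal sketch: first-fit free-list allocator over a static arena. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define ARENA_SIZE (64 * 1024)   /* arbitrary size for illustration */

typedef struct block {
    size_t size;                 /* payload size in bytes */
    int    free;                 /* 1 if available */
    struct block *next;         /* next block in address order */
} block_t;

static _Alignas(max_align_t) uint8_t arena[ARENA_SIZE];
static block_t *head = NULL;

#define ALIGN8(n) (((n) + 7) & ~(size_t)7)

void *my_malloc(size_t n) {
    n = ALIGN8(n);
    if (!head) {                 /* lazy init: one big free block */
        head = (block_t *)arena;
        head->size = ARENA_SIZE - sizeof(block_t);
        head->free = 1;
        head->next = NULL;
    }
    for (block_t *b = head; b; b = b->next) {
        if (!b->free || b->size < n) continue;
        /* split if the remainder can hold a header plus some payload */
        if (b->size >= n + sizeof(block_t) + 8) {
            block_t *rest = (block_t *)((uint8_t *)(b + 1) + n);
            rest->size = b->size - n - sizeof(block_t);
            rest->free = 1;
            rest->next = b->next;
            b->size = n;
            b->next = rest;
        }
        b->free = 0;
        return b + 1;            /* payload starts right after the header */
    }
    return NULL;                 /* out of memory */
}

void my_free(void *p) {
    if (!p) return;
    block_t *b = (block_t *)p - 1;
    b->free = 1;
    /* coalesce with physically adjacent free successors */
    while (b->next && b->next->free &&
           (uint8_t *)(b + 1) + b->size == (uint8_t *)b->next) {
        b->size += sizeof(block_t) + b->next->size;
        b->next = b->next->next;
    }
}

int main(void) {
    void *a = my_malloc(100);
    void *b = my_malloc(200);
    my_free(a);
    void *c = my_malloc(50);     /* should reuse a's freed block (first fit) */
    printf("a=%p c=%p -> %s\n", a, c, a == c ? "reused" : "not reused");
    my_free(b);
    my_free(c);
    return 0;
}
```

A sketch like this only covers the happy path; a real allocator also has to get alignment guarantees, zero-size requests, double frees, and thread safety right, and those edge cases are exactly where I'd expect an LLM to fumble.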