
Out of curiosity, is this a question you think Google is well-suited to answer^? How many Wikipedia pages will you need to open to determine the answer?

When folks are frustrated at seeing a bizarre, extreme-outlier question touted as "models still can't do _", part of it is because the goalposts have been set far beyond what traditional Google search or Wikipedia are useful for.

^ I spent about five minutes looking for the answer via Google, and the only place I found it was the AI summary. So I would still need to confirm the fact.



Unlike the friendly bot, if I can’t find credible enough sources I’ll stay with an honest “I don’t know”, instead of praising the genius of whoever asked and then making something up.


Sure, but this is a false dichotomy. If I get an unsourced answer from ChatGPT, my response will be "eh, you can't trust this, but ChatGPT thinks x."

And then you can use that to check quickly: does that player have championships mentioned on their wiki?

It's important to flag that some categories are easy for LLMs (facts that haven't changed on Wikipedia in ten years), but inference-only LLMs (no tools) are extremely limited, and you should always treat them as a person saying "I seem to recall x."

Is the UX/marketing deeply flawed? Yes, of course. I also wish an inference-only response appropriately stated its uncertainty, like a human would (e.g., "without googling, my guess is x"). But among technical folks it feels disingenuous to cite "models still can't answer this obscure question" as proof that they're stupid or useless.



