So, I think many people in our sector have a progress bias from years of Moore's law etc., and operate under the assumption that anything in tech that displays problems (in this case "incorrect information") will just resolve with time and progress.
People outside of tech don't necessarily think this way. So hearing that "it produces incorrect answers" is kind of a deal breaker, no?
Who is right in this case? I actually think the LLM approach has limits we could hit the wall of: that technique, at least, may never get past the "I'm just making shit up" problem, and the skeptics may in fact be quite right.
LLMs are exciting as an automatic language content generation tool: they chain words together in ways that sound human, and extract reasoning patterns that look like human reasoning. But they're not reasoning; it doesn't take much to trip them up in basic argumentation. Because they look like they're "thinking", though, some people with a tech-optimist bias get excited and just assume the problems will resolve themselves. They could be very wrong; there could in fact be very strong limits to this approach.
... More worrisome is if LLMs become omnipresent despite having this flaw, and we just accept bullshit from computers the way we seemingly now accept complete bullshit from politicians and businessmen....
> People outside of tech don't necessarily think this way. So hearing that "it produces incorrect answers" is kind of a deal breaker, no?
I think it's a deal-breaker for many people inside tech too :-)
It looks like it may be useful for doing grunt work in a field where you are an expert and can check and correct anything it produces.
Where people expect it to be useful, though, is in providing them with information they do not already know; and the fact that you cannot trust anything it says makes it unusable for that case.
Yep. Copilot, for example, is great for producing grunt code, filling in the blanks. But it's really crappy at anything that requires reasoning through a problem.
It is our responsibility as tech professionals to recognize this and explain it to laymen; otherwise we're in trouble.
I've said it before, and I'll say it again: it's really, really bad that these systems are made to speak in the first person, that they're often given "names", that they use human voices, and that they project authority. This is irresponsible engineering from a social and ethical POV, and our "profession", such as it is, should be taken to task for it.