I agree with the other comments that this is a fairly useless article. Perhaps I’m looking too much into this specific example, but I fail to see how the entirety of AI’s success (per this title) is misrepresented due to its inability to provide a horoscope (good example by another comment) that’s specific to some person in question.
We see a lot of articles that swing too far in one direction or the other: "AI will change everything!" just as often as "no, AI is not actually effective/meaningful!". This is the latter. I'm surprised that anyone would genuinely try to use ChatGPT/LLMs this way.
I'm mainly enjoying ChatGPT as a way to distill widespread information from the internet into a pseudo-conversation. It's great for learning new topics - my favorite is generating and explaining code snippets for languages/libraries I'm not familiar with.
> I fail to see how the entirety of AI’s success (per this title) is misrepresented
The title does not claim that the "entirety" of AI's success is misrepresented. It asks "how much" is, and I think that's a fair question. The author does admit that the first paragraph in the example contains valid information, which shows the AI has some real knowledge. It's the second paragraph that is just fluff.
I think that it is valid to ask how much this "fluff" impresses us and leads us to see the AI as being knowledgeable. Perhaps the author does draw too strong of a conclusion, but he still makes a good point.
> my favorite is generating and explaining code snippets for languages/libraries I’m not familiar with.
For me this is one of the more dangerous uses.
Humans are already pretty bad at detecting errors in code. Bertrand Meyer, an expert of some renown in formal methods, couldn't find an error in a one-liner of Eiffel code generated by ChatGPT. What hope do programmers with less training have of recognizing when ChatGPT has given them an incorrect summary?
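To make the point concrete, here is a hypothetical illustration (not the Eiffel one-liner from the anecdote, and written in Python for readability): a one-line function that looks obviously correct on a skim but quietly mishandles an edge case.

```python
def is_leap(year):
    # Looks right at a glance, but has a subtle bug:
    # it ignores the century-year exceptions (divisible by 100
    # but not by 400 is NOT a leap year).
    return year % 4 == 0

# Happy-path checks all pass, so a reader is easily convinced:
print(is_leap(2004))  # True, correct
print(is_leap(2003))  # False, correct
# ...but 1900 was not a leap year, and this returns True anyway.
print(is_leap(1900))
```

If an LLM hands you a snippet like this with a confident explanation attached, nothing in the code screams "wrong" - you only catch it if you already know the edge case.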
What hope do junior programmers have of finding the errors in code they've written themselves?
I work in the code security industry, and there's one truism: people typically write code until it compiles and doesn't return an immediate error, not until it is 'correct'.
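A minimal sketch of that truism (a hypothetical example of mine, not from the article): code that runs cleanly on the one input the author tried, so it gets shipped, even though an unhandled case is sitting right there.

```python
def average(xs):
    # "Works" - no syntax error, no crash on the quick manual test below.
    return sum(xs) / len(xs)

# The author runs this once, sees a sensible number, and moves on:
print(average([1, 2, 3]))  # 2.0

# But nobody tried the empty list, which raises ZeroDivisionError:
# average([])
```

Compiling (or running once without an exception) is a much weaker bar than correctness, and that's the bar most code is actually written to.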