Or (4) LLMs simply do not work properly for many use cases, in particular where large volumes of training data don't exist in their corpus.
And in these scenarios, rather than say "I don't know," they will gaslight you over and over with incoherent answers.
But sure, condescendingly blame the user for their ignorance and inability to understand or use the tool properly. Or call their criticism low-effort.