In human language I think physical reality is always a few layers out. Language is social, first and foremost, and naming something is not neutral. We can hardly refer to single objects directly; we mostly do it through their class membership, which always, always includes a whole range of associations, reductions, metaphors, etc. that are cultural.
In human language I think physical reality is always a few layers out.
Yes, the point is you can neglect physical (and logical) reality for a while in a stream of sentences. But not forever, and that's where current NLP's output has its limits. At a simple level, a stream of glittering adjectives can be piled onto a thing and just add up to "desirable" - unless those adjectives go over a threshold and contradict each other, and then the description can get tagged by the brain as a bit senseless.
It seems like at this point, there's no way to distinguish the coherent comments of a bot from a person's. The bot could have written sentence X for just about any X. It's just that bots can't sustain a stream of logical claims that are consistent with each other.
So it's easier to demonstrate that a paragraph was written by a bot, i.e., that a paragraph makes no sense, than to demonstrate that a paragraph was not written by a bot - since both humans and bots write some sensible paragraphs.
Still, I'm egotistic enough to think a bot couldn't come up with that argument, though I could be wrong.
Much as I'm assuming the person you're responding to was joking, I've encountered a number of comments/commenters where I felt the same way I felt about GPT output.
The best way I can describe the feeling is that it reminds me of conversations and friendships I've had with schizophrenics, people in the process of having a psychotic breakdown, and people with Alzheimer's.
There's a feeling that what they're saying is not entirely nonsensical, a feeling of 'catching up' to what they're trying to say (akin to translating from a language one isn't too proficient in). But reflecting on the conversation afterward, I find myself wondering how much I managed to understand what they were trying to convey, and how much was just my brain trying to make sense of something that ultimately doesn't make any.
'Understanding' or 'communication' aside, I've often valued these kinds of conversations because they tickle the more free-associative side of my own thinking, and the results, however I/we got there, were useful to me.
As a result, I'm much more interested in how these developments in 'AI' might augment this creative process than in how they might convincingly appear human. Not that the latter isn't interesting too, though.