Yes. I don't consider myself an expert, but reading this post just makes me think it's not even wrong. It reads like self-contradictory gibberish. I worry that a well-trained ML tool could generate a million such posts of equally unlucid commentary without substance. Whether the purpose of such discussion is to slow down understanding and scientific inquiry, I don't know, but the confusion it sows appears to have that effect.
Chess AIs still can't play like a human (though people are working on it); they were designed to solve the game, not to imitate us.
The easiest way for a computer to pretend to write like a human is to say as little as possible, using tiny contractions and poetic words.
Some humans do this too; if you get paid by the word, it makes sense.
The OP's style fails on this topic because it's so technical.
GPT-3 wasn't even designed to mess with our heads, but it does; wait till someone designs it to. That's very possible, because it's not about intelligence: making it worse is easier than making it better. If you use engagement as a measure of 'success', you can evolve GPT-3 quickly into a worse nightmare. (It is already a nightmare.)
Exactly. I didn't think it was auto-generated, but as you say, it's exactly the sort of nonsense that could be generated... Of course, you can have more than a million GPT monkeys, so one of them might actually come up with something clever!