That's simple - it is provably wrong. While relatively uncommon, there are plenty of examples that contradict this statement. And it's not about being able to encode the Rosetta Stone - non-scientists mix languages all the time, from Carmina Burana to Blinkenlights. They even coin meaningful portmanteau words and write them with characters from multiple unrelated writing systems, like "заshitано" (see: Latin and Cyrillic scripts in the same single word!)
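You can verify the mixed-script claim with nothing but the Python standard library. This is a crude sketch that infers each character's script from the first word of its Unicode character name (the real script classification lives in the Script property, which `unicodedata` doesn't expose directly, so take this as an approximation):

```python
import unicodedata

def scripts(word):
    # Approximate each character's script by the first word of its
    # Unicode name, e.g. "CYRILLIC SMALL LETTER ZE" -> "CYRILLic".
    found = set()
    for ch in word:
        name = unicodedata.name(ch, "")
        if name:
            found.add(name.split()[0])
    return found

print(scripts("заshitано"))  # {'CYRILLIC', 'LATIN'}
```

One word, two scripts, and any codepage-per-run scheme would have to switch state mid-word to represent it.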
You're missing the point. The basic unit of ASCII v2 (aka 'Unicode') should have been the codepage, not the codepoint. Having a stateful stream of codepage-symbol pairs is not a problem - in practice, all Unicode encodings ended up being stateful anyway, except in a shitty way that doesn't help encode any semantic information.
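To illustrate the statefulness point as I read it: a UTF-8 decoder carries state across bytes inside a multibyte sequence, and a continuation byte is meaningless without it. A minimal Python sketch:

```python
word = "заshitано"
encoded = word.encode("utf-8")

# 'з' occupies two bytes: a lead byte plus a continuation byte.
print("з".encode("utf-8"))  # b'\xd0\xb7'

# Cut the stream after the lead byte and the decoder is stuck
# mid-state: the fragment cannot be decoded on its own.
print(encoded[:1].decode("utf-8", errors="replace"))  # '\ufffd' (replacement char)
```

So the stream is already stateful byte-to-byte; the state just encodes sequence position rather than anything semantically useful like a script or codepage.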
A portmanteau of Russian "засчитано" ("credited", "taken into account", "check!") and English "shit".
The word is a joke and has no well-defined meaning. I've seen it used both as "it counts, but it's shitty" and as a way to give credit for a failure.
Surely that's not the best example, but it's the one I remembered.