But human language doesn't stop being "much much more challenging" if you decide not to engage.
Sometimes (and this can even be an admirable choice) in some specialist applications it's acceptable to decide you won't embrace the complexity of human language. But in a lot of the places where that's fine, we already restricted ourselves to just the decimal digits, as in telephone numbers or UPC/EAN product codes, so we don't even need ASCII.
In most other places insisting upon ASCII is just an annoying limitation. It's annoying not being able to use your sister's name in the name of a JPEG file, regardless of whether her name is 林鳳嬌 or Jenny Smith, and it jumps out at you when the product you're using is OK with Jenny Smith but not 林鳳嬌.
You might think: well, OK, but there weren't these problems with ASCII; the complexity is Unicode's fault. Think about Sarah O'Connor. That apostrophe will often break people's software without any help from Unicode.
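Here's a minimal sketch of that apostrophe problem, using Python's sqlite3 as a stand-in for whatever store the software actually uses. The name is pure ASCII, and naive string splicing still chokes on it:

```python
import sqlite3

# A name that is pure ASCII yet still breaks naive string handling.
name = "Sarah O'Connor"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT)")

# Naive approach: splicing the name straight into the SQL string.
# The apostrophe terminates the quoted literal early, so the
# statement no longer parses -- no Unicode involved anywhere.
try:
    conn.execute("INSERT INTO people VALUES ('%s')" % name)
    broken = False
except sqlite3.OperationalError:
    broken = True

# Parameterized query: the driver handles quoting, so the same
# name goes in (and comes back out) unharmed.
conn.execute("INSERT INTO people VALUES (?)", (name,))
stored = conn.execute("SELECT name FROM people").fetchone()[0]
```

The point being that handling names correctly was already a care-required job before Unicode existed; Unicode just widened the set of names you have to care about.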
Your sister's name doesn't render in my browser (stable Firefox on Linux 5.6). I'm sure I'm missing a fontpack or something. Again, I'm not saying ASCII is the solution, I'm saying Unicode is much more difficult to get right, and maybe we should call it something other than "plain text", since we already had a generally accepted meaning for that for many years. I'm usually in favor of making a new name for a thing rather than overloading an old name.
Firefox does full font fallback, so this means your system just isn't capable of rendering her name (which, yes, you might be able to fix by installing font packages). If you don't understand Han characters, that's an acceptable situation: the dotted boxes (which I assume rendered instead) alert you that there is something here you can't display properly, but if you know you couldn't understand it even if it were displayed, there's no need to bother.
It really is just plain text. Human writing systems were always this hard, and "for many years" what you had were separate independent understandings of what "plain text" means in different environments, which makes interoperability impossible. Unicode is mostly about having only one "plain text" rather than dozens.
It is not mandatory that your 80x25 terminal learn how to display Linear B; you can't read Linear B, and you probably have no desire to learn it and no interest in any text written in it. But Unicode means your computer agrees with everybody else's computer that it's Linear B, and not a bunch of symbols for drawing Space Invaders, or the manufacturer's logo. If you fix a typo in a document I wrote that has some Linear B in it, your computer doesn't replace the Linear B with question marks or erase the document, because it knows what that text is even if you can't read it and it doesn't know how to display it.
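A small Python sketch of that point: the machine can identify Linear B and carry it through edits byte-for-byte, entirely independently of whether any font on the system can draw it.

```python
import unicodedata

# U+10000 LINEAR B SYLLABLE B008 A -- the first codepoint in the
# Linear B Syllabary block.
ch = "\U00010000"

# Every conforming system agrees on what this character *is*,
# whether or not it can render it on screen.
print(unicodedata.name(ch))  # LINEAR B SYLLABLE B008 A

# Round-tripping through UTF-8 preserves it exactly; editing text
# elsewhere in a document never has to disturb these bytes.
data = ("correction " + ch).encode("utf-8")
assert data.decode("utf-8") == "correction " + ch
```

Displaying the glyph is a font problem; knowing what the character is, and not destroying it, is the interoperability problem Unicode actually solves.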
But I'm not saying we shouldn't engage, I'm just pointing out that the catalog of lil pictures is the easy part of the task.
One way I put it is, imagine if one of the first-class outputs of the Unicode Consortium was standard libraries for different human languages for different computer languages.