NLP is a distraction. Computers won't be able to do NLP until they understand the real world. Parsing a sentence about snow without knowing what snow is will not be useful for a computer or a human.
Natural Language Processing is a distraction? That's quite the statement, considering the recent advances that have been made. I mean, you don't see the value in a computer being able to perceive meaning when you say: "Siri, please dim the lights by 40%"?
A computer doesn't need to know how you feel about snow to tell you that it's snowing outside, and that you'll probably want to remember your snow chains.
You sound like you think you're disagreeing with rand_r, but all your text is in support of rand_r's point. You use one example that is basically solvable with regexes and in your second paragraph you appear to strongly agree that NLP isn't important at all.
I'm not saying vector spaces are a rich enough structure, but as an example...
When we place the word "snow" into a vector space of words (along with the other words), we're saying something about what that word means relative to the other words, with those relationships stemming from the experience of people in the real world. "Snow" is similar to "ice" along these dimensions and similar to "powder" along those dimensions. Ideally, we want something like "snow = ice - solid + powder". All the possible equations for snow in the embedding give you, essentially, the different ways in which people understand snow.
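To make the "snow = ice - solid + powder" idea concrete, here's a minimal sketch with hand-made toy vectors (real embeddings are learned and have hundreds of dimensions; these 4-d vectors and their dimension readings are entirely made up for illustration):

```python
import numpy as np

# Hypothetical 4-d embeddings; dimensions loosely read as
# (frozen-ness, granularity, wetness, coldness). Purely illustrative.
vecs = {
    "snow":   np.array([0.8, 0.9, 0.6, 0.9]),
    "ice":    np.array([0.9, 0.1, 0.5, 0.9]),
    "solid":  np.array([0.9, 0.0, 0.0, 0.2]),
    "powder": np.array([0.1, 0.9, 0.1, 0.2]),
    "rain":   np.array([0.0, 0.3, 1.0, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: direction match, ignoring vector length.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The analogy arithmetic from the comment above:
query = vecs["ice"] - vecs["solid"] + vecs["powder"]

# Rank the vocabulary by similarity to the composed query vector.
ranked = sorted(vecs, key=lambda w: cosine(query, vecs[w]), reverse=True)
# With these toy numbers, "snow" comes out on top.
```

This is the same arithmetic word2vec-style models are famous for ("king - man + woman ≈ queen"); the point is only that relative positions in the space encode relationships people recognize from the world.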
The utility of these structures for NLP is fundamentally that they embed something of people's understanding of the real world into the representation of the words. So NLP structures must capture something of "understanding" if they're genuinely useful for NLP.
So I agree with the person I was replying to in that you can't have NLP without understanding, because NLP is fundamentally about understanding. But I think they're implying a higher level of understanding than most uses of NLP actually require, and moreover, that it has to be an independent understanding rather than a secondary, derived one.
This is unsupervised learning of word embeddings, and it shows the "topics" related to the word "snow" and how closely they are related. That's a pretty reasonable understanding of the world - certainly enough to build useful things with.
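A toy sketch of how such neighbors fall out of nothing but co-occurrence statistics (the five-sentence "corpus" is invented for illustration; real systems train word2vec/GloVe-style objectives on billions of tokens):

```python
from collections import Counter
from itertools import combinations
import math

# Tiny made-up corpus; a whole sentence stands in for a context window.
corpus = [
    "snow fell on the cold mountain",
    "ice covered the cold lake",
    "snow and ice make winter roads dangerous",
    "the warm sun melted the snow",
    "rain fell on the warm valley",
]

# Count how often word pairs share a sentence (symmetric).
cooc = Counter()
for sent in corpus:
    for a, b in combinations(set(sent.split()), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def row(w):
    # A word's crude "embedding": its co-occurrence counts over the vocab.
    return [cooc[(w, v)] for v in vocab]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Words ranked by how much their contexts overlap with "snow"'s.
neighbors = sorted(
    (w for w in vocab if w != "snow"),
    key=lambda w: cosine(row("snow"), row(w)),
    reverse=True,
)
```

Even at this scale, "ice" ends up closer to "snow" than "rain" does, purely because it shows up in similar contexts; no one told the program anything about weather.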
As someone who works in NLP, I don't think there will be an AI winter. There will be a Gartner "trough of disillusionment", but no winter.
That doesn't mean NLP is solved though.