Because it's a way to avoid confronting the increasingly unavoidable fact that the AI renaissance DNNs were supposed to usher in is looking less and less impressive. Unsurprising, given that throwing more computing power at neural networks doesn't constitute a fundamental leap forward -- but disconcerting to a community that expected, and promised, far more than is being delivered.
Hold on a second. We're still in the very, very early stages here. We haven't even started to connect those networks together to make hierarchies.
You're speaking like someone watching the Wright brothers testing some of their earliest models, and going "supersonic flight my ass, you guys can't even fly across this football field".
What exactly do you think was 'promised and expected'? Because from here it looks like deep learning has delivered an awful lot more than what anyone expected. No one expected it to beat Go. No one expected it to achieve human level results on problems like image recognition. And no one expected all this to happen in just a few years.
NNs have made measurable and enormous progress in many different AI domains in a very short space of time. There are awesome new applications and improvements coming out every day.
It's easy to say, from the vantage point of hindsight bias, that everything that's happened was predictable. So what exactly do you expect from NNs and AI in the near future? Make some testable predictions.
I actually agree with you, as I feel that deep neural networks have exceeded expectations, but I like the guessing game, so I'll make a few predictions that, who knows, might be exceeded.
Fully autonomous vehicles (as in, all passengers can sleep) with fewer deaths than human drivers in 2020.
Realtime text-to-speech matching top humans, including proper intonation, in 2025.
Fully autonomous computer factories (as in, trucks deliver raw materials in containers at one location, and fetch the computers in containers at another) in 2035.
Optimization problems -- the bread and butter of machine learning for years. DNNs are certainly more powerful than many earlier-generation systems, but it's a quantitative difference, not a qualitative one. A DNN may have more neurons, more synapses, and access to more data, but it's not doing anything genuinely new.
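To make the quantitative-not-qualitative point concrete, here is a minimal sketch (illustrative only, not any particular library's API): a "deep" network is just the same affine-transform-plus-nonlinearity step repeated, so adding layers or widening them means more iterations of the same loop, not a new kind of computation.

```python
import numpy as np

def relu(x):
    # Elementwise rectifier, the standard DNN nonlinearity.
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Run x through an arbitrary stack of dense layers.

    Depth is just the length of the weights list: the computation
    per layer is identical regardless of how many layers there are.
    """
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)
    return x

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))  # one 4-feature input

# A shallow net and a deeper, wider one: same loop, more iterations.
shallow = ([rng.normal(size=(4, 3))], [np.zeros(3)])
deep = ([rng.normal(size=(4, 16)),
         rng.normal(size=(16, 16)),
         rng.normal(size=(16, 3))],
        [np.zeros(16), np.zeros(16), np.zeros(3)])

print(forward(x, *shallow).shape)  # (1, 3)
print(forward(x, *deep).shape)     # (1, 3)
```

Nothing in the deeper version is structurally new; it simply has more parameters and more repetitions of the same operation.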
A lot of hopes seem (to me) to have been pinned on the notion that neural nets (as we currently understand them) are the one true algorithm. This notion seems to have been fueled by the significant success of DNNs for certain (highly specific) problems, and by a (shallow) analogy with the human brain. However, it's becoming increasingly clear that this is not the case -- that an artificial neural net is an artificial neural net, no matter how many GPUs you throw at it.
From what I understand, the current bottlenecks for machine learning are:
- The lack of good data. Machine learning models, and DNNs in particular, perform best with large, labeled datasets. Google has open sourced some, but they (supposedly) keep the vast majority of their training data private.
- Compute resources. Training on these datasets (which can run to terabytes in size) takes a lot of computational power, and only the largest tech companies (e.g. Google, Facebook, Amazon) have the capital to invest in it. Training a neural net can take a solo developer weeks or months, while Google can afford to do it in a day.
There are actually a lot of advances being made in the algorithms, but iteration cycles are long because of these two bottlenecks, and only large tech companies and research institutions have the resources to overcome them. Web development didn't go through a renaissance until web technology became affordable and accessible to startups and hobbyists through reduced server costs (via EC2 and PaaSes like Heroku).
By that analogy, I think we're still in the early days of machine learning and better developer tools and resources could spur more innovation.
I don't have the impression that serious researchers regard them as a One True Algorithm, or as sufficient in their own right for development of human-level AI. Why do you believe that?
I'm not claiming that they do, although AI researchers who focus on DNNs certainly have a vested interest in accentuating their capabilities -- particularly when they have industry ties. I'm referring more to intellectual trends in Silicon Valley at large.