Optimization problems have been the bread and butter of machine learning for years. DNNs are certainly more powerful than many earlier-generation systems, but it's a quantitative difference, not a qualitative one. A DNN may have more neurons, more synapses, and access to more data, but it's not doing anything genuinely new.
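To make that concrete, here's a minimal, hypothetical sketch (toy linear model in plain NumPy, not anyone's actual training code) of what "training" boils down to -- gradient descent on a loss. A DNN has vastly more parameters and a more elaborate loss, but the loop is the same shape:

```python
# Minimal sketch: training = repeatedly nudging parameters to reduce a loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # toy inputs
y = X @ np.array([1.0, -2.0, 0.5])      # toy targets (linear for simplicity)

w = np.zeros(3)                         # parameters to learn
lr = 0.1                                # learning rate

for _ in range(500):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)   # gradient of mean squared error
    w -= lr * grad                         # gradient descent step

print(w)  # ~[1.0, -2.0, 0.5]: optimization recovers the weights
```

Swap the linear model for a stack of layers and the gradient computation for backpropagation, and you have the core of DNN training -- more scale, same kind of optimization.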
A lot of hopes seem (to me) to have been pinned on the notion that neural nets (as we currently understand them) are the one true algorithm. This notion seems to have been fueled by the significant success of DNNs for certain (highly specific) problems, and by a (shallow) analogy with the human brain. However, it's becoming increasingly clear that this is not the case -- that an artificial neural net is an artificial neural net, no matter how many GPUs you throw at it.
From what I understand, the current bottlenecks for machine learning are:
- The lack of good data. Machine learning in general, and DNNs in particular, perform best with large, labeled datasets. Google has open-sourced some, but they (supposedly) keep the vast majority of their training data private.
- Compute resources. Training on these datasets (which can run to terabytes) takes a lot of computational power, and only the largest tech companies (e.g. Google, Facebook, Amazon) have the capital to invest in it. Training a neural net can take a solo developer weeks or months, while Google can afford to do it in a day.
There are actually a lot of advances being made in the algorithms, but iteration cycles are long because of these two bottlenecks, and only large tech companies and research institutions have the resources to overcome them. Web development didn't go through a renaissance until reduced server costs (via EC2 and PaaS's like Heroku) made the technology affordable and accessible to startups and hobbyists.
By that analogy, I think we're still in the early days of machine learning, and better developer tools and resources could spur more innovation.
I don't have the impression that serious researchers regard them as a One True Algorithm, or as sufficient in their own right for development of human-level AI. Why do you believe that?
I'm not claiming that they do, although AI researchers who focus on DNNs certainly have a vested interest in accentuating their capabilities -- particularly when they have industry ties. I'm referring more to intellectual trends in Silicon Valley at large.