I wouldn't say "dumb down", but it definitely needs to be able to explain why it took certain lines of reasoning. With deep learning, you have to retrain the whole system with different test cases just to change a minor behavior. But imagine if we could just ask, "Why did you do that? XYZ," and then adjust it: "Oh, gotcha. You can't do that because of ABC", and the AI would have that problem solved. I guess that would be the next step in AI. I think it's called symbolic reasoning.
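To make the contrast concrete, here's a toy sketch (everything in it, the `RuleEngine` class and the example rules, is hypothetical) of a symbolic system where a correction is just one new rule added at runtime, not a retraining run, and every decision carries its own explanation:

```python
# Toy rule-based (symbolic) system: behavior is corrected by adding
# one rule at runtime, and each decision explains which rule fired.

class RuleEngine:
    def __init__(self):
        # Each rule is (name, condition, action); first match wins.
        self.rules = []

    def add_rule(self, name, condition, action):
        # Newer rules (corrections) are prepended so they take priority.
        self.rules.insert(0, (name, condition, action))

    def decide(self, facts):
        for name, condition, action in self.rules:
            if condition(facts):
                # The "why did you do that?" answer comes for free.
                return action, f"because rule '{name}' matched"
        return None, "no rule matched"

engine = RuleEngine()
engine.add_rule("move-forward", lambda f: f.get("path_clear"), "advance")

# Ask "why did you do that?"
action, why = engine.decide({"path_clear": True, "wet_floor": True})
print(action, why)  # advance because rule 'move-forward' matched

# Correct it: "you can't, the floor is wet" is one new rule,
# not a rebuild of the whole system.
engine.add_rule("avoid-wet-floor", lambda f: f.get("wet_floor"), "stop")

action, why = engine.decide({"path_clear": True, "wet_floor": True})
print(action, why)  # stop because rule 'avoid-wet-floor' matched
```

A deep net encodes the equivalent of those rules in weights, so the same one-line correction would mean new training data and another training run.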
Here's a very good article on this: http://dustycloud.org/blog/sussman-on-ai/ (a conversation with Sussman on AI and asynchronous programming).