Will humans be able to keep up with the depth of analysis these AIs will have, or will the AI have to dumb down its thinking for us to grasp it?
More generally, scientists using AI for research will probably have to do research on the research to understand what the AI's discoveries mean. Maybe they mean something we can't grasp at all, in which case they go completely over our heads, like ants trying to learn the finer points of financial markets. We will probably have to learn new concepts, and even new languages designed by the AI, to convey the meaning.
I wouldn't say "dumb down", but it would definitely need to explain why it took certain lines of reasoning. With deep learning, you have to rebuild the whole system with different test cases to change even a minor behavior. But imagine if we could just ask "Why did you do that?", get back "XYZ", and adjust it: "Oh, gotcha. You can't do that, because of ABC", and the AI incorporates that correction on the spot. I guess that would be the next step in AI. I think it's called symbolic reasoning.
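To make the contrast concrete, here's a minimal sketch of that idea: an opaque learned model with an editable rule layer on top, so "you can't because of ABC" becomes one appended rule instead of a retraining cycle. All the names here (`learned_policy`, `Rule`, `decide`) are made up for illustration, not any real system's API.

```python
# Sketch: a symbolic rule layer over an opaque learned model.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A human-readable constraint the AI must respect."""
    reason: str                      # the "ABC" explanation
    violates: Callable[[str], bool]  # does this action break the rule?

def learned_policy(situation: str) -> str:
    # Stand-in for a deep-learning model: changing its behavior would
    # normally mean rebuilding/retraining with different test cases.
    return "take_shortcut" if "urgent" in situation else "follow_procedure"

rules: list[Rule] = []

def decide(situation: str) -> str:
    action = learned_policy(situation)
    for rule in rules:
        if rule.violates(action):
            # "Why did you do that?" -> the rule's reason explains the override.
            print(f"Blocked '{action}': {rule.reason}")
            return "follow_procedure"
    return action

print(decide("urgent delivery"))  # -> take_shortcut

# "Oh, gotcha. You can't because of ABC" becomes one added rule,
# not a full retraining cycle:
rules.append(Rule(
    reason="shortcuts cross private property (ABC)",
    violates=lambda a: a == "take_shortcut",
))
print(decide("urgent delivery"))  # -> follow_procedure
```

The point of the sketch is just the shape of the fix: the correction is a legible, inspectable rule you can add, read back, or delete, instead of an opaque weight update.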