This sounds interesting. What about the agent behavior itself? How does it decide how to approach a problem, what to show the user along the way, and when to stop? Are these things you have attempted to grapple with in your framework?
2. A way to create specialised subagents that can use their own tools or even their own model. The main agent can delegate to a subagent exposed as a tool. Subagents don’t get confused because they have their own context window, tools, and even models (mix and match a remote LLM with a local LLM if needed); see the first sketch after this list.
3. Don’t expose every tool from the MCP servers you’ve added. Filter them down and select only the ones most relevant to the problem you’re trying to solve; see the second sketch below.
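To make the subagent point concrete, here is a minimal sketch of the subagent-as-tool pattern in PydanticAI’s documented agent-delegation style. The model names, prompts, and the `summarize` tool are illustrative placeholders, not part of the framework:

```python
from pydantic_ai import Agent, RunContext

# The subagent gets its own model, prompt, and context window, so it
# can't be confused by the main agent's history. A local model could be
# substituted here via any OpenAI-compatible endpoint.
summarizer = Agent(
    'anthropic:claude-3-5-haiku-latest',
    system_prompt='Summarize the given text in three bullet points.',
)

main_agent = Agent(
    'openai:gpt-4o',
    system_prompt='You are a research assistant. Delegate summarization.',
)

@main_agent.tool
async def summarize(ctx: RunContext[None], text: str) -> str:
    """Delegate to the summarizer subagent, exposed here as a plain tool."""
    result = await summarizer.run(text, usage=ctx.usage)  # shared usage tracking
    return result.output  # `.data` on older PydanticAI versions

result = main_agent.run_sync('Research topic X and summarize your findings.')
print(result.output)
```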
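And a hedged sketch of the tool-filtering idea. For simplicity this uses plain Python tools via PydanticAI’s `tools=` parameter rather than MCP, but the principle is the same: only wire up the tools (or MCP servers) the task actually needs. The tool functions and the tag-based selector are hypothetical:

```python
from pydantic_ai import Agent

def read_file(path: str) -> str:
    """Return the contents of a file."""
    with open(path) as f:
        return f.read()

def run_sql(query: str) -> str:
    """Placeholder SQL runner."""
    return f'rows for: {query}'

def send_email(to: str, body: str) -> str:
    """Placeholder email sender."""
    return f'sent to {to}'

ALL_TOOLS = {'files': read_file, 'sql': run_sql, 'email': send_email}

def select_tools(task_tags: set[str]) -> list:
    """Keep only the tools relevant to this task instead of exposing all."""
    return [fn for tag, fn in ALL_TOOLS.items() if tag in task_tags]

agent = Agent(
    'openai:gpt-4o',
    tools=select_tools({'files', 'sql'}),  # email tool deliberately excluded
    system_prompt='Answer using the file and SQL tools only.',
)
```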
Other than AgentBuilder, CustomTool, HigherOrderTool, MetaTool, and SubagentBuilder, the framework does not try to control PydanticAI’s main agent behaviour. The high-level approach is to use fewer, more relevant tools and let LLM orchestration and prompt-level tool references drive the rest; a sketch of that follows below. In practice this has been more reliable and predictable for a given agent-based problem.
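To illustrate what “prompt tool references” means here, a small sketch where the system prompt names the tools and says when to use them, leaving orchestration to the model rather than to framework control flow. The tool names and prompt are made up for the example:

```python
from pydantic_ai import Agent

agent = Agent(
    'openai:gpt-4o',
    system_prompt=(
        'You have two tools: `search_docs` to look up documentation and '
        '`condense` to shorten long text. Prefer `search_docs` first, '
        'call `condense` on long results, and stop once you can answer.'
    ),
)

@agent.tool_plain
def search_docs(query: str) -> str:
    """Placeholder documentation search."""
    return f'docs matching {query!r}'

@agent.tool_plain
def condense(text: str) -> str:
    """Placeholder summarizer."""
    return text[:200]
```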