I think one answer is that they'll have moved farther up the chain: agent training is this year, agent-managing-agents training is next year. The bottom-of-the-chain inference could be Qwen or whatever for certain tasks, but you're going to have a hard, delayed time getting the open models to manage this stuff.
Futures like that are why Anthropic and oAI put out stats like how long the agents can code unattended. The dream is "infinite time".