Hastening security vendor response is usually the most important reason. Software vendors, especially corporate ones, will happily keep a known vulnerability under embargo for years if you let them.
I'm not asking why vulnerabilities should be published, but rather why they should be published before an update is in place. Even the page you linked explains the importance of waiting 30 days after a fix is in place, and even allows a 14-day extension to accommodate update cycles.
In the case of the Chrome update, the fix is rolling out over the coming "days/weeks", yet you originally complained about the vulnerability details not being public, which is what prompted my question.
Google has a history of releasing vulnerability details and PoCs before updates could be rolled out; CVE-2020-17087 was perhaps the worst example.
The two-week grace period applies to the run-of-the-mill 90-day disclosure window, but for actively exploited bugs the extension is at most three days.
After digging deeper, it appears they had the fix rolling out 6 days after the report came in, so they're within their own deadline, I suppose. Their statement about publishing the details doesn't mention releasing them within a month the way their own projects would, though (https://www.bleepingcomputer.com/news/google/google-fixes-an...):
> "Access to bug details and links may be kept restricted until a majority of users are updated with a fix," Google said. "We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven't yet fixed."
Google's inconsistencies when it comes to disclosure timelines irk me. "Wait until all third parties have also updated their software" isn't a luxury Google extends to others when it's the one finding the bugs. I'm all for swift disclosure timelines and pressure on manufacturers, but every Google team seems to have its own rules and guidelines, written to serve itself rather than security in general.