And typically longer compile times.
Most variables are split into separate versions, one per assignment, with "phi" nodes merging them where control flow joins, and many more costly optimization steps become possible.
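Roughly what that splitting looks like (the x1/x2/x3 names and the SSA rendering in the comments are just illustrative, not what the Go compiler actually prints):

    func f(c bool) int {
        x := 1     // SSA: x1 = 1
        if c {
            x = 2  // SSA: x2 = 2
        }
        return x   // SSA: x3 = phi(x1, x2); return x3
    }

Each assignment creates a fresh version of x, and the phi at the control-flow join selects whichever version actually reached it, so later passes can treat every version as immutable.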
True, an increase in optimizations will likely mean longer compile times. On the other hand, with better-optimized code, the compiler itself (as it's written in Go) will also run faster, which may offset some of the increase in compile time.
One of the alluring things about SSA form is that many optimizations are much faster to execute on it. The costly part is constructing the SSA form in the first place, which in the standard implementation requires building a dominator tree.
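For anyone curious what that costly step looks like, here is a minimal sketch of the iterative dominator computation in the spirit of Cooper/Harvey/Kennedy's "A Simple, Fast Dominance Algorithm" (the toy Block type and the reverse-postorder numbering are my assumptions, not the Go compiler's code):

    // Block is a toy CFG node; preds holds predecessor indices.
    type Block struct{ preds []int }

    // computeIdom returns the immediate dominator of each block.
    // blocks must be in reverse postorder, with blocks[0] the entry.
    func computeIdom(blocks []*Block) []int {
        idom := make([]int, len(blocks))
        for i := range idom {
            idom[i] = -1 // undefined
        }
        idom[0] = 0 // the entry dominates itself
        for changed := true; changed; {
            changed = false
            for i := 1; i < len(blocks); i++ {
                newIdom := -1
                for _, p := range blocks[i].preds {
                    if idom[p] == -1 {
                        continue // predecessor not yet processed
                    }
                    if newIdom == -1 {
                        newIdom = p
                    } else {
                        newIdom = intersect(newIdom, p, idom)
                    }
                }
                if newIdom != -1 && idom[i] != newIdom {
                    idom[i] = newIdom
                    changed = true
                }
            }
        }
        return idom
    }

    // intersect walks both idom chains toward the entry until they
    // meet; a smaller reverse-postorder number is closer to the entry.
    func intersect(a, b int, idom []int) int {
        for a != b {
            for a > b {
                a = idom[a]
            }
            for b > a {
                b = idom[b]
            }
        }
        return a
    }

Even this "simple" version is a fixed-point iteration over the whole CFG, which is exactly the kind of up-front cost the newer constructions try to avoid.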
You don't need to add every optimization known to man to a compiler; you can keep a few of the important ones and skip the rest. A priori, I'd guess SSA would speed up the compiler, which leaves you a better budget for the more expensive optimizations.
As stated in [1], they use a variant of "Simple and Efficient Construction of Static Single Assignment Form" [2], which does not require a dominator tree (or a liveness analysis).
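The core of [2] is surprisingly small: local value numbering per block, plus phis created lazily when a read reaches a block with several predecessors. A heavily simplified sketch of that idea (assuming a complete CFG and that every variable is defined before it is read, so the paper's block sealing and trivial-phi removal are omitted; all names here are mine):

    type Value interface{}

    // Phi merges one incoming definition per predecessor.
    type Phi struct{ args []Value }

    // Block is a toy CFG node; defs holds the current definition of
    // each variable written in it (the "local value numbering" part).
    type Block struct {
        preds []*Block
        defs  map[string]Value
    }

    func writeVariable(b *Block, name string, v Value) {
        b.defs[name] = v
    }

    func readVariable(b *Block, name string) Value {
        if v, ok := b.defs[name]; ok {
            return v // defined locally, nothing global to consult
        }
        if len(b.preds) == 1 {
            // A single predecessor can never need a phi: recurse upward.
            v := readVariable(b.preds[0], name)
            writeVariable(b, name, v)
            return v
        }
        // Several predecessors: place a phi, and record it *before*
        // filling in its arguments so cycles through loops terminate.
        phi := &Phi{}
        writeVariable(b, name, phi)
        for _, p := range b.preds {
            phi.args = append(phi.args, readVariable(p, name))
        }
        return phi
    }

No dominator tree and no liveness analysis: each read just walks predecessors on demand, and the paper adds on-the-fly removal of phis that turn out to be redundant.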
I think Wirth had this as a rule for his Pascal compiler: if you added an optimization (which takes additional time), it had to speed up the compiler enough that compilation times got no longer.
It depends on the compiler. The great thing about SSA conversion is that you only have to do it once, whereas a classical def/use-chaining analysis may have to be redone many times in an old-style optimizer.
SSA is a conversion of the program into a representation of its data flow. I've used it in multiple compilers and found it to be a big win for easing other analyses (e.g., induction variable recognition becomes trivial) and for reducing bugs due to the update problem.
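To make the induction-variable point concrete: in SSA, a basic induction variable is just a loop-header phi of the shape phi(init, self + step), so recognizing it is a local pattern match instead of a separate data-flow analysis. A toy sketch (the Const/Add/Phi types are illustrative stand-ins, not any real compiler's IR):

    type Value interface{}

    // Toy SSA values.
    type Const struct{ n int64 }
    type Add struct{ x, y Value }
    type Phi struct{ args []Value } // args[0]: initial value, args[1]: back edge

    // basicInductionVar reports whether p matches phi(init, p + step)
    // for a constant step, returning the initial value and the step.
    func basicInductionVar(p *Phi) (init Value, step int64, ok bool) {
        if len(p.args) != 2 {
            return nil, 0, false
        }
        add, isAdd := p.args[1].(*Add)
        if !isAdd {
            return nil, 0, false
        }
        // Accept both p + c and c + p on the back edge.
        if add.x == Value(p) {
            if c, isConst := add.y.(*Const); isConst {
                return p.args[0], c.n, true
            }
        }
        if add.y == Value(p) {
            if c, isConst := add.x.(*Const); isConst {
                return p.args[0], c.n, true
            }
        }
        return nil, 0, false
    }

With def/use chains you would have to trace reaching definitions around the loop to establish the same recurrence; in SSA it is written directly into the IR.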