A nice thing about AI is that it can write far more unit tests than a human could (or would) in a short span of time, and it can often even fix the issues it encounters on its own. The ideal workflow is not just having the AI do code review, but also having it write and run a batch of tests that confirm the code behaves as specified (assuming a clear spec).
If having too many unit tests slows down future refactoring (or if, say, the AI writes tests that rely on implementation details), the extra AI-written tests can simply be thrown away once the review process is complete.
I love having loads of unit tests that get regenerated whenever they're an inconvenience. There's a fantastic sense of progress from the size of the diff put up for review, plus you get to avoid writing boring old-fashioned tests. Really cuts down on the time wasted understanding the change you're making, and leaves one a rich field of future work to enjoy.
You shouldn't need to write unit tests to understand the change you're making if you wrote a sufficiently detailed specification beforehand. Now, writing a sufficiently detailed spec is itself an art and a skill that takes practice, but ultimately when mastered it's much more efficient than writing a bunch of boilerplate tests that a machine's now perfectly capable of generating by itself.
Don't you have to review the tests to make sure they actually meet the spec and cover all its cases anyway? It feels a little fragile to have less oversight there, compared to being able to talk to whoever wrote the test cases, or being that person yourself.
Programming today still has "cruft", unit tests being an example. The Platonic ideal is to have AI reduce the cruft so engineers can focus on creativity and problem solving. In practice, AI ends up taking over the creative bits as people prompt at higher levels of abstraction.
Unit tests aren’t cruft, unless you’re blindly adding tests. They’re often the easiest things to write, since the structures are all the same: you can copy-paste code, add a harness, … (see the sketch below).
If writing tests is difficult, that’s often a clear indication that your code has an architectural issue. If writing tests is tedious, that can mean you’re not familiar with the tooling, or have no clear idea of the expected input/output ranges.
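To illustrate the "structures are all the same" point, here's a minimal sketch with a hypothetical `clamp` function; every test is the same pick-input, call, assert skeleton, so each new case is mostly copy-paste:

```python
import unittest

# Hypothetical function under test; the point is the repeated
# shape of the tests, not the function itself.
def clamp(value, low, high):
    return max(low, min(value, high))

class TestClamp(unittest.TestCase):
    # Each case follows the same skeleton: pick an input, call the
    # function, assert on the output. Once the first case exists,
    # the rest are copy-paste-and-edit.
    def test_below_range(self):
        self.assertEqual(clamp(-5, 0, 10), 0)

    def test_within_range(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_above_range(self):
        self.assertEqual(clamp(15, 0, 10), 10)

if __name__ == "__main__":
    unittest.main()
```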
You don't need AI to generate a bunch of unit tests; you can just use a property-based testing framework (after defining the properties to test) to randomly generate a bazillion inputs to your tests.
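For anyone who hasn't tried this, here's a minimal sketch using Hypothesis (a Python property-based testing framework, https://hypothesis.readthedocs.io/), run under pytest. The properties chosen here (sorting is idempotent and preserves length) are just illustrative:

```python
from hypothesis import given, strategies as st

# Hypothesis generates many random input lists per run and shrinks
# any failing case down to a minimal counterexample.
@given(st.lists(st.integers()))
def test_sort_properties(xs):
    once = sorted(xs)
    # Property 1: sorting twice gives the same result as sorting once.
    assert sorted(once) == once
    # Property 2: sorting never adds or drops elements.
    assert len(once) == len(xs)
```

You write the property once and the framework does the work of finding inputs that break it, which is roughly the coverage-for-free argument without involving an AI at all.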