Ideally, tests should only check high-level features and their invariants, so that a new implementation of some part breaks them only if it is actually broken. Unfortunately, many unit tests are written with a lot of baked-in assumptions, effectively testing that you have kept the existing implementation; unit tests written in a naive way tend to end up like this. In such a case you basically have to rewrite both the implementation _and_ all the unit tests, and then the unit tests are in fact making change harder.
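To make the distinction concrete, here is a minimal sketch (the `Stack` class and test names are hypothetical, just for illustration): one test asserts on an internal detail and breaks under any reimplementation, the other asserts only the observable invariant.

```python
class Stack:
    """Toy stack backed by a list; the list is an implementation detail."""

    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()


def test_push_brittle():
    # Brittle: asserts on the private representation, so it fails if
    # Stack is reimplemented (say, as a linked list) even though the
    # observable behaviour is identical.
    s = Stack()
    s.push(1)
    assert s._items == [1]


def test_push_pop_invariant():
    # Robust: asserts only the high-level invariant "pop returns the
    # most recently pushed value", which any correct stack satisfies.
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2
    assert s.pop() == 1
```

The second test survives a rewrite of `Stack`; the first one has to be rewritten along with it.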
Unfortunately, much of the current software development literature more or less encourages writing this kind of unit test: it emphasizes the importance of tests ("legacy code is by definition code without unit tests", "code without 100% test coverage is by definition low quality") while saying little about what a good unit test looks like. This leads to brittle unit test suites full of mocking that are nearly impossible not to break whenever the code changes at all.
Of course good unit tests are not like this, but it means that both you and the GP can be right at times.